
digitalmars.D - Moving to D

reply Adrian Mercieca <amercieca gmail.com> writes:
Hi everyone,

I am currently mulling if I should be adopting D as my (and subsequently 
my company's) language of choice.

We have great experience/investment in C++, so D seems - from what I've 
seen so far - as the logical step; D seems to me to be as C++ done right.
I'm also looking at Go in the process, but Go seems to be more of a 'from 
C' progression, whilst D seems to be the 'from C++' progression.

I am only worried about 2 things though - which I've read on the net:

1. No 64 bit compiler
2. The Phobos vs Tango issue: is this resolved now? This issue represents 
a major stumbling block for me.

Any comments would be greatly appreciated.

Thanks.
Jan 02 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Adrian Mercieca:

Welcome here.

 We have great experience/investment in C++, so D seems - from what I've 
 seen so far - as the logical step;
Maybe.
 D seems to me to be as C++ done right.
"C++ done right" was one of the main purposes for D design :-)
 I'm also looking at Go in the process, but Go seems to be more of a 'from 
 C' progression, whilst D seems to be the 'from C++' progression.
Go and D are quite different. You will probably need only a short time to find which of the two suits you better. There is also C# with Mono.
 I am only worried about 2 things though - which I've read on the net:
There are other things to be worried about :-)
 1. No 64 bit compiler
It's in development for Linux. It will come; it already compiles some code.
 2. The Phobos vs Tango issue: is this resolved now? This issue represents 
 a major stumbling block for me.
The Phobos vs Tango issue is essentially a D1 issue. If you are interested in D2 then Phobos is going to be good enough. Bye, bearophile
Jan 02 2011
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Adrian Mercieca wrote:
 1. No 64 bit compiler
The 64 bit dmd compiler (for Linux) is nearing alpha stage.
 2. The Phobos vs Tango issue: is this resolved now? This issue represents 
 a major stumbling block for me.
Tango does not exist for D2.
Jan 02 2011
prev sibling next sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Adrian Mercieca wrote:

 Hi everyone,
 
 I am currently mulling if I should be adopting D as my (and subsequently
 my company's) language of choice.
 
 We have great experience/investment in C++, so D seems - from what I've
 seen so far - as the logical step; D seems to me to be as C++ done right.
 I'm also looking at Go in the process, but Go seems to be more of a 'from
 C' progression, whilst D seems to be the 'from C++' progression.
 
 I am only worried about 2 things though - which I've read on the net:
 
 1. No 64 bit compiler
 2. The Phobos vs Tango issue: is this resolved now? This issue represents
 a major stumbling block for me.
 
 Any comments would be greatly appreciated.
 
 Thanks.
64-bit support is the main focus of dmd development at the moment. I take it that you would first evaluate D for a while; possibly 64-bit support will have arrived by the time you are ready and need it. gdc development is also going strong.

As for Tango vs Phobos, the situation now is that most development in the previous version of D (released circa 2007, iirc) is done with Tango. There is also a fine 64-bit compiler for D1, LDC. The feature set of D1 is frozen, and significant (some backwards-incompatible) changes have been made since. There isn't any sign that Tango will be ported to D2, and Phobos is shaping up to be a fine library for D2. Some parts of Phobos are still in flux, though other parts are more stable.

Perhaps you'll find this thread about experiences with D worth a read: http://thread.gmane.org/gmane.comp.lang.d.general/45993
Jan 02 2011
parent reply bioinfornatics <bioinfornatics fedoraproject.org> writes:
LDC exists for D2: https://bitbucket.org/prokhin_alexey/ldc2
Likewise for Tango, a port to D2 exists, though the job is not done: git clone
git://supraverse.net/tango.git
Any help is welcome.
Jan 02 2011
parent reply Adrian Mercieca <amercieca gmail.com> writes:
On Sun, 02 Jan 2011 11:21:38 +0000, bioinfornatics wrote:

 LDC exist for D2: https://bitbucket.org/prokhin_alexey/ldc2 Same for
 tango a port to D2 exist, the job is not done: git clone
 git://supraverse.net/tango.git any help are welcome
Geez, that was quick! I see that the community is very, very alive. Ok - that clears up the issues re 64-bit and Phobos vs Tango; guess Phobos is the way to go with D2. Thanks a lot for your responses - very much appreciated. - Adrian.
Jan 02 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/2/11 6:44 AM, Adrian Mercieca wrote:
 On Sun, 02 Jan 2011 11:21:38 +0000, bioinfornatics wrote:

 LDC exist for D2: https://bitbucket.org/prokhin_alexey/ldc2 Same for
 tango a port to D2 exist, the job is not done: git clone
 git://supraverse.net/tango.git any help are welcome
Geez! that was quick! I see that the community is very, very alive. Ok - that clears the issues re 64bit and Phobos vs Tango; Guess Phobos is the way to go with D2. Thanks a lot for your responses - very much appreciated. - Adrian.
I also recommend reading Adam Ruppe's recent posts. His tips on getting great work done in D in spite of its implementation's current imperfections are very valuable. Andrei
Jan 02 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adrian Mercieca" <amercieca gmail.com> wrote in message 
news:ifpj8l$lnm$1 digitalmars.com...
 Hi everyone,

 I am currently mulling if I should be adopting D as my (and subsequently
 my company's) language of choice.

 We have great experience/investment in C++, so D seems - from what I've
 seen so far - as the logical step; D seems to me to be as C++ done right.
 I'm also looking at Go in the process, but Go seems to be more of a 'from
 C' progression, whilst D seems to be the 'from C++' progression.
Personally, I love D and can't stand Go (the lack of exceptions, generics, metaprogramming and decent memory-access are deal-breakers for me, and overall it seems like a one-trick pony - it has the interesting goroutines and that's about it). But since this is the D newsgroup you can probably expect we'll be bigger D fans here ;)
 I am only worried about 2 things though - which I've read on the net:

 1. No 64 bit compiler
64-bit code generation is on the way and is Walter's top priority. In the meantime, I would recommend taking a good look at whether it really is necessary for your company's software. Certainly there are many things that benefit greatly from 64-bit, but even as "in-vogue" as 64-bit is, most things don't actually *need* it. And there are still plenty of times when 64-bit won't even make any real difference anyway. But regardless, 64-bit is absolutely on the way and is very high priority. In fact, AIUI, the basic "Hello World" has been working for quite some time now.
 2. The Phobos vs Tango issue: is this resolved now? This issue represents
 a major stumbling block for me.
If you use D2, there is no Tango. Just Phobos. And there are no plans for Tango to move to D2. If you use D1, Tango is really the "de facto" std lib because D1's Phobos is extremely minimal. (D1's Phobos was created way back before there was a real Phobos development team and Walter had to divide his time between language and library, and language was of course the higher priority.) So no, it's really not the issue it's been made out to be.
Jan 02 2011
parent reply bioinfornatics <bioinfornatics fedoraproject.org> writes:
There is a D2 port of Tango. It is not finished; take the source here: git clone
git://supraverse.net/tango.git
The job is almost done, and anyone can help:
build it with a D2 compiler and fix the errors.
Jan 02 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 02 Jan 2011 20:34:40 +0200, bioinfornatics  
<bioinfornatics fedoraproject.org> wrote:

 they are a D2 port for tango. It is not done. take source here: git  
 clone git://supraverse.net/tango.git
 The job is almost done. everyone can do this job.
 Take a D2 compiler build and fix error
How many people are working on this port? How many people will be interested in using it, considering that a direct port won't use many of D2's features (why not just use D1)? Will this port be around in 1 year? 5 years? Will it have the same kind of momentum as the original D1 version, with as many developers working on it, fixing bugs etc.? Will the API always stay in sync with the developments in the original D1 version? What about all the existing documentation, tutorials, even book(s)? Sorry, having more options is a good thing, but I think there is a lot more to a real "Tango for D2" than just someone fixing the code so it compiles and works. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 03 2011
parent reply Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
 How many people are working on this port? How many people will be interested
 in using it, considering that a direct port won't use many of D2's features
 (why not just use D1)? Will this port be around in 1 year? 5 years? Will it
 have the same kind of momentum as the original D1 version, with as many
 developers working on it, fixing bugs etc.? Will the API always stay in sync
 with the developments in the original D1 version? What about all the
 existing documentation, tutorials, even book(s)?
There aren't a lot of additions to D1 Tango nowadays, partly because people seem to have other things to do, partly because most of it already works pretty well. That said, I think the D2 version of Tango will be a one-time fork. Regarding how many people are working on the D2 fork, I think it's quite few (AFAICT only Marenz). The general consensus in Tango has been to wait for D2 to be "finalized" before investing effort into porting.
 Sorry, having more options is a good thing, but I think there is a lot more
 to a real "Tango for D2" than just someone fixing the code so it compiles
 and works.
Agreed, but it doesn't all have to happen at day 1. Just being able to port Tango apps over to D2 with minimal fuss is valuable in itself. Anyways, IMHO one of the most important advances in D2 is the separation of the runtime from the system library, such that Phobos and Tango can co-exist more easily, reducing fragmentation.
Jan 03 2011
parent Trass3r <un known.com> writes:
 Agreed, but it doesn't all have to happen at day1. Just being able to
 port Tango-apps over to D2 with minimal fuzz would is valuable in
 itself.

 Anyways, IMHO I think one of the most important advances in D2, is the
 separation of runtime from system-library, such that Phobos and Tango
 can co-exist more easily, reducing fragmentation.
So true.
Jan 03 2011
prev sibling next sibling parent reply Adrian Mercieca <amercieca gmail.com> writes:
Hi,

One other question....

How does D square up, performance-wise, to C and C++ ?
Has anyone got any benchmark figures?

How does D compare in this area?

Also, is D more of a Windows oriented language?
Do the Linux and OSX versions get as much attention as the Windows one?

Thanks.
Adrian.

On Sun, 02 Jan 2011 10:15:49 +0000, Adrian Mercieca wrote:

 Hi everyone,
 
 I am currently mulling if I should be adopting D as my (and subsequently
 my company's) language of choice.
 
 We have great experience/investment in C++, so D seems - from what I've
 seen so far - as the logical step; D seems to me to be as C++ done
 right. I'm also looking at Go in the process, but Go seems to be more of
 a 'from C' progression, whilst D seems to be the 'from C++' progression.
 
 I am only worried about 2 things though - which I've read on the net:
 
 1. No 64 bit compiler
 2. The Phobos vs Tango issue: is this resolved now? This issue
 represents a major stumbling block for me.
 
 Any comments would be greatly appreciated.
 
 Thanks.
Jan 04 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Adrian Mercieca:

 How does D square up, performance-wise, to C and C++ ?
 Has anyone got any benchmark figures?
DMD has an old back-end; it doesn't use SSE (or AVX) registers yet (the 64-bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too. I've seen DMD programs slow down if you nest two foreach loops inside each other. There is a collection of different slow microbenchmarks. But LDC1 is able to run D1 code that looks like C about as fast as C, or sometimes a bit faster.

DMD2 uses thread-local memory by default, which in theory slows code down a bit if you use global data, but I have never seen a benchmark that shows this slowdown clearly (and there is __gshared too, but sometimes it seems a placebo).

If you use higher-level constructs your program will often go slower. Often one of the most important things for speed is memory management. D encourages you to heap-allocate a lot (class instances are usually on the heap), and this is very bad for performance, also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's largely a matter of self-discipline while you program.
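bearophile's stack-vs-heap advice can be illustrated with a tiny D sketch (the type names here are made up for the example): a struct is a value type that lives on the stack by default, while a class instance goes on the GC heap.

```d
import std.stdio;

struct PointS { double x = 0, y = 0; } // value type: stack-allocated by default
class  PointC { double x = 0, y = 0; } // reference type: lives on the GC heap

void main()
{
    PointS a = PointS(1, 2); // no GC allocation at all
    auto b = new PointC;     // one GC heap allocation
    b.x = 3;
    writeln(a.x + b.x);      // prints 4
}
```

Preferring structs (or pools and stack allocation) over `new`-ed classes in hot paths is exactly the Pascal/Ada-style self-discipline described above.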
 Also, is D more of a Windows oriented language?
 Do the Linux and OSX versions get as much attention as the Windows one?
The Windows version is receiving enough attention; it's not ignored by Walter. But I think that for some time the 64-bit version will not be available for Windows. Bye, bearophile
Jan 05 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:ig1d3l$kts$1 digitalmars.com...
 Adrian Mercieca:

 How does D square up, performance-wise, to C and C++ ?
 Has anyone got any benchmark figures?
DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (64 bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too. I've seen DMD programs slow down if you nest two foreach inside each other. There is a collection of different slow microbenchmarks. But LDC1 is able to run D1 code that looks like C about equally fast as C or sometimes a bit faster. DMD2 uses thread local memory on default that in theory slows code down a bit if you use global data, but I have never seen a benchmark that shows this slowdown clearly (an there is __gshared too, but sometimes it seems a placebo). If you use higher level constructs your program will often go slower. Often one of the most important things for speed is memory management, D encourages to heap allocate a lot (class instances are usually on the heap), and this is very bad for performance, also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's a lot a matter of self-discipline while you program.
OTOH, the design of D and Phobos2 strongly encourages fast techniques such as array slicing, pre-computation at compile time, and appropriate use of things like caching and lazy evaluation. Many of these things can probably be done in C/C++, technically speaking, but D makes them far easier and more accessible, and thus more likely to actually get used. As an example, see how D's built-in array slicing helped Tango's XML lib beat the snot out of other languages' fast-XML libs: http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/ - and look at the two benchmarks the first paragraph links to.
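The zero-copy slicing behind that benchmark can be sketched in a few lines of D (the hard-coded indices are purely for illustration; a real parser computes them while scanning the buffer):

```d
import std.stdio;

void main()
{
    string xml = "<name>Walter</name>";

    // a D slice is just a (pointer, length) view into the original
    // buffer: taking one copies nothing and allocates nothing
    string tag  = xml[1 .. 5];  // "name"
    string text = xml[6 .. 12]; // "Walter"

    writeln(tag, "=", text);    // prints name=Walter
}
```

Because the parser never copies character data, throughput stays close to the cost of scanning the input once.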
 Also, is D more of a Windows oriented language?
 Do the Linux and OSX versions get as much attention as the Windows one?
Linux, Windows and OSX are all strongly supported. Sometimes OSX might lag *slightly* in one thing or another, but that's only because there aren't nearly as many people using D on Mac and giving it a good workout. And even at that, it's still only gotten better since Walter got his own Mac box to test on. And Linux is maybe *slightly* ahead of even Windows because, like bearophile said, it'll get 64-bit support first, and also because the Linux DMD uses the standard Linux object-file format while Windows DMD is still using a fairly uncommon object-file format (but that only matters if you want to link object files from different compilers, and if you do want to, I think there are object file converters out there). But yea, overall, all of the big 3 OSes get plenty of attention.
Jan 05 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/5/11, Nick Sabalausky <a a.a> wrote:
 And Linux is maybe *slightly* ahead of even Windows because, like bearophile
 said, it'll get 64-bit support first..
I wonder if the reason for that is Optlink (iirc it doesn't support 64bit even for DMC, right?).
Jan 05 2011
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, January 05, 2011 09:59:08 Andrej Mitrovic wrote:
 On 1/5/11, Nick Sabalausky <a a.a> wrote:
 And Linux is maybe *slightly* ahead of even Windows because, like
 bearophile said, it'll get 64-bit support first..
I wonder if the reason for that is Optlink (iirc it doesn't support 64bit even for DMC, right?).
I believe that it's that, and the fact that apparently the 64-bit stuff on Windows is very different from the 32-bit stuff, whereas on Linux, for the most part, it's the same. So it's a much easier port. Of course, Walter would know the specifics on that better than I would. - Jonathan M Davis
Jan 05 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-05 18:37, Nick Sabalausky wrote:
 "bearophile"<bearophileHUGS lycos.com>  wrote in message
 news:ig1d3l$kts$1 digitalmars.com...
 Adrian Mercieca:

 How does D square up, performance-wise, to C and C++ ?
 Has anyone got any benchmark figures?
DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (64 bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too. I've seen DMD programs slow down if you nest two foreach inside each other. There is a collection of different slow microbenchmarks. But LDC1 is able to run D1 code that looks like C about equally fast as C or sometimes a bit faster. DMD2 uses thread local memory on default that in theory slows code down a bit if you use global data, but I have never seen a benchmark that shows this slowdown clearly (an there is __gshared too, but sometimes it seems a placebo). If you use higher level constructs your program will often go slower. Often one of the most important things for speed is memory management, D encourages to heap allocate a lot (class instances are usually on the heap), and this is very bad for performance, also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's a lot a matter of self-discipline while you program.
OTOH, the design of D and Phobos2 strongly encourages fast techniques such as array slicing, pre-computation at compile-time, and appropriate use of things like caching and lazy evaluation. Many of these things probably can be done in C/C++, technically speaking, but D makes them far easier and more accessable, and thus more likely to actually get used. As an example, see how D's built-in array slicing helped Tango's XML lib beat the snot out of other language's fast-XML libs: http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/ - and look at the two benchmarks the first paragraph links to.
 Also, is D more of a Windows oriented language?
 Do the Linux and OSX versions get as much attention as the Windows one?
Linux, Windows and OSX are all strongly supported. Sometimes OSX might lag *slightly* in one thing or another, but that's only because there aren't nearly as many people using D on Mac and giving it a good workout. And even at that, it's still only gotten better since Walter got his own Mac box to test on. And Linux is maybe *slightly* ahead of even Windows because, like bearophile said, it'll get 64-bit support first, and also because the Linux DMD uses the standard Linux object-file format while Windows DMD is still using a fairly uncommon object-file format (but that only matters if you want to link object files from different compilers, and if you do want to, I think there are object file converters out there). But yea, overall, all of the big 3 OSes get plenty of attention.
And sometimes Mac OS X is *slightly* ahead of the other OSes: Tango has had support for dynamic libraries on Mac OS X using DMD for quite a while now. For D2 a patch is just sitting there in bugzilla waiting for the last part of it to be committed. I'm really pushing this because people seem to forget it. -- /Jacob Carlborg
Jan 05 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Jacob Carlborg:

 And sometimes Mac OS X is *slightly* ahead of the other OSes, Tango has 
 had support for dynamic libraries on Mac OS X using DMD for quite a 
 while now. For D2 a patch is just sitting there in bugzilla waiting for 
 the last part of it to be commited. I'm really pushing this because 
 people seem to forget this.
A quotation from here: http://whatupdave.com/post/1170718843/leaving-net
Also stop using codeplex its not real open source! Real open source isnt
submitting a patch and waiting/hoping that one day it might be accepted and
merged into the main line.<
Bye, bearophile
Jan 05 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:ig2oe8$eki$1 digitalmars.com...
 Jacob Carlborg:

 And sometimes Mac OS X is *slightly* ahead of the other OSes, Tango has
 had support for dynamic libraries on Mac OS X using DMD for quite a
 while now. For D2 a patch is just sitting there in bugzilla waiting for
 the last part of it to be commited. I'm really pushing this because
 people seem to forget this.
A quotation from here: http://whatupdave.com/post/1170718843/leaving-net
Also stop using codeplex its not real open source! Real open source isnt 
submitting a patch and waiting/hoping that one day it might be accepted 
and merged into the main line.<
Automatically accepting all submissions immediately into the main line with no review isn't a good thing either. In that article he's complaining about MS, but MS is notorious for ignoring all non-MS input, period. D's already light-years ahead of that. Since D's a purely volunteer effort, and with a lot of things to be done, sometimes things *are* going to take a while to get in. But there's just no way around that without major risks to quality. And yea, Walter could grant main-line DMD commit access to others, but then we'd be left with a situation where no single lead dev understands the whole program inside and out - and when that happens to a project, that's inevitably the point where it starts to go downhill.
Jan 05 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Automatically accepting all submissions immediately into the main line with 
 no review isn't a good thing either. In that article he's complaining about 
 MS, but MS is notorious for ignoring all non-MS input, period. D's already 
 light-years ahead of that. Since D's purely volunteer effort, and with a lot 
 of things to be done, sometimes things *are* going to tale a while to get 
 in. But there's just no way around that without major risks to quality. And 
 yea Walter could grant main-line DMD commit access to others, but then we'd 
 be left with a situation where no single lead dev understands the whole 
 program inside and out - and when that happens to projects, that's 
 inevitably the point where it starts to go downhill.
That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. On the bright (!) side, Brad Roberts has gotten the test suite in shape so that anyone developing a patch can run it through the full test suite, which is a prerequisite to getting it folded in. In the last release, most of the patches in the changelog were done by people other than myself, although yes, I vet and double check them all before committing them.
Jan 05 2011
next sibling parent reply Caligo <iteronvexor gmail.com> writes:
On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
<newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
Jan 06 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Nick Sabalausky:

Automatically accepting all submissions immediately into the main line with no
review isn't a good thing either.<
I agree with all you have said; I was not suggesting a wild west :-) But maybe there are ways to improve the situation a little; I don't think the current situation is perfect. A better revision control system like Git or Mercurial (they are not equal, but both are good enough) would be an improvement. ------------------ Caligo:
 Perhaps using a modern SCM like Git might help?  Everyone could have (and
 should have) commit rights, and they would send pull requests.  You or one
 of the managers would then review the changes and pull and merge with the
 main branch.  It works great; just checkout out Rubinius on Github to see
 what I mean: https://github.com/evanphx/rubinius
I agree. Such systems allow finding a middle point, better than the current one, between wild freedom and frozen proprietary control. Walter and a few others would be the only ones allowed to commit to the main trunk, so Walter runs no risk of "losing grip on how the whole thing works", but the freedom to submit patches and create branches allows people more experimentation, simpler review of patches and branches, turning D/DMD into a more open-source effort... So I suggest Walter consider all this. Bye, bearophile
Jan 06 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Caligo" <iteronvexor gmail.com> wrote in message 
news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just checkout out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
Jan 06 2011
next sibling parent Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/1/6 Nick Sabalausky <a a.a>:
 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 Perhaps using a modern SCM like Git might help?  Everyone could have (and
 should have) commit rights, and they would send pull requests.  You or one
 of the managers would then review the changes and pull and merge with the
 main branch.  It works great; just checkout out Rubinius on Github to see
 what I mean: https://github.com/evanphx/rubinius
 I'm not sure I see how that's any different from everyone having "create and
 submit a patch" rights, and then having Walter or one of the managers review
 the changes and merge/patch with the main branch.
With the risk of starting yet another VCS flamewar: it gives the downstream developers an easier option to work on multiple patches in patch sets. Many non-trivial changes are too big to do in a single step and require a series of changes. Sure, the downstream hacker could maintain an import/conversion to a VCS, but that is added work, and by the time Walter or someone else gets to review them they are no longer well-annotated patches.

It also facilitates a setup where Walter (BDFL? ;) starts to trust some contributors (if he wants to) more than others, letting them work on private branches and submit larger series of patches for each release.

Especially when you hit a showstopper bug that blocks your progress, IMHO it's easier with a DVCS to maintain a local patch for the needed fix until upstream includes it. I've often used that strategy, both in D-related and other projects, just to remain sane and work around upstream bugs; I just usually have to jump through some hoops getting the source into a DVCS in the first place.

I think it was on this list I saw the comparison of VCSes to the Blub problem? http://en.wikipedia.org/wiki/Blub#Blub

Although I don't think the current setup has any _serious_ problems, I think there might be slight advantages to gain. OTOH, unless other current key contributors want to push it, it's probably not worth the cost of change.
Jan 06 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Caligo" <iteronvexor gmail.com> wrote in message 
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just checkout out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
I don't, either.
Jan 06 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I don't, either.
Then it's a very good moment for starting to seeing/understanding this and similar things! Bye, bearophile
Jan 06 2011
prev sibling next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 03:10 -0800, Walter Bright wrote:
 Nick Sabalausky wrote:
 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have (and
 should have) commit rights, and they would send pull requests.  You or one
 of the managers would then review the changes and pull and merge with the
 main branch.  It works great; just checkout out Rubinius on Github to see
 what I mean: https://github.com/evanphx/rubinius

 I'm not sure I see how that's any different from everyone having "create and
 submit a patch" rights, and then having Walter or one of the managers review
 the changes and merge/patch with the main branch.

 I don't, either.
Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution.
--
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 06 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review.  Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions.  Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested.

One thing I like a lot about svn is this:

http://www.dsource.org/projects/dmd/changeset/291

where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about git is it sends out emails for each checkin.

One thing I would dearly like is to be able to merge branches using meld.

http://meld.sourceforge.net/
Jan 06 2011
next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Walter Bright Wrote:

 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review.  Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions.  Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
 
 where the web view will highlight the revision's changes. Does git or
 mercurial do that? The other thing I like a lot about git is it sends
 out emails for each checkin.
 
 One thing I would dearly like is to be able to merge branches using meld.
 
 http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is git mergetool, which can run meld as the merge tool.
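For reference, wiring meld in as git's merge tool is a one-time configuration. A minimal sketch, assuming git and meld are installed (the prompt setting is optional):

```shell
# Tell git to launch meld when resolving merge conflicts.
git config --global merge.tool meld
# Optional: don't ask for confirmation before opening each conflicted file.
git config --global mergetool.prompt false

# After a merge stops with conflicts, this opens meld per conflicted file:
#   git mergetool
```

With that in place, `git mergetool` on a conflicted merge brings up meld with the two sides and the merge result.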
Jan 06 2011
next sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Jesse Phillips Wrote:

 where the web view will highlight the revision's changes. Does git or
 mercurial do that? The other thing I like a lot about git is it sends
 out emails for each checkin.
 
 One thing I would dearly like is to be able to merge branches using meld.
 
 http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is git mergetool, which can run meld as the merge tool.
Just realized you probably meant more than just resolving conflicts. And what you might be interested in is git cherry-picking. I haven't done it myself and don't know if meld could be used for it.
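For the curious, cherry-picking copies a single commit from one branch onto another. A self-contained sketch in a throwaway repository (branch, file, and commit names here are invented for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

echo base > notes.txt
git add notes.txt && git commit -qm "base"

git checkout -qb feature
echo extra >> notes.txt
git commit -qam "feature: add extra line"

git checkout -q "$main"
git cherry-pick feature                 # copy just that one commit onto $main
git log -1 --pretty=%s                  # prints: feature: add extra line
```

Unlike a merge, only the named commit is replayed; the rest of the feature branch stays put.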
Jan 06 2011
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-06 15:01:18 -0500, Jesse Phillips <jessekphillips+D gmail.com> said:

 Walter Bright Wrote:
 
 A couple months back, I did propose moving to git on the dmd internals mailing
 list, and nobody was interested.
I probably wasn't on the list at the time. I'm certainly interested, it'd certainly make it easier for me, as I'm using git locally to access that repo.
 One thing I like a lot about svn is this:
 
 http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
That's only if you're hosted on github. If you install on your own server, git comes with a web interface that looks like this (pointing to a specific diff): <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24>

Also when I want an overview with git I just type gitk on the command line to bring up a window where I can browse the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png>
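For anyone without an X display, gitk's commit graph can be approximated with plain git log. A minimal sketch in a throwaway repository (the repo contents are invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo one > f.txt && git add f.txt && git commit -qm "first"
echo two > f.txt && git commit -qam "second"
# One line per commit, with an ASCII graph of forks/merges on the left:
git log --graph --oneline
```

In a repository with branches and merges, the left-hand ASCII graph shows the same fork/merge structure that gitk draws.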
 where the web view will highlight the revision's changes. Does git or mercurial
 do that? The other thing I like a lot about git is it sends out emails for each
 checkin.
 
 One thing I would dearly like is to be able to merge branches using meld.
 
 http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is git mergetool, which can run meld as the merge tool.
Looks like meld itself uses git as its repository. I'd be surprised if it doesn't work with git. :-)

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Jan 06 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Michel Fortin wrote:
 On 2011-01-06 15:01:18 -0500, Jesse Phillips 
 <jessekphillips+D gmail.com> said:
 
 Walter Bright Wrote:

 A couple months back, I did propose moving to git on the dmd 
 internals mailing
 list, and nobody was interested.
I probably wasn't on the list at the time. I'm certainly interested, it'd certainly make it easier for me, as I'm using git locally to access that repo.
 One thing I like a lot about svn is this:

 http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
 That's only if you're hosted on github. If you install on your own server, git
 comes with a web interface that looks like this (pointing to a specific
 diff):
 <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24>
Eh, that's inferior. The svn view will highlight what part of a line is different, rather than just the whole line.
 Looks like meld itself uses git as its repository. I'd be surprised if 
 it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
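Incidentally, git can do intra-line highlighting too: `git diff --word-diff` marks just the changed words rather than the whole line. A minimal sketch (the file contents are invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo "the quick brown fox" > t.txt
git add t.txt && git commit -qm "first"
echo "the quick red fox" > t.txt
# Only the changed word is marked, not the whole line:
git diff --word-diff t.txt    # ... the quick [-brown-]{+red+} fox
```

With `--word-diff=color` the removed/added words are shown in red/green instead of the [-…-]{+…+} markers.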
Jan 06 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:

 Looks like meld itself uses git as its repository. I'd be surprised if
 it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff; I take it this is what you mean by supporting a vcs).
Jan 08 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Lutger Blijdestijn wrote:
 Walter Bright wrote:
 
 Looks like meld itself uses git as its repository. I'd be surprised if
 it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff; I take it this is what you mean by supporting a vcs).
The one that comes with:

  sudo apt-get install meld

1.1.5.1
Jan 08 2011
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-08 15:36:39 -0500, Walter Bright <newshound2 digitalmars.com> said:

 Lutger Blijdestijn wrote:
 Walter Bright wrote:
 
 Looks like meld itself uses git as its repository. I'd be surprised if
 it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff; I take it this is what you mean by supporting a vcs).
 The one that comes with:

   sudo apt-get install meld

 1.1.5.1
I know you had your reasons, but perhaps it's time for you upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (april 2008). <https://launchpad.net/ubuntu/+source/meld> Or you could download the latest version from meld's website and compile it yourself. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Jan 08 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Michel Fortin wrote:
 I know you had your reasons, but perhaps it's time for you upgrade to a 
 more recent version of Ubuntu? That version is what comes with Hardy 
 Heron (april 2008).
 <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.
 Or you could download the latest version from meld's website and compile 
 it yourself.
Yeah, I could spend an afternoon doing that.
Jan 08 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.)

--
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 08 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/9/11, Vladimir Panteleev <vladimir thecybershadow.net> wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.)

--
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Now do it on Windows!! Now that *would* probably take an afternoon.
Jan 08 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 09 Jan 2011 02:34:42 +0200, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 Now do it on Windows!!
Now that *would* probably take an afternoon.
Done! Just had to install PyGTK. (Luckily for me, meld is written in Python, so there was no need to mess with MinGW :P) From taking a quick look, I don't see meld's advantage over WinMerge (other than being cross-platform). -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 08 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
  From taking a quick look, I don't see meld's advantage over WinMerge 
 (other than being cross-platform).
Thanks for pointing me at winmerge. I've been looking for one to work on Windows.
Jan 08 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 09 Jan 2011 04:17:21 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
  From taking a quick look, I don't see meld's advantage over WinMerge  
 (other than being cross-platform).
Thanks for pointing me at winmerge. I've been looking for one to work on Windows.
Actually, I just noticed that WinMerge doesn't have three-way merge (in all instances when I needed it my SCM launched TortoiseMerge). That's probably a show-stopper for you. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 08 2011
prev sibling next sibling parent =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:
Walter Bright wrote:
 Vladimir Panteleev wrote:
  From taking a quick look, I don't see meld's advantage over WinMerge
 (other than being cross-platform).
 Thanks for pointing me at winmerge. I've been looking for one to work on
 Windows.
I personally use kdiff3 [1] both on Linux and Windows.

Jerome

[1] http://kdiff3.sourceforge.net/

--
mailto:jeberger free.fr            http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jan 09 2011
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:igb5uo$26af$1 digitalmars.com...
 Vladimir Panteleev wrote:
  From taking a quick look, I don't see meld's advantage over WinMerge 
 (other than being cross-platform).
Thanks for pointing me at winmerge. I've been looking for one to work on Windows.
Beyond Compare and Ultra Compare
Jan 11 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)
Thanks, I'll give it a try!
Jan 08 2011
parent reply Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/08/11 20:18, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)
Thanks, I'll give it a try!
I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just:

  emerge meld

...done. And yes, that's an install from source. I just did it myself, and it took right at one minute.

--
Chris N-S
Jan 09 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday 09 January 2011 04:00:21 Christopher Nicholson-Sauls wrote:
 On 01/08/11 20:18, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 
 <newshound2 digitalmars.com> wrote:
 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)
Thanks, I'll give it a try!
I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute.
Yeah well, much as I like gentoo, if he didn't like dealing with the pain of an Ubuntu upgrade messing with his machine, I doubt that he'll be enamoured with having to keep figuring out how to fix his machine because one of the builds didn't work on an update in Gentoo. Gentoo definitely has some great stuff going for it, but you have to be willing to deal with fixing your machine on a semi-regular basis.

Personally, I got sick of it and moved on. Currently, I use Arch, which is _way_ more friendly for building non-repo packages yourself or otherwise messing with repo packages. You _can_ choose to build from source but don't _have_ to, and you get a rolling release like you effectively get with Gentoo. So, I'm much happier with Arch than I was with Gentoo.

But regardless, there's no need to start an argument over distros. They all have their pros and cons, and everyone is going to prefer one over another. Still, Gentoo is one of those distros where you have to expect to work at maintaining your machine, whereas Ubuntu really isn't. So, I wouldn't normally recommend Gentoo to someone who's using Ubuntu unless they're specifically looking for something like Gentoo.

- Jonathan M Davis
Jan 09 2011
parent Gour <gour atmarama.net> writes:
On Sun, 9 Jan 2011 04:15:07 -0800
 "Jonathan" =3D=3D <jmdavisProg gmx.com> wrote:
Jonathan> Personally, I got sick of it and moved on. Currently, I use Jonathan> Arch, which is _way_ more friendly for building non-repo Jonathan> packages yourself or otherwise messing repo packages. You Jonathan> _can_ choose to build from source but don't _have_ to, and Jonathan> you get a rolling release like you effectively get with Jonathan> Gentoo. So, I'm much happier with Arch than I was with Jonathan> Gentoo. +1 (after spending 5yrs with Gentoo...and never looked back) Sincerely, Gour --=20 Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 09 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I'm keeping my eye on BeyondCompare. But it's not free. It's $80 for
the dual platform Linux+Windows and the Pro version which features
3-way merge. Its customization options are great, though. There's a
trial version over at http://www.scootersoftware.com/ if you want to
give it a spin.
Jan 09 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/9/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 I'm keeping my eye on BeyondCompare. But it's not free. It's $80 for
 the dual platform Linux+Windows and the Pro version which features
 3-way merge. Its customization options are great, though. There's a
 trial version over at http://www.scootersoftware.com/ if you want to
 give it a spin.
There's at least one caveat though: it doesn't natively support D files. So the best thing to do is add *.d and *.di as file masks for its C++ parser.
Jan 09 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Sun, 09 Jan 2011 06:00:21 -0600, Christopher Nicholson-Sauls wrote:

 On 01/08/11 20:18, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)
Thanks, I'll give it a try!
I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute.
Gentoo really needs a high-end computer to run fast. FWIW, the same meld takes 7 seconds to install on my ubuntu. That includes fetching the package from the internet (1-2 seconds). Probably even faster on Arch.
Jan 10 2011
parent Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/10/11 21:14, retard wrote:
 Sun, 09 Jan 2011 06:00:21 -0600, Christopher Nicholson-Sauls wrote:
 
 On 01/08/11 20:18, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)
Thanks, I'll give it a try!
I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute.
Gentoo really needs a high-end computer to run fast.
Tell that to the twelve-year-old machine here in our living room, running the latest Gentoo profile with KDE 4.x, all with no problem.

 FWIW, the same meld takes 7 seconds to install on my ubuntu. That includes
 fetching the package from the internet (1-2 seconds). Probably even faster
 on Arch.
Sure, and my wife's Kubuntu machine would probably do the same -- since *Ubuntu installs pre-compiled binaries (some packages are available as source, as I recall, but very few). I acknowledge that you disclaimed your statement with a "FWIW" but I have to say it isn't much of a comparison: pre-compiled binaries versus locally built from source. I only really brought up how long it took because of Walter's "spend an afternoon" comment anyhow, so really we both "win" in this case. ;) And yes, I'm an unashamed Gentoo advocate to begin with. Been using it as both server and personal desktop OS for years now. (Of course half or more of what I love about it is portage, which can be used with other distros -- and BSD! -- although I know nothing about how one sets that up.) -- Chris N-S
Jan 12 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.)
It doesn't work:

walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07--  http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-01-18 21:35:08 ERROR 404: Not Found.
tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
: No such file or directoryld-1.5.0
: command not found: make
'. Stop. No rule to make target `install
Jan 18 2011
parent reply KennyTM~ <kennytm gmail.com> writes:
On Jan 19, 11 13:38, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.)
It doesn't work:

walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07--  http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-01-18 21:35:08 ERROR 404: Not Found.
tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
: No such file or directoryld-1.5.0
: command not found: make
'. Stop. No rule to make target `install
You should use LF line endings, not CRLF line endings.
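A quick way to apply that fix is stripping the carriage returns with tr. A sketch (the buildmeld file name comes from the thread; its contents here are invented):

```shell
set -e
cd "$(mktemp -d)"
# A script saved with Windows (CRLF) line endings:
printf 'echo hello\r\necho world\r\n' > buildmeld
# Strip the CRs so the shell no longer sees a stray \r at the end of each line:
tr -d '\r' < buildmeld > buildmeld.lf && mv buildmeld.lf buildmeld
sh buildmeld    # now runs cleanly, printing hello and world
```

dos2unix, where installed, does the same in place.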
Jan 18 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
KennyTM~ wrote:
 You should use LF ending, not CRLF ending.
I never thought of that. Fixing that, it gets further, but still innumerable errors:

walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  autoconf automake1.7 autotools-dev cdbs debhelper fdupes gettext
  gnome-pkg-tools html2text intltool intltool-debian libmail-sendmail-perl
  libsys-hostname-long-perl m4 po-debconf python-dev python2.5-dev
0 upgraded, 17 newly installed, 0 to remove and 0 not upgraded.
Need to get 7387kB of archives.
After this operation, 23.9MB of additional disk space will be used.
Do you want to continue [Y/n]? Y
WARNING: The following packages cannot be authenticated!
  m4 autoconf autotools-dev automake1.7 html2text gettext intltool-debian
  po-debconf debhelper fdupes intltool cdbs gnome-pkg-tools
  libsys-hostname-long-perl libmail-sendmail-perl python2.5-dev python-dev
Install these packages without verification [y/N]? y
Get:1 http://ca.archive.ubuntu.com intrepid/main m4 1.4.11-1 [263kB]
Err http://ca.archive.ubuntu.com intrepid/main autoconf 2.61-7ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main autotools-dev 20080123.1
  404 Not Found [IP: 91.189.92.170 80]
Get:2 http://ca.archive.ubuntu.com intrepid/main automake1.7 1.7.9-9 [391kB]
Get:3 http://ca.archive.ubuntu.com intrepid/main html2text 1.3.2a-5 [95.6kB]
Err http://ca.archive.ubuntu.com intrepid/main gettext 0.17-3ubuntu2
  404 Not Found [IP: 91.189.92.170 80]
Get:4 http://ca.archive.ubuntu.com intrepid/main intltool-debian 0.35.0+20060710.1 [31.6kB]
Get:5 http://ca.archive.ubuntu.com intrepid/main po-debconf 1.0.15ubuntu1 [237kB]
Err http://ca.archive.ubuntu.com intrepid/main debhelper 7.0.13ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Get:6 http://ca.archive.ubuntu.com intrepid/main fdupes 1.50-PR2-1 [19.1kB]
Err http://ca.archive.ubuntu.com intrepid/main intltool 0.40.5-0ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main cdbs 0.4.52ubuntu7
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main gnome-pkg-tools 0.13.6ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Get:7 http://ca.archive.ubuntu.com intrepid/main libsys-hostname-long-perl 1.4-2 [11.4kB]
Err http://ca.archive.ubuntu.com intrepid/main libmail-sendmail-perl 0.79-5
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid-updates/main python2.5-dev 2.5.2-11.1ubuntu1.1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main python-dev 2.5.2-1ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://security.ubuntu.com intrepid-security/main python2.5-dev 2.5.2-11.1ubuntu1.1
  404 Not Found [IP: 91.189.92.167 80]
Fetched 1050kB in 2s (403kB/s)
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autoconf/autoconf_2.61-7ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autotools-dev/autotools-dev_20080123.1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gettext/gettext_0.17-3ubuntu2_amd64.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/d/debhelper/debhelper_7.0.13ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/i/intltool/intltool_0.40.5-0ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/c/cdbs/cdbs_0.4.52ubuntu7_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gnome-pkg-tools/gnome-pkg-tools_0.13.6ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/libm/libmail-sendmail-perl/libmail-sendmail-perl_0.79-5_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/python2.5/python2.5-dev_2.5.2-11.1ubuntu1.1_amd64.deb  404 Not Found [IP: 91.189.92.167 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/python-dev_2.5.2-1ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
E: Unable to fetch some archives, try running apt-get update or apt-get --fix-missing.
E: Failed to process build dependencies
--2011-01-19 03:07:16--  http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 330845 (323K) [application/x-bzip2]
Saving to: `meld-1.5.0.tar.bz2'
100%[=============================================================>] 330,845 179K/s in 1.8s
2011-01-19 03:07:19 (179 KB/s) - `meld-1.5.0.tar.bz2' saved [330845/330845]
python tools/install_paths \
  libdir=/usr/local/lib/meld \
  localedir=/usr/local/share/locale \
  helpdir=/usr/local/share/gnome/help/meld \
  sharedir=/usr/local/share/meld \
  < bin/meld > bin/meld.install
python tools/install_paths \
  libdir=/usr/local/lib/meld \
  localedir=/usr/local/share/locale \
  helpdir=/usr/local/share/gnome/help/meld \
  sharedir=/usr/local/share/meld \
  < meld/paths.py > meld/paths.py.install
intltool-merge -d po data/meld.desktop.in data/meld.desktop
make: intltool-merge: Command not found
make: *** [meld.desktop] Error 127
intltool-merge -d po data/meld.desktop.in data/meld.desktop
make: intltool-merge: Command not found
make: *** [meld.desktop] Error 127
walter mercury:~$
Jan 19 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 KennyTM~ wrote:
 You should use LF ending, not CRLF ending.
I never thought of that. Fixing that, it gets further, but still innumerable errors:
If apt-get update doesn't fix it, only an update will - looks like your Ubuntu version is so old, Canonical is no longer maintaining repositories for it. The only alternative is downloading and installing the components manually, and that probably will take half a day :P -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 19 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 KennyTM~ wrote:
 You should use LF ending, not CRLF ending.
I never thought of that. Fixing that, it gets further, but still innumerable errors:
If apt-get update doesn't fix it, only an update will - looks like your Ubuntu version is so old, Canonical is no longer maintaining repositories for it. The only alternative is downloading and installing the components manually, and that probably will take half a day :P
Yeah, I figured that. Thanks for the try, anyway!
Jan 19 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:

 KennyTM~ wrote:
 You should use LF ending, not CRLF ending.
I never thought of that. Fixing that, it gets further, but still innumerable errors:
 [snip]
I already told you in message digitalmars.d:126586 "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems"

It's exactly like using Windows 3.11 now. Totally unsupported. I'm so sad the leader of the D language is so incompetent with open source technologies.

If you really want to stick with outdated operating system versions, why don't you install all the "stable" and "important" services on some headless virtual server (on another machine) and run the latest Ubuntu on your main desktop? It's hard to believe making backups of your /home/walter is so hard. That ought to be everything you need to do with desktop Ubuntu..
Jan 19 2011
next sibling parent retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 19:15:54 +0000, retard wrote:

 Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:
 
 KennyTM~ wrote:
 You should use LF ending, not CRLF ending.
I never thought of that. Fixing that, it gets further, but still innumerable errors: [snip]
I already told you in message digitalmars.d:126586 "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems"
So.. the situation is so bad that you can't install ANY packages anymore. Accidentally removing packages can make the system unbootable, and those applications are gone for good (unless you do a fresh reinstall). My bet is that if it isn't already impossible to upgrade to a new version, when they remove the repositories for the next Ubuntu version, you're completely fucked up.
Jan 19 2011
prev sibling parent reply Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 19:15:54 +0000 (UTC)
retard <re tard.com.invalid> wrote:

 "..your Ubuntu version isn't supported anymore. They might have
 already removed the package repositories for unsupported versions and
 that might indeed lead to problems"
That's why we wrote it would be better to use some rolling release like Archlinux, where the distro cannot become so outdated that it's not possible to upgrade easily. Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 19 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 23:18:13 +0200, Gour <gour atmarama.net> wrote:

 On Wed, 19 Jan 2011 19:15:54 +0000 (UTC)
 retard <re tard.com.invalid> wrote:

 "..your Ubuntu version isn't supported anymore. They might have
 already removed the package repositories for unsupported versions and
 that might indeed lead to problems"
That's why we wrote it would be better to use some rolling release like Archlinux, where the distro cannot become so outdated that it's not possible to upgrade easily.
Walter needs something he can install and get on with compiler hacking. ArchLinux sounds quite far from that. I'd just recommend upgrading to an Ubuntu LTS (to also minimize the requirement of familiarizing yourself with a new distribution). -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 19 2011
prev sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 01/19/2011 04:18 PM, Gour wrote:
 That's why we wrote it would be better to use some rolling release
 like Archlinux where distro cannot become so outdated that it's not
 possible to upgrade easily.
https://wiki.archlinux.org/index.php/FAQ : "Q) Why would I not want to use Arch? A) [...] you do not have the ability/time/desire for a 'do-it-yourself' GNU/Linux distribution" I also don't see how Archlinux protects you from an outdated system. It's up to you to update your system. The longer you wait, the more chance incompatibilities creep in. However, the tradeoff is that if you update weekly or monthly, then you will spend more time encountering problems between upgrades. There's no silver bullet here. Personally, I think you should just suck it up, make a backup of your system (which you should be doing routinely anyways), and upgrade once a year. The worst case scenario is that you re-install from scratch. It's probably better to do that once in a while anyways, as cruft tends to accumulate when upgrading in place.
Jan 19 2011
next sibling parent reply Gary Whatmore <no spam.sp> writes:
Jeff Nowakowski Wrote:

 On 01/19/2011 04:18 PM, Gour wrote:
 That's why we wrote it would be better to use some rolling release
 like Archlinux where distro cannot become so outdated that it's not
 possible to upgrade easily.
https://wiki.archlinux.org/index.php/FAQ : "Q) Why would I not want to use Arch? A) [...] you do not have the ability/time/desire for a 'do-it-yourself' GNU/Linux distribution"
This is something the Gentoo and Arch fanboys don't get. They don't have any idea how little time a typical Ubuntu user spends maintaining the system and installing updates. The best solution is to hire someone familiar with computers (e.g. a nephew with chocolate). It's almost free and they will want to spend hours configuring your system. This way you spend none of your own time maintaining. Another option is to turn on all automatic updates. Everything happens in the background. It might ask for a sudo password once a week. In any case the Ubuntu user spends less than 10 minutes per month maintaining the system. It's possible but you need compatible hardware (Nvidia graphics and Wifi without a proprietary firmware, at least). You can't beat that.
 I also don't see how Archlinux protects you from an outdated system. 
 It's up to you to update your system. The longer you wait, the more 
 chance incompatibilities creep in.
I personally use CentOS for anything stable. I *was* a huge Gentoo fanboy, but the compilation simply takes too much time, and something is constantly broken if you enable ~x86 packages. I've also tried Arch. All the cool kids use it, BUT it doesn't automatically handle any configuration files in /etc and even worse, if you enable the "unstable" community repositories, the packages won't stay there long in the repository - a few days! The replacement policy is nuts. One of the packages was already removed from the server before pacman (the package manager) started downloading it! Arch is a pure community based distro for hardcore enthusiasts. It's fundamentally incompatible with stability.
 
 However, the tradeoff is that if you update weekly or monthly, then you 
 will spend more time encountering problems between upgrades. There's no 
 silver bullet here.
Yes. Although I fail to see why upgrading Ubuntu is so hard. It only takes one hour or two every 6 months or every 3 years. The daily security updates should work automatically just like in Windows.
 
 Personally, I think you should just suck it up, make a backup of your 
 system (which you should be doing routinely anyways), and upgrade once a 
 year.
Dissing Walter has become a sad tradition here. I'm sure a long time software professional knows how to make backups and he has likely written his own backup software and RAID drivers before you were even born. The reason Waltzy feels so clumsy in the Linux world is probably the Windows XP attitude we all long time Windows users suffer from. Many powerusers are still using Windows XP, and it has a long term support plan. The support might last forever. You've updated Windows XP only three times. Probably 20 versions of Ubuntu have appeared since Windows XP was launched. Ubuntu is stuck with the "we MUST release SOMETHING at least every 3 years" just like Windows did before XP: Win 3.11 -> 95 -> 98 -> XP (all intervals exactly 3 years).
Jan 19 2011
parent Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 21:57:46 -0500
Gary Whatmore <no spam.sp> wrote:

 This is something the Gentoo and Arch fanboys don't get.
First of all I spent >5yrs with Gentoo before jumping to Arch and those are really two different beasts. With Arch I practically have zero-admin time after I did my 1st install.
 They don't have any idea how little time a typical Ubuntu user
 spends maintaining the system and installing updates.
Moreover, I spent enough time servicing Ubuntu for new Linux users (refugees from Windows) and upgrading (*)Ubuntu from e.g. 8.10 to 10.10 was never easy and smooth, while with Arch there is no such thing as 'no packages for my version'.
 Another option is to turn on all automatic updates. Everything
 happens in the background. It might ask for a sudo password once in a
 week.
What if an automatic update breaks something, which happens? With Arch and without automatic updates I can always wait a few days to be sure that new stuff (e.g. a kernel) does not bring some undesired regressions.
 I personally use CentOS for anything stable. I *Was* a huge Gentoo
 fanboy, but the compilation simply takes too much time, and something
 is constantly broken if you enable ~x86 packages.
/me nods having experience with ~amd64
 I've also tried Arch. All the cool kids use it, BUT it doesn't automatically
 handle any configuration files in /etc and even worse,
You can see what new config files are there (*.pacnew) and simply merging with e.g. meld/ediff is something I'd always prefer to having my conf files automatically overwritten. ;)
 if you enable the "unstable" community repositories, the packages
 won't stay there long in the repository - a few days! The
 replacement policy is nuts. One of the packages was already removed
 from the server before pacman (the package manager) started
 downloading it! Arch is a pure community based distro for hardcore
 enthusiastics. It's fundamentally incompatible with stability.
You got what you asked for. :-) What you say does not make sense: you speak about Ubuntu's stability and compare it with using 'unstable' packages in Arch, which means you're comparing apples with oranges... Unstable packages (now 'testing') are for devs & geeks, but normal users can have a very decent system by using core/extra/community packages only, without much hassle. Sincerely, Gour (satisfied with Arch, just offering friendly advice and not caring much what OS people are using as long as it's Linux) -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 19 2011
prev sibling parent reply Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 20:28:43 -0500
Jeff Nowakowski <jeff dilacero.org> wrote:

 "Q) Why would I not want to use Arch?
 A) [...] you do not have the ability/time/desire for a
 'do-it-yourself' GNU/Linux distribution"
I have a feeling that you just copied the above from the FAQ and never actually tried Archlinux. The "do-it-yourself" from the above means that in Arch the user is not forced to use a specific DE, WM etc., and can choose whether he prefers WiCD over NM etc. On the Ubuntu side, there are, afaik, at least 3 distros achieving the same thing (Ubuntu, Kubuntu, Xubuntu) with less flexibility. :-D
 I also don't see how Archlinux protects you from an outdated system.
 It's up to you to update your system. The longer you wait, the more
 chance incompatibilities creep in.
That's not true... In Arch there is simply no Arch-8.10 or Arch-10.10, which means that whenever you update your system the package manager will simply pull all the packages which are required for the desired kernel, gcc version etc. I service my father-in-law's machine and he is practically computer-illiterate, and often I do not update his system for months, knowing well he does not require bleeding edge stuff, so when there is time for the update it is simple: pacman -Syu with some more packages in the queue than on my machine. ;) Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 19 2011
next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 01/20/2011 12:24 AM, Gour wrote:
 I have a feeling that you just copied the above from the FAQ and never
 actually tried Archlinux.
No, I haven't tried it. I'm not going to try every OS that comes down the pike. If the FAQ says that you're going to have to be more of an expert with your system, then I believe it. If it's wrong, then maybe you can push them to update it.
 The "do-it-yourself" from the above means that in Arch user is not
 forced to use specific DE, WM etc., can choose whether he prefers WiCD
 over NM etc.
So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. That's a heavy investment of time, especially for somebody unfamiliar with Linux.
 That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10
 which means that whenever you update your system package manager will
 simply pull all the packages which are required for desired kernel,
 gcc version etc.
The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be.
 Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
 undertaking (I'm familiar with it since  '99 when I used SuSE and had
 experience with deps hell.)
Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems?
Jan 20 2011
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday 20 January 2011 03:39:08 Jeff Nowakowski wrote:
 On 01/20/2011 12:24 AM, Gour wrote:
 I have a feeling that you just copied the above from the FAQ and never
 actually tried Archlinux.
No, I haven't tried it. I'm not going to try every OS that comes down the pike. If the FAQ says that you're going to have to be more of an expert with your system, then I believe it. If it's wrong, then maybe you can push them to update it.
 The "do-it-yourself" from the above means that in Arch user is not
 forced to use specific DE, WM etc., can choose whether he prefers WiCD
 over NM etc.
So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. That's a heavy investment of time, especially for somebody unfamiliar with Linux.
 That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10
 which means that whenever you update your system package manager will
 simply pull all the packages which are required for desired kernel,
 gcc version etc.
The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be.
 Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
 undertaking (I'm familiar with it since  '99 when I used SuSE and had
 experience with deps hell.)
Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems?
There is no question that Arch takes more to manage than a number of other distros. However, it takes _far_ less than Gentoo. Things generally just work in Arch, whereas you often have to figure out how to fix problems when updating on Gentoo. I wouldn't suggest Arch to a beginner, but I'd be _far_ more likely to suggest it to someone than Gentoo. Arch really doesn't take all that much to maintain, but it does have a higher setup cost than your average distro, and you do have to do some level of manual configuration that I'd expect a more typical distro like OpenSuSE or Ubuntu to take care of for you. So, I'd say that your view of Arch is likely a bit skewed, because you haven't actually used it, but it still definitely isn't a distro where you just stick in the install disk, install it, and then go on your merry way either. - Jonathan M Davis
Jan 20 2011
prev sibling next sibling parent reply Gour <gour atmarama.net> writes:
On Thu, 20 Jan 2011 06:39:08 -0500
Jeff Nowakowski <jeff dilacero.org> wrote:


 No, I haven't tried it. I'm not going to try every OS that comes down
 the pike.
Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu...
 So instead of giving you a bunch of sane defaults, you have to make a
 bunch of choices up front.
Right. That's why there is no need for a separate distro based on the DE the user wants to have, iow, by a simple: pacman -Sy xfce4 you get the XFCE environment installed... same with GNOME & KDE.
 That's a heavy investment of time, especially for somebody
 unfamiliar with Linux.
Again, you're speaking without personal experience... Moreover, in TDPL's foreword, Walter speaks about himself as "..of an engineer..", so I'm sure he is capable of handling The Arch Way (see section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux) which says: "The Arch Way is a philosophy aimed at keeping it simple. The Arch Linux base system is quite simply the minimal, yet functional GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of optional, extra command line utilities like links and Vi. This clean and simple starting point provides the foundation for expanding the system into whatever the user requires." and from there install one of the major DEs (GNOME, KDE or XFCE) to name a few.
 The upgrade problems are still there. *Every package* you upgrade has
 a chance to be incompatible with the previous version. The longer you
 wait, the more incompatibilities there will be.
There are no incompatibilities... if I upgrade the kernel, the package manager will figure out which components have to be updated... Remember: there are no packages 'tagged' for any specific release!
 Highlighting the problem of waiting too long to upgrade. You're
 skipping an entire release. I'd like to see you take a snapshot of
 Arch from 2008, use the system for 2 years without updating, and then
 upgrade to the latest packages. Do you think Arch is going to
 magically have no problems?
I did upgrade my father-in-law's machine, which was more than 1 yr old, without any problem. You think there must be some magic to handle it... ask some FreeBSD user how they do it. ;) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 20 2011
next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 01/20/2011 07:33 AM, Gour wrote:
 On Thu, 20 Jan 2011 06:39:08 -0500
 Jeff Nowakowski<jeff dilacero.org>  wrote:


 No, I haven't tried it. I'm not going to try every OS that comes down
 the pike.
Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu...
Please yourself. I quoted from the FAQ from the distribution's main site. If that's wrong, then Arch has a big public relations problem. I can make rational arguments without having used a system.
 That's a heavy investment of time, especially for somebody
 unfamiliar with Linux.
Again, you're speaking without personal experience...
From Jonathan M Davis in this thread: "There is no question that Arch takes more to manage than a number of other distros. [..] Arch really doesn't take all that much to maintain, but it does have a higher setup cost than your average distro, and you do have to do some level of manual configuration that I'd expect a more typical distro like OpenSuSE or Ubuntu to take care of for you."
 Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
 engineer..", so I'm sure he is capable to handle The Arch Way
You're talking about somebody who is running a nearly 3 year old version of Ubuntu because he had one bad upgrade experience, and is probably running software full of security holes. If he can't spend a day a year to upgrade his OS, what makes you think he wants to spend time on a more demanding distro?
 There are no incompatibilities...if I upgrade kernel, it means that
 package manager will figure out what components has to be updated...
And what happens when the kernel, as it often does, changes the way it handles things like devices, and expects the administrator to do some tweaking to handle the upgrade? What happens when you upgrade X and it no longer supports your video chipset? What happens when you upgrade something as basic as the DNS library, and it reacts badly with your router? Is Arch going to maintain your config files for you? Is it going to handle jumping 2 or 3 versions for software that can only upgrade from one version ago? These are real world examples. Arch is not some magic distribution that will make upgrade problems go away.
 Remember: there are no packages 'tagged' for any specific release!
Yeah, I know. I also run Debian Testing, which is a "rolling release". I'm not some Ubuntu noob.
Jan 20 2011
parent Gour <gour atmarama.net> writes:
On Thu, 20 Jan 2011 09:19:54 -0500
Jeff Nowakowski <jeff dilacero.org> wrote:

 Please yourself. I quoted from the FAQ from the distribution's main
 site. If that's wrong, then Arch has a big public relations problem.
Arch simply does not offer false promises that the system will "Just work". Still, I see the number of users has rapidly increased in the last year or so... mostly Ubuntu 'refugees'.
 You're talking about somebody who is running a nearly 3 year old
 version of Ubuntu because he had one bad upgrade experience, and is
 probably running software full of security holes. If he can't spend a
 day a year to upgrade his OS, what makes you think he wants to spend
 time on a more demanding distro?
My point is that, due to its rolling-release nature, a distro like Archlinux requires less work in the case where one 'forgets' to update the OS and has to do a 'major upgrade'. That was my experience with both SuSE and Ubuntu.
 And what happens when the kernel, as it often does, changes the way
 it handles things like devices, and expects the administrator to do
 some tweaking to handle the upgrade? What happens when you upgrade X
 and it no longer supports your video chipset? What happens when you
 upgrade something as basic as the DNS library, and it reacts badly
 with your router?
In the above cases, there is no distro which can save you from some admin work... and the problem is that people expect a system where, often, the only admin work is a re-install. :-)
 These are real world examples. Arch is not some magic distribution
 that will make upgrade problems go away.
Sure. But upgrade in rolling-release distro is simpler than in Ubuntu-like one.
 Yeah, I know. I also run Debian Testing, which is a "rolling
 release". I'm not some Ubuntu noob.
Heh, I can imagine you like 'bleeding edge', considering you lived with ~x86 and 'unstable' repos. ;) Now we may close this thread... at least, I do not have anything more to say. :-D Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 20 2011
prev sibling parent retard <re tard.com.invalid> writes:
Thu, 20 Jan 2011 13:33:58 +0100, Gour wrote:

 On Thu, 20 Jan 2011 06:39:08 -0500
 Jeff Nowakowski <jeff dilacero.org> wrote:
 
 
 No, I haven't tried it. I'm not going to try every OS that comes down
 the pike.
Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu...
 So instead of giving you a bunch of sane defaults, you have to make a
 bunch of choices up front.
Right. That's why there is no need for a separate distro based on the DE the user wants to have, iow, by a simple: pacman -Sy xfce4 you get the XFCE environment installed... same with GNOME & KDE.
It's the same in Ubuntu. You can install the minimal server build and install the DE of your choice in a similar way. The prebuilt images (Ubuntu, Kubuntu, Xubuntu, Lubuntu, ...) are for those who can't decide and don't want to fire up a terminal to write bash code. In Ubuntu you have even more choice: the huge metapackage or just the DE packages, with or without recommendations. A similar system just doesn't exist for Arch. For the lazy user Ubuntu is a dream come true - you never need to launch xterm if you don't want to. There's a GUI for almost everything.
 
 That's a heavy investment of time, especially for somebody unfamiliar
 with Linux.
Again, you're speaking without personal experience...
You're apparently a Linux fan, but have you got any idea which BSD or Solaris distro to choose? The choice isn't as simple if you have zero experience with the system.
 
 Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
 engineer..", so I'm sure he is capable to handle The Arch Way (see
 section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux)
 which says: "The Arch Way is a philosophy aimed at keeping it simple.
I think Walter's system isn't up to date because he is a lazy bitch. He has all the required competence but never bothers to update if it just works (tm). The same philosophy can be found in dmd/dmc. The code is sometimes hard to read, hard to maintain, and buggy, but if it works, why fix it?
 The Arch Linux base system is quite simply the minimal, yet functional
 GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of
 optional, extra command line utilities like links and Vi. This clean and
 simple starting point provides the foundation for expanding the system
 into whatever the user requires." and from there install one of the
 major DEs (GNOME, KDE or XFCE) to name a few.
I'd give my vote for LFS. It's quite minimal.
 
 The upgrade problems are still there. *Every package* you upgrade has a
 chance to be incompatible with the previous version. The longer you
 wait, the more incompatibilities there will be.
There are no incompatibilities...if I upgrade kernel, it means that package manager will figure out what components has to be updated... Remember: there are no packages 'tagged' for any specific release!
Even if the package manager works perfectly, the repositories have bugs in their dependencies and other metadata.
 
 Highlighting the problem of waiting too long to upgrade. You're
 skipping an entire release. I'd like to see you take a snapshot of Arch
 from 2008, use the system for 2 years without updating, and then
 upgrade to the latest packages. Do you think Arch is going to magically
 have no problems?
I did upgrade my father-in-law's machine, which was more than 1 yr old, without any problem. You think there must be some magic to handle it... ask some FreeBSD user how they do it. ;)
There's usually a safe upgrade period. If you wait too long, package conflicts will appear. It's simply too much work to keep rules for all possible package transitions. For example, a libc update breaks kde, but it's now called kde4. The system needs to know how to first remove all kde4 packages and then update them. Chromium was previously a game, but now it's a browser; the game becomes chromium-bsu or something. I have a hard time believing the minimal Arch does all this.
Jan 20 2011
prev sibling parent Andrew Wiley <debio264 gmail.com> writes:
On Thu, Jan 20, 2011 at 5:39 AM, Jeff Nowakowski <jeff dilacero.org> wrote:

 On 01/20/2011 12:24 AM, Gour wrote:

 Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
 undertaking (I'm familiar with it since  '99 when I used SuSE and had
 experience with deps hell.)
Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems?
Ironically, I did this a few years back with an Arch box that was setup, then banished to the TV room as a gaming system, then reconnected to the internet about two years later (I didn't have wifi at the time, and I still haven't put a wifi dongle on the box). It updated with no problems and is still operating happily. Now, I was expecting problems, but on the other hand, since *all* packages are in the rolling release model and individual packages contain specific version dependencies, problems are harder to find than you'd think.
Jan 20 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Gour wrote:
 Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
 undertaking (I'm familiar with it since  '99 when I used SuSE and had
 experience with deps hell.)
I finally did do it, but as a clean install. I found an old 160G drive, wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.) I attached the old drive through a usb port, and copied everything on it into a subdirectory of the new drive. Then, file and directory by file and directory, I moved the files into place on my new home directory. The main difficulty was the . files, which litter the home directory and gawd knows what they do or are for. This is one reason why I tend to stick with all defaults. The only real problem I've run into (so far) is the sunbird calendar has been unceremoniously dumped from Ubuntu. The data file for it is in some crappy binary format, so poof, there goes all my calendar data. Why do I bother with this crap. I think I'll stick with the ipod calendar. Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how gcc's strtof() works. Erratic floating point is typical of C runtime library implementations (the transcendentals are often sloppily done), which is why more and more Phobos uses its own implementations that Don has put together.
Jan 21 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/22/11 12:35 AM, Walter Bright wrote:
 Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how
 gcc's strtof() works. Erratic floating point is typical of C runtime
 library implementations (the transcendentals are often sloppily done),
 which is why more and more Phobos uses its own implementations that Don
 has put together.
I think we must change to our own routines anyway. One strategic advantage of native implementations of strtof (and the converse sprintf etc.) is that we can CTFE them, which opens the door to interesting applications. I have something CTFEable starting from your dmc code, but never got around to handling all of the small details. Andrei
Jan 21 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 On 1/22/11 12:35 AM, Walter Bright wrote:
 Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how
 gcc's strtof() works. Erratic floating point is typical of C runtime
 library implementations (the transcendentals are often sloppily done),
 which is why more and more Phobos uses its own implementations that Don
 has put together.
I think we must change to our own routines anyway. One strategic advantage of native implementations of strtof (and the converse sprintf etc.) is that we can CTFE them, which opens the door to interesting applications.
We can also make our own conversion routines consistent, pure, thread safe and locale-independent.
Jan 22 2011
prev sibling next sibling parent reply Gour <gour atmarama.net> writes:
On Fri, 21 Jan 2011 22:35:55 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

Hello Walter,

 I finally did do it, but as a clean install. I found an old 160G
 drive, wiped it, and installed 10.10 on it. (Amusingly, the "About
 Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.)
in the last few days I did a little research about 'easy-to-admin OSes' and the result of it is: PC-BSD (http://www.pcbsd.org/), an Ubuntu-like BSD with a GUI installer. The possible advantage is that here the OS means kernel+tools, which are strictly separated from the other 'add-on' packages, which should guarantee smooth upgrades. Moreover, PC-BSD deploys a so-called PBI installer which installs every 'add-on' package with the complete set of required libs, preventing upgrade breakages. Of course, some more HD space is wasted, but this will be resolved in the June/July 9.0 release, where such add-on packages will use a kind of pool of common libs, while the main OS is still kept intact. I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. Finally, there is the latest dmd2 available in 'ports', and having you on PC-BSD will make it even better. ;) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 21 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Gour wrote:
 I'm very seriously considering to put PC-BSD on my desktop and of
 several others in order to reduce my admin-time required to maint. all
 those machines.
OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty good until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository.
Jan 22 2011
next sibling parent reply spir <denis.spir gmail.com> writes:
On 01/22/2011 09:58 AM, Walter Bright wrote:
 Gour wrote:
 I'm very seriously considering to put PC-BSD on my desktop and of
 several others in order to reduce my admin-time required to maint. all
 those machines.
OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them.
Same in my experience. I recently had to re-install my ubuntu box from scratch (which is why I have the same amusing info as Walter, telling me my machine runs ubuntu 11.04) because the 10.04 --> 10.10 upgrade miserably crashed (at the end of the procedure, indeed). And no, this is not due to me naughtily messing with the system; while userland is highly personalised, I do not touch the rest (mainly because my brain cannot cope with the standard unix filesystem hierarchy). (I use linux only for philosophical reasons, else I would happily switch to mac.)

Denis
_________________
vita es estrany
spir.wikidot.com
Jan 22 2011
parent reply Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/22/11 03:57, spir wrote:
 On 01/22/2011 09:58 AM, Walter Bright wrote:
 Gour wrote:
 I'm very seriously considering to put PC-BSD on my desktop and of
 several others in order to reduce my admin-time required to maint. all
 those machines.
OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them.
Same in my experience. I recently had to re-install my ubuntu box from scratch (which is why I have the same amusing info as Walter, telling me my machine runs ubuntu 11.04) because the 10.04 --> 10.10 upgrade miserably crashed (at the end of the procedure, indeed). And no, this is not due to me naughtily messing with the system; while userland is highly personalised, I do not touch the rest (mainly because my brain cannot cope with the standard unix filesystem hierarchy). (I use linux only for philosophical reasons, else I would happily switch to mac.) Denis _________________ vita es estrany spir.wikidot.com
Likewise I had occasional issues with Ubuntu/Kubuntu upgrades when I was using it. Moving to a "rolling release" style distribution (Gentoo) changed everything for me. I haven't had a single major issue since. (I put "major" in there because there have been issues, but of the "glance at the screen, notice the blocker, type out the one very short command that will fix it, continue updating" variety.) Heck, updating has proven so straight-forward that I check for updates almost daily. I originally went to Linux for "philosophical" reasons, as well, but now that I've had a taste of a "real distro" I really don't have any interest in toying around with anything else. I do have a Windows install for development/testing purposes though... running in a VM. ;) Amazingly enough, Windows seems to be perfectly happy running as a guest O/S. If it was possible to do the same with OS X, I would. (Anyone know a little trick for that, using VirtualBox?) -- Chris N-S
Jan 22 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/22/11, Christopher Nicholson-Sauls <ibisbasenji gmail.com> wrote:
  If it was possible to do the same with OS
 X, I would.  (Anyone know a little trick for that, using VirtualBox?)
No, that is illegal! But you might want to do a google search for *cough* iDeneb *cough* and download vmware player. :p
Jan 22 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 22.01.2011 17:36, schrieb Andrej Mitrovic:
 On 1/22/11, Christopher Nicholson-Sauls<ibisbasenji gmail.com>  wrote:
   If it was possible to do the same with OS
 X, I would.  (Anyone know a little trick for that, using VirtualBox?)
No, that is illegal! But you might want to do a google search for *cough* iDeneb *cough* and download vmware player. :p
A google search for virtualbox osx takwing may be interesting as well.
Jan 22 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote:

 Gour wrote:
 I'm very seriously considering to put PC-BSD on my desktop and of
 several others in order to reduce my admin-time required to maint. all
 those machines.
OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty good until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository.
Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird "It was developed as a standalone version of the Lightning calendar and scheduling extension for Mozilla Thunderbird. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning.[6][7]" Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead.
Jan 22 2011
next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 22.01.2011 13:21, schrieb retard:
 Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote:

 Gour wrote:
 I'm very seriously considering to put PC-BSD on my desktop and of
 several others in order to reduce my admin-time required to maint. all
 those machines.
OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty good until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository.
Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird "It was developed as a standalone version of the Lightning calendar and scheduling extension for Mozilla Thunderbird. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning.[6][7]" Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead.
Ubuntu doesn't include Lightning, either. Walter: You could add the lightning plugin to your thunderbird from the mozilla page: http://www.mozilla.org/projects/calendar/lightning/index.html Hopefully it automatically imports your sunbird data or is at least able to import it manually.
Jan 22 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Ubuntu doesn't drop support for widely used software. I'd use Google's 
 Calendar instead.
I'm really not interested in Google owning my private data.
Jan 22 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/22/11 3:03 PM, Walter Bright wrote:
 retard wrote:
 Ubuntu doesn't drop support for widely used software. I'd use Google's
 Calendar instead.
I'm really not interested in Google owning my private data.
Google takes email privacy very seriously. Only last week they fired an employee for snooping through someone else's email. http://techcrunch.com/2010/09/14/google-engineer-spying-fired/ Of course, that could be framed either as a success or a failure of Google's privacy enforcement. Several companies are using gmail for their email infrastructure. Andrei
Jan 22 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Google takes email privacy very seriously. Only last week they fired an 
 employee for snooping through someone else's email.
 
 http://techcrunch.com/2010/09/14/google-engineer-spying-fired/
That's good to know. On the other hand, Google keeps information forever. Ownership, management, policies, and practices change. And to be frank, the fact that some of Google's employees are not authorized to look at emails means that others are. And those others are subject to the usual human weaknesses of bribery, blackmail, temptation, voyeurism, etc. Heck, the White House is famous for being a leaky organization, despite extensive security. I rent storage on Amazon's servers, but the stuff I send there is encrypted before Amazon ever sees it. I don't have to depend at all on Amazon having a privacy policy or airtight security. Google could implement their Calendar, etc., stuff the same way. I'd even pay for it (like I pay Amazon).
Jan 22 2011
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sat, 22 Jan 2011 08:35:55 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 The only real problem I've run into (so far) is the sunbird calendar has  
 been unceremoniously dumped from Ubuntu. The data file for it is in some  
 crappy binary format, so poof, there goes all my calendar data.
Hi Walter, have you seen this yet? It's an article on how to import your calendar data in Lightning, the official Thunderbird calendar extension. I hope it'll help you: http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/ -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 22 2011
next sibling parent spir <denis.spir gmail.com> writes:
On 01/22/2011 10:34 AM, Vladimir Panteleev wrote:
 On Sat, 22 Jan 2011 08:35:55 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 The only real problem I've run into (so far) is the sunbird calendar
 has been unceremoniously dumped from Ubuntu. The data file for it is
 in some crappy binary format, so poof, there goes all my calendar data.
Hi Walter, have you seen this yet? It's an article on how to import your calendar data in Lightning, the official Thunderbird calendar extension. I hope it'll help you: http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
Yes, lightning seems to have been the successor mozilla project to sunbird (wikipedia would probably tell you more). Denis _________________ vita es estrany spir.wikidot.com
Jan 22 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
Thanks for finding that. But I think I'll stick for now with the ipod's calendar. It's more useful anyway, as it moves with me.
Jan 22 2011
parent reply retard <re tard.com.invalid> writes:
Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:

 Vladimir Panteleev wrote:
 http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
 
 Thanks for finding that. But I think I'll stick for now with the ipod's
 calendar. It's more useful anyway, as it moves with me.
Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.
Jan 22 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 22.01.2011 22:31, schrieb retard:
 Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:

 Vladimir Panteleev wrote:
 http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
 Thanks for finding that. But I think I'll stick for now with the ipod's
 calendar. It's more useful anyway, as it moves with me.
Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.
And is the support for the graphics chip better, i.e. can you use full resolution?
Jan 22 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 And is the support for the graphics chip better, i.e. can you use full 
 resolution?
Yes, it recognized my resolution automatically. That's a nice improvement.
Jan 22 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Does the new Ubuntu overall work better than the old one? Would be 
 amazing if the media players are still all broken.
I haven't tried the sound yet, but the video playback definitely is better. Though the whole screen flashes now and then, like the video mode is being reset badly. This is new behavior.
Jan 22 2011
parent retard <re tard.com.invalid> writes:
Sat, 22 Jan 2011 14:47:48 -0800, Walter Bright wrote:

 retard wrote:
 Does the new Ubuntu overall work better than the old one? Would be
 amazing if the media players are still all broken.
I haven't tried the sound yet, but the video playback definitely is better. Though the whole screen flashes now and then, like the video mode is being reset badly. This is new behavior.
Ubuntu probably uses Compiz if you have enabled desktop effects. This might not work with ati's (open source) drivers. Turning Compiz off makes it use a "safer" 2d engine. In Gnome the setting can be changed here: http://www.howtoforge.com/enabling-compiz-fusion-on-an-ubuntu-10.10-desktop-nvidia-geforce-8200-p2 It's the "none" option in the second figure.
Jan 22 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 01/22/2011 07:35 AM, Walter Bright wrote:
 I finally did do it, but as a clean install. I found an old 160G drive,
 wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box
 says it's version 11.04, and /etc/issue says it's 10.10.)
Same for me ;-) _________________ vita es estrany spir.wikidot.com
Jan 22 2011
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 08 January 2011 14:34:19 Walter Bright wrote:
 Michel Fortin wrote:
 I know you had your reasons, but perhaps it's time for you to upgrade to a
 more recent version of Ubuntu? That version is what comes with Hardy
 Heron (april 2008).
 <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.
A while back I took to putting /home on a separate partition from the root directory, and I never upgrade in place. I replace the whole thing every time. Maybe it's because I've never trusted Windows to do it correctly, but I've never thought that it was a good idea to upgrade in place. I never do it on any OS. And by having /home on its own partition, it doesn't affect my data. Sometimes, config files can be an issue, but worst case, that's fixed by blowing them away. Of course, I use neither Ubuntu nor Gnome, so I don't know what the exact caveats are with those. And at the moment, I'm primarily using Arch, which has rolling releases, so unless I screw up my machine, I pretty much don't have to worry about updating the OS to a new release. The pieces get updated as you go, and it works just fine (unlike Gentoo, where you can be screwed on updates because a particular package didn't build).

Of course, I'd have gone nuts having an installation as old as yours appears to be, so we're obviously of very different mindsets when dealing with upgrades. Still, I'd advise making /home its own partition and then doing clean installs of the OS whenever you upgrade.

- Jonathan M Davis
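The separate-/home scheme described above boils down to a handful of /etc/fstab entries along these lines (the device names, sizes, and filesystem choices here are hypothetical examples, not taken from any poster's machine; a real install would typically reference partitions by UUID):

```
# /etc/fstab -- hypothetical layout with /home on its own partition.
# <device>    <mount>  <type>  <options>           <dump> <pass>
/dev/sda1     /        ext4    errors=remount-ro   0      1
/dev/sda2     /home    ext4    defaults            0      2
/dev/sda3     none     swap    sw                  0      0
```

At upgrade time only the root partition (/dev/sda1 here) is reformatted and reinstalled; the installer is pointed at the existing /home partition without formatting it, so user data and per-user config survive the wipe.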
Jan 08 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jonathan M Davis wrote:
 Of course, I'd have gone nuts having an installation as old as yours appears to
 be,
I think it's less than a year old.
Jan 08 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 08 January 2011 20:16:05 Walter Bright wrote:
 Jonathan M Davis wrote:
 Of course, I'd have gone nuts having an installation as old as yours
 appears to be,
I think it's less than a year old.
Hmm. I thought that someone said that the version you were running was from 2008. But if it's less than a year old, that generally isn't a big deal unless there's a particular package that you really want updated, and there are usually ways to deal with one package. I do quite like the rolling release model though. - Jonathan M Davis
Jan 08 2011
prev sibling next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Sat, 2011-01-08 at 18:22 -0800, Jonathan M Davis wrote:
 On Saturday 08 January 2011 14:34:19 Walter Bright wrote:
 Michel Fortin wrote:
 I know you had your reasons, but perhaps it's time for you to upgrade to a
 more recent version of Ubuntu? That version is what comes with Hardy
 Heron (april 2008).
 <https://launchpad.net/ubuntu/+source/meld>
 I know. The last time I upgraded Ubuntu in place it f****d up my system so
 bad I had to wipe the disk and start all over. It still won't play videos
 correctly (the previous Ubuntu worked fine), the rhythmbox music player
 never worked again, it wiped out all my virtual boxes, I had to spend
 hours googling around trying to figure out how to reconfigure the display
 driver so the monitor worked again, etc.
Personally I have never had an in-place Ubuntu upgrade f*** up any of my machines -- server, workstation, laptops. However, I really feel your pain about video and audio tools on Ubuntu, these have regularly been screwed over by an upgrade. There are also other niggles: my current beef is that the 10.10 upgrade stopped my Lenovo T500 from going to sleep when closing the lid. On my laptops I have two system partitions so as to dual boot between Debian Testing and the latest released Ubuntu. This way I find I always have a reasonably up to date system that works as I want it. Currently I am having a Debian Testing period pending 11.04 being released.
 I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking
 forward to it.
 A while back I took to putting /home on a separate partition from the root
 directory, and I never upgrade in place. I replace the whole thing every time.
 Maybe it's because I've never trusted Windows to do it correctly, but I've never
 thought that it was a good idea to upgrade in place. I never do it on any OS.
 And by having /home on its own partition, it doesn't affect my data. Sometimes,
 config files can be an issue, but worst case, that's fixed by blowing them away. Of
 course, I use neither Ubuntu nor Gnome, so I don't know what the exact caveats
 are with those. And at the moment, I'm primarily using Arch, which has rolling
 releases, so unless I screw up my machine, I pretty much don't have to worry
 about updating the OS to a new release. The pieces get updated as you go, and it
 works just fine (unlike Gentoo, where you can be screwed on updates because a
 particular package didn't build).
I always have /home as a separate partition as I dual boot between Debian and Ubuntu from two distinct / partitions. But I always upgrade in place; having the dual boot makes for trivially easy recovery from problems. Debian Testing is really a rolling release, but it tends to be behind Ubuntu in some versions of things and ahead in others. Also, Ubuntu has non-free stuff that is forbidden on Debian. Not to mention the F$$$F$$ fiasco!
 Of course, I'd have gone nuts having an installation as old as yours appears to
 be, so we're obviously of very different mindsets when dealing with upgrades.
 Still, I'd advise making /home its own partition and then doing clean installs
 of the OS whenever you upgrade.
I have to agree about being two years behind, this is too far to be comfortable. I would definitely recommend an upgrade to Walter's machines.

--
Russel.

Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 09 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.01.2011 12:16, schrieb Russel Winder:
 Debian Testing is really a rolling release but it tends to be behind
 Ubuntu is some versions of things and ahead in others.  Also Ubuntu has
 non-free stuff that is forbidden on Debian.  Not to mention the F$$$F$$
 fiasco!
That's why debian has contrib and non-free repos. Ok, lame and libdvdcss are missing, but can be easily obtained from debian-multimedia.org (the latter is missing in ubuntu as well). What's F$$$F$$? Cheers, - Daniel
Jan 09 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 09 Jan 2011 21:02:13 +0200, Daniel Gibson <metalcaedes gmail.com>  
wrote:

 What's F$$$F$$?
FireFox/IceWeasel: http://en.wikipedia.org/wiki/Mozilla_Corporation_software_rebranded_by_the_Debian_project -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 09 2011
parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 09.01.2011 22:16, schrieb Vladimir Panteleev:
 On Sun, 09 Jan 2011 21:02:13 +0200, Daniel Gibson <metalcaedes gmail.com>
wrote:

 What's F$$$F$$?
FireFox/IceWeasel: http://en.wikipedia.org/wiki/Mozilla_Corporation_software_rebranded_by_the_Debian_project
Oh that. I couldn't care less if my browser is called Firefox or Iceweasel. Firefox plugins/extensions work with Iceweasel without any problems and I'm not aware of any issues caused by that rebranding. Also you're free to not install Iceweasel and install the Firefox binaries from mozilla.com. (The same is true for Thunderbird/Icedove) Cheers, - Daniel
Jan 09 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Sat, 08 Jan 2011 14:34:19 -0800, Walter Bright wrote:

 Michel Fortin wrote:
 I know you had your reasons, but perhaps it's time for you to upgrade to a
 more recent version of Ubuntu? That version is what comes with Hardy
 Heron (april 2008).
 <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.
Ubuntu has a menu entry for "restricted drivers". It provides support for both ATI/AMD (Radeon 8500 or better, appeared in 1998 or 1999!) and NVIDIA cards (Geforce 256 or better, appeared in 1999!) and I think it automatically suggests (a pop-up window) correct drivers in the latest releases right after the first install. Intel chips are automatically supported by the open source drivers. VIA and S3 may or may not work out of the box. I'm just a bit curious to know what GPU you have? If it's some ancient VLB (vesa local bus) or ISA card, I can donate $15 for buying one that uses AGP or PCI Express. Ubuntu doesn't support all video formats out of the box, but the media players and browsers automatically suggest loading missing drivers. At least in the 3 or 4 latest releases. Maybe the problem isn't the encoder, it might be the Linux incompatible web site.
 Or you could download the latest version from meld's website and
 compile it yourself.
Yeah, I could spend an afternoon doing that.
Another one of these jokes? Probably one of the best compiler authors in the whole world uses a whole afternoon doing something (compiling a program) that total Linux noobs do in less than 30 minutes with the help of Google search.
Jan 10 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Ubuntu has a menu entry for "restricted drivers". It provides support for 
 both ATI/AMD (Radeon 8500 or better, appeared in 1998 or 1999!) and 
 NVIDIA cards (Geforce 256 or better, appeared in 1999!) and I think it 
 automatically suggests (a pop-up window) correct drivers in the latest 
 releases right after the first install.
 
 Intel chips are automatically supported by the open source drivers. VIA 
 and S3 may or may not work out of the box. I'm just a bit curious to know 
 what GPU you have? If it's some ancient VLB (vesa local bus) or ISA card, 
 I can donate $15 for buying one that uses AGP or PCI Express.
 
 Ubuntu doesn't support all video formats out of the box, but the media 
 players and browsers automatically suggest loading missing drivers. At 
 least in the 3 or 4 latest releases. Maybe the problem isn't the encoder, 
 it might be the Linux incompatible web site.
My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into it. It's hardly weird or wacky or old (it was new at the time I bought it to install Ubuntu). My display is 1920 x 1200. That just seems to cause grief for Ubuntu. Windows has no issues at all with it.
 Or you could download the latest version from meld's website and
 compile it yourself.
Yeah, I could spend an afternoon doing that.
Another one of these jokes? Probably one of the best compiler authors in the whole world uses a whole afternoon doing something (compiling a program)
On the other hand, I regularly get emails from people with 10 years of coding experience who are flummoxed by a "symbol not defined" message from the linker. :-)
 that total Linux noobs do in less than 30 minutes with the help 
 of Google search.
Yeah, I've spent a lot of time googling for solutions to problems with Linux. You know what? I get pages of results from support forums - every solution is different and comes with statements like "seems to work", "doesn't work for me", etc. The advice is clearly from people who do not know what they are doing, and randomly stab at things, and these are the first page of google results.
Jan 11 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/11/11, Walter Bright <newshound2 digitalmars.com> wrote:
 Yeah, I've spent a lot of time googling for solutions to problems with
 Linux.
 You know what? I get pages of results from support forums - every solution
 is
 different and comes with statements like "seems to work", "doesn't work for
 me",
 etc. The advice is clearly from people who do not know what they are doing,
 and
 randomly stab at things, and these are the first page of google results.
That's my biggest problem with Linux. Having technical problems is not the issue; finding the right solution in the sea of forum posts is the problem. When I have a problem with something breaking down on Windows, most of the time a single google search reveals the solution in one of the very first results (it's either on an MSDN page or one of the more popular forums). This probably has to do with the fact that regular users have either XP or Vista/7 installed, so there's really not much searching you have to do. Once someone posts a solution, that's the end of the story (more often than not).

I remember a few years ago I got a copy of Ubuntu, and I wanted to disable antialiased fonts (they looked really bad on the screen). So I simply disabled antialiased fonts in one of the display property panels, and thought that would be the end of the story. But guess what? Firefox and other applications don't want to follow the OS settings, and they will override your settings and render websites with antialiased fonts. So now I had to search for half an hour to find a solution. I finally find a guide where the instructions are to edit the /etc/fonts.conf file. So I do that. But antialiased fonts were still active. So I spend another 30 minutes looking for more information. Then I run into another website where the instructions are to delete a couple of fonts from the system. OK. I run the command in the terminal, I reset the system, but then on boot X.org crashes. So now I'm left with a blinking cursor on a black background, with no knowledge whatsoever of how to fix X.org or reset its settings. Instinctively I run "help" and I get back a list of 100 commands, but I can only read the last 20 and I've no idea how to scroll up to read more.

So, hours wasted and a broken Linux system, all because I wanted to disable antialiased fonts. But that's just one example. I have plenty more.
GRUB failing to install properly, GRUB failing to detect all of my windows installations, and then there's that "wubi" which *does not* work. Of course there are numerous guides on how to fix wubi as well but those fail too. Bleh. I like open-source, Linux - the kernel might be awesome for all I know, but the distributions plain-simple *suck*.
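For what it's worth, the antialiasing setting being hunted for in the story above usually ends up as a fontconfig rule. A fragment like the following, placed in ~/.fonts.conf (per user) or /etc/fonts/local.conf (system-wide), disables antialiasing for every fontconfig-aware application, Firefox included. This is a sketch assuming the desktop's font rendering goes through fontconfig, which is typical on Ubuntu of that era but not guaranteed everywhere:

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<!-- Turn off font antialiasing for all fontconfig-aware applications. -->
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign">
      <bool>false</bool>
    </edit>
  </match>
</fontconfig>
```

Because this is applied by the fontconfig library itself rather than by a desktop settings panel, applications that ignore the panel (as Firefox did in the story) still pick it up.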
Jan 11 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 That's my biggest problem with Linux. Having technical problems is not
 the issue, finding the right solution in the sea of forum posts is the
 problem.
The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording.
Jan 11 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 11.01.2011 22:36, schrieb Walter Bright:
 Andrej Mitrovic wrote:
 That's my biggest problem with Linux. Having technical problems is not
 the issue, finding the right solution in the sea of forum posts is the
 problem.
The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording.
Those results are often in big forums like ubuntuforums.org that get a lot of links etc., so even if one thread doesn't have many incoming links, it may still get a top ranking. Also, my blog entries (hosted at wordpress.com) get on the google frontpage when looking for the specific topic, even though my blog is mostly unknown, has 2-20 visitors per day and almost no incoming links. Google's algorithms often do seem like voodoo ;)

Also: many problems (and their correct solutions) heavily depend on your system. What desktop environment is used, what additional stuff (dbus, hal, ...) is used, what are the versions of this stuff (and X.org), what distribution is used, ... There may be different default configurations shipped depending on what distribution (and what version of that distribution) you use, ... So there often is no single correct answer that will work for everyone.

Still, in my experience those HOWTOs often work (it may help to look at multiple HOWTOs and compare them if you're not sure whether they apply to your system) or at least push you in the right direction.

Cheers,
- Daniel
Jan 11 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Google does seem to take into account whatever information it has on
you, which might explain why your own blog is a top result for you.

If I log out of Google and delete my preferences, searching for "D"
won't find anything about the D language in the top results. But if I
log in and search "D" again, the D website will be the top result.
Jan 11 2011
parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Andrej Mitrovic Wrote:

 Google does seem to take into account whatever information it has on
 you, which might explain why your own blog is a top result for you.
 
 If I log out of Google and delete my preferences, searching for "D"
 won't find anything about the D language in the top results. But if I
 log in and search "D" again, the D website will be the top result.
Best place to go for ranking information on your website: https://www.google.com/webmasters/tools/home?hl=en&pli=1 Need to show you own the site though.
Jan 11 2011
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Daniel Gibson" <metalcaedes gmail.com> wrote in message 
news:igijc7$27pv$4 digitalmars.com...
 On 11.01.2011 22:36, Walter Bright wrote:
 Andrej Mitrovic wrote:
 That's my biggest problem with Linux. Having technical problems is not
 the issue, finding the right solution in the sea of forum posts is the
 problem.
The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording.
Those results are often in big forums like ubuntuforums.org that get a lot of links etc, so even if one thread doesn't have many incoming links, it may still get a top ranking. Also my blog entries (hosted at wordpress.com) get on the google frontpage when looking for the specific topic, even though my blog is mostly unknown, has 2-20 visitors per day and almost no incoming links.. Googles algorithms often do seem like voodoo ;) Also: Many problems (and their correct solutions) heavily depend on your system. What desktop environment is used, what additional stuff (dbus, hal, ...) is used, what are the versions of this stuff (and X.org), what distribution is used, ... There may be different default configurations shipped depending on what distribution (and what version of that distribution) you use, ... So there often is no single correct answer that will work for anyone. Still, in my experience those HOWTOs often work (it may help to look at multiple HOWTOs and compare them if you're not sure, if it applies to your system) or at least push you in the right direction.
That's probably one of the biggest things that's always bothered me about linux (not that there aren't plenty of other things that bother me about every other OS in existence). For something that's considered so standards-compliant/standards-friendly (compared to, say MS), it's painfully *un*standardized.
Jan 11 2011
prev sibling parent Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/11/11 15:36, Walter Bright wrote:
 Andrej Mitrovic wrote:
 That's my biggest problem with Linux. Having technical problems is not
 the issue, finding the right solution in the sea of forum posts is the
 problem.
The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff?
Nobody. The first "secret" of Linux tech-help is that most real help is dished out via IRC channels. One just has to visit their distro of choice's website and there will inevitably be a listing for an IRC channel or two -- often with one specifically for new users. It may sound like a lot of trouble, but getting help from the source and live is worlds above scanning forum posts hoping the people posting know more than you do. And thanks to the global scale of most FOSS communities, there's always someone around -- and it didn't cost you a dime. That said, a little more integrity in the forums that do exist would be nice. LinuxQuestions.org seems to be one of the better ones, from what I've seen of it. -- Chris N-S
Jan 12 2011
prev sibling parent Russel Winder <russel russel.org.uk> writes:
On Tue, 2011-01-11 at 11:53 -0800, Walter Bright wrote:
[ . . . ]
 My display is 1920 x 1200. That just seems to cause grief for Ubuntu. Windows
 has no issues at all with it.
[ . . . ] My 1900x1200 screen is fine with Ubuntu. -- Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 11 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Sat, 08 Jan 2011 12:36:39 -0800, Walter Bright wrote:

 Lutger Blijdestijn wrote:
 Walter Bright wrote:
 
 Looks like meld itself used git as its repository. I'd be surprised
 if it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff, I take it this is what you mean by supporting a vcs).
The one that comes with "sudo apt-get install meld": 1.1.5.1
One thing came to my mind. Unless you're using Ubuntu 8.04 LTS, your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems with graphics and video players as you said. The support for desktop 8.04 and 9.10 is also nearing its end (April this year). I'd recommend backing up your /home and installing 10.04 LTS or 10.10 instead.
Jan 10 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 One thing came to my mind. Unless you're using Ubuntu 8.04 LTS,
I'm using 8.10, and I've noticed that no more updates are coming.
 your 
 Ubuntu version isn't supported anymore. They might have already removed 
 the package repositories for unsupported versions and that might indeed 
 lead to problems with graphics and video players as you said.
What annoyed the heck out of me was the earlier (7.xx) version of Ubuntu *did* work.
 The support for desktop 8.04 and 9.10 is also nearing its end (April this 
 year). I'd recommend backing up your /home and installing 10.04 LTS or 
 10.10 instead.
Yeah, I know I'll be forced to upgrade soon. One thing that'll make it easier is I abandoned using Ubuntu for multimedia. For example, to play Pandora I now just plug my ipod into my stereo <g>. I just stopped using youtube on Ubuntu, as I got tired of the video randomly going black, freezing, etc.
Jan 11 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-06 21:12, Michel Fortin wrote:
 On 2011-01-06 15:01:18 -0500, Jesse Phillips
 <jessekphillips+D gmail.com> said:

 Walter Bright Wrote:

 A couple months back, I did propose moving to git on the dmd
 internals mailing
 list, and nobody was interested.
I probably wasn't on the list at the time. I'm certainly interested, it'd certainly make it easier for me, as I'm using git locally to access that repo.
 One thing I like a lot about svn is this:

 http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
That's only if you're hosted on github. If you install on your own server, git
 comes with a web interface that looks like this (pointing to a specific
 diff):
 <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24>


 Also
 when I want an overview with git I just type gitk on the command line to
 bring a window where I can browse the graph of forks, merges and
 commits and see the diff for each commit. Here's what gitk looks like:
 <http://michael-prokop.at/blog/img/gitk.png>
Have you heard of gitx? I suggest you take a look at it: http://gitx.frim.nl/index.html . It's a Mac OS X GUI for git.
 where the web view will highlight the revision's changes. Does git or
 mercurial
 do that? The other thing I like a lot about git is it sends out
 emails for each
 checkin.

 One thing I would dearly like is to be able to merge branches using
 meld.

 http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool.
Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-)
-- /Jacob Carlborg
Jan 08 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote:
 On 2011-01-06 21:12, Michel Fortin wrote:
[ . . . ]
 Also
 when I want an overview with git I just type gitk on the command line t=
o
 bring a window where I can browse the graph of forks, merges and
 commits and see the diff for each commit. Here's what gitk looks like:
 <http://michael-prokop.at/blog/img/gitk.png>
gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui. -- Russel.
Jan 08 2011
next sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Russel Winder Wrote:

 gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu
 and Debian systems.  I now use gitg which appears to have the same
 functionality, but looks almost acceptable.  There is also git-gui.
Funny thing, gitk looks better on Windows. I don't care though. My friend ends up with a font that is barely readable. Also there is giggle: http://live.gnome.org/giggle I like the name, but I still prefer gitk.
Jan 08 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-08 16:01, Russel Winder wrote:
 On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote:
 On 2011-01-06 21:12, Michel Fortin wrote:
[ . . . ]
 Also
 when I want an overview with git I just type gitk on the command line to
 bring a window where I can browse the graph of forks, merges and
 commits and see the diff for each commit. Here's what gitk looks like:
 <http://michael-prokop.at/blog/img/gitk.png>
gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui.
Doesn't the Tk widget set look hideous on all platforms? I can't understand why both Mercurial and git have chosen to use Tk for the GUI. -- /Jacob Carlborg
Jan 08 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 08 January 2011 10:39:39 Jacob Carlborg wrote:
 On 2011-01-08 16:01, Russel Winder wrote:
 On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote:
 On 2011-01-06 21:12, Michel Fortin wrote:
[ . . . ]
 Also
 when I want an overview with git I just type gitk on the command line
 to bring a window where I can browse the graph of forks, merges and
 commits and see the diff for each commit. Here's what gitk looks like:
 <http://michael-prokop.at/blog/img/gitk.png>
gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui.
Doesn't the Tk widget set look hideous on all platforms. I can't understand why both Mercurial and git have chosen to use Tk for the GUI.
Probably because you don't need much installed for them to work. About all you need is X. Now, I'd still rather that they'd picked a decent-looking GUI toolkit and just required it (_most_ people are running full desktop systems with the proper requirements installed and which will install them if a package needs them and they're not installed), but they were probably trying to make it work in pretty minimal environments. - Jonathan M Davis
Jan 08 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Git Extensions looks pretty sweet for use on Windows (I haven't tried
it yet though): https://code.google.com/p/gitextensions/
Jan 08 2011
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Jesse Phillips wrote:
 Walter Bright Wrote:
 One thing I like a lot about svn is this:
 
 http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
Yes, exactly. Good.
 One thing I would dearly like is to be able to merge branches using meld.
 
 http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool.
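For anyone wanting to try this, hooking meld up as git's merge tool is just a couple of config lines. A minimal sketch, assuming meld is installed and on your PATH (recent git releases know about meld as a built-in mergetool):

```shell
# Register meld as the tool that `git mergetool` launches
git config --global merge.tool meld

# After a merge stops with conflicts, walk through each conflicted
# file in meld's three-way view:
git mergetool
```

If your git is too old to know about meld, the `mergetool.meld.cmd` variable can be set to an explicit command line instead.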
Jan 06 2011
prev sibling next sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Walter Bright Wrote:

 Eh, that's inferior. The svn will highlight what part of a line is
 different, rather than just the whole line.
As others have mentioned, it really isn't the VCS itself doing that. I don't think the default SVN web server does the highlighting you want; it might not even do any highlighting. Trac should be able to provide the same functionality as found on github, though github will provide a lot more than just highlighting.
 Looks like meld itself used git as it's repository. I'd be surprised if 
 it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
Found these links: http://www.muhuk.com/2008/11/adding-git-support-to-meld/ http://nathanhoad.net/how-to-meld-for-git-diffs-in-ubuntu-hardy So maybe that is what's missing.
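Those two HOWTOs predate git's built-in difftool support; on a git new enough to ship git-difftool (1.6.3 or later, if memory serves), no wrapper script should be needed. A hedged sketch:

```shell
# Use meld whenever `git difftool` is invoked
git config --global diff.tool meld

# Show uncommitted changes against HEAD, one file at a time, in meld:
git difftool HEAD
```

On older gits, the wrapper-script approach from the links above is the way to go.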
Jan 06 2011
prev sibling next sibling parent Jean Crystof <news news.com> writes:
Walter Bright Wrote:

 retard wrote:
 One thing came to my mind. Unless you're using Ubuntu 8.04 LTS,
I'm using 8.10, and I've noticed that no more updates are coming.
Huh! You should seriously consider upgrading. If you are running any kind of services in the system or browsing the web, you're exposed to both remote and local attacks. I know at least one local root exploit 8.10 is vulnerable to. It's just plainly stupid to use a distro after the support has died. Are you running Windows 98 still too? If you upgrade Ubuntu, do a clean install. Upgrading 8.10 in-place goes via -> 9.04 -> 9.10 -> 10.04 -> 10.10. Each one takes 1 or 2 hours. A clean install of Ubuntu 10.10 or 11.04 (soon available) will take less than 30 minutes.
 The support for desktop 8.04 and 9.10 is also nearing its end (April this 
 year). I'd recommend backing up your /home and installing 10.04 LTS or 
 10.10 instead.
Yeah, I know I'll be forced to upgrade soon.
Soon? Your system already sounds like it's broken.
 One thing that'll make it easier is 
 I abandoned using Ubuntu for multimedia. For example, to play Pandora I now
just 
 plug my ipod into my stereo <g>. I just stopped using youtube on Ubuntu, as I 
 got tired of the video randomly going black, freezing, etc.
I'm using Amarok and Spotify. Both work fine.
Jan 11 2011
prev sibling parent reply Jean Crystof <news news.com> writes:
Walter Bright Wrote:

 My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into 
 it. It's hardly weird or wacky or old (it was new at the time I bought it to 
 install Ubuntu).
The ASUS M2A-VM has the 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported." Fast forward to this day: http://www.phoronix.com/scan.php?page=article&item=amd_driver_q111&num=2 The benchmark page says the only available driver for your graphics gives only about 10-20% of the real performance. Why? ATI sucks on Linux. Don't buy ATI. Buy Nvidia instead: http://geizhals.at/a466974.html This is the third-newest Nvidia GPU generation. How long does support last? Ubuntu 10.10 still supports all Geforce 2+ cards, which are 10 years old. I foretell Ubuntu 19.04 will be the last one supporting this. Use Nvidia and your problems are gone.
Jan 11 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Did you hear that, Walter? Just buy a 500$ video card so you can watch
youtube videos on Linux. Easy. :D
Jan 11 2011
parent reply Jean Crystof <news news.com> writes:
Andrej Mitrovic Wrote:

 Did you hear that, Walter? Just buy a 500$ video card so you can watch
 youtube videos on Linux. Easy. :D
Dear Sir, did you even open the link? It's the cheapest Nvidia card I could find by googling for 30 seconds. 28,58 euros translates to $37. I can't promise that very old Geforce chips support 1920x1200, but at least the ones compatible with his PCI-express bus work perfectly. Maybe you were trying to be funny?
Jan 11 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Notice the smiley face -> :D

Yeah I didn't check the price, it's only 30$. But there's no telling
if that would work either. Also, dirt cheap video cards are almost
certainly going to cause problems. Even if the drivers worked
perfectly, a year down the road things will start breaking down. Cheap
hardware is cheap for a reason.
Jan 11 2011
next sibling parent reply Jean Crystof <news news.com> writes:
Andrej Mitrovic Wrote:

 Notice the smiley face -> :D
 
 Yeah I didn't check the price, it's only 30$. But there's no telling
 if that would work either. 
I can tell from our hobbyist group's experience with Compiz, native Linux games, Wine, multiplatform OpenGL game development on Linux, and hardware accelerated video that all of these tasks had problems on our ATI hardware and no problems with Nvidia.
 Also, dirt cheap video cards are almost
 certainly going to cause problems. Even if the drivers worked
 perfectly, a year down the road things will start breaking down. Cheap
 hardware is cheap for a reason.
That's not true. I suggested a low end card because if he's using integrated graphics now, there's no need for high end hardware. The reason the price is lower is that cheaper cards have smaller heatsinks, fewer fans or none at all, no advanced features (SLI), low frequency cores with most shaders disabled (they've sidestepped manufacturing defects by disabling broken cores), smaller memory bandwidth, and fewer, cheaper memory modules without heatsinks. Just look at the circuit board. A high end graphics card is physically at least twice as large or even more. No wonder it costs more. The price goes up $100 just by buying the bigger heatsinks and fans. Claiming that low end components have shorter lifespan is ridiculous. Why does Ubuntu 10.10 still support the cheap Geforce 2 MX then?
Jan 11 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/12/11, Jean Crystof <news news.com> wrote:
 Claiming that low end components have shorter lifespan is ridiculous.
You've never had computer equipment fail on you?
Jan 12 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 On 1/12/11, Jean Crystof <news news.com> wrote:
 Claiming that low end components have shorter lifespan is ridiculous.
You've never had computer equipment fail on you?
I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least:

keyboards
power supplies
hard drives
fans
monitors

I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card.

The keyboards fail so often I keep a couple spares around.

I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes.
Jan 12 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 The keyboards fail so often I keep a couple spares around.
Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 12 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 The keyboards fail so often I keep a couple spares around.
Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on.
Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one? <g>
Jan 12 2011
next sibling parent Caligo <iteronvexor gmail.com> writes:
On Wed, Jan 12, 2011 at 11:33 PM, Walter Bright
<newshound2 digitalmars.com>wrote:

 Vladimir Panteleev wrote:

 On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <
 newshound2 digitalmars.com> wrote:

  The keyboards fail so often I keep a couple spares around.

 Let me guess, all cheap rubber-domes? Maybe you should have a look at some
 professional keyboards. Mechanical keyboards are quite durable, and feel
 much nicer to type on.
Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one? <g>
http://www.daskeyboard.com/ or http://steelseries.com/us/products/keyboards/steelseries-7g expensive, I know, but who cares. You only live once!
Jan 13 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:igm2um$2omg$1 digitalmars.com...
 Vladimir Panteleev wrote:
 On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 The keyboards fail so often I keep a couple spares around.
Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on.
Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one? <g>
I've got a $6 one I've been using for years, and I frequently beat the shit out of it. And I mean literally just pounding on it, not to type, but just to beat :) With all the physical abuse I give this ultra-cheapie thing, I honestly can't believe it still works fine after all these years. "AOpen" gets my approval for keyboards :) (Heh, I actually had to turn it over to check the brand. I had no idea what it was.) I never spill anything on it, though.
Jan 13 2011
parent Stanislav Blinov <blinov loniir.ru> writes:
14.01.2011 3:12, Nick Sabalausky :
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:igm2um$2omg$1 digitalmars.com...
 Vladimir Panteleev wrote:
 On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright
 <newshound2 digitalmars.com>  wrote:

 The keyboards fail so often I keep a couple spares around.
Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on.
Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one?<g>
I've got a $6 one I've been using for years, and I frequently beat the shit out of it. And I mean literally just pounding on it, not to type, but just to beat :) With all the physical abuse I give this ultra-cheapie thing, I honestly can't believe it still works fine after all these years. "AOpen" gets my approval for keyboards :) (Heh, I actually had to turn it over to check the brand. I had no idea what it was.) I never spill anything on it, though.
I felt very depressed when my first keyboard failed - the rubber shocks got tired and started to tear. It served me for more than 10 years in everything from gaming to writing university reports to programming (pounding, dropping and spilling/sugaring included). And it was an old one - without all those annoying win-keys and stuff. Never got another one that would last at least a year. One of the recent ones died taking with it a USB port on the mobo (or maybe it was vice-versa, I don't know).
Jan 14 2011
prev sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 13.01.2011 06:33, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com>
 wrote:

 The keyboards fail so often I keep a couple spares around.
Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on.
Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one? <g>
There are washable keyboards, e.g. http://h30094.www3.hp.com/product/sku/5110581/mfg_partno/VF097AA Cheers, - Daniel
Jan 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 There are washable keyboards, e.g. 
 http://h30094.www3.hp.com/product/sku/5110581/mfg_partno/VF097AA
I know. But what I do works for me. I happen to like the action on the cheapo keyboards, and the key layout. I'll also throw one in my suitcase for a trip, 'cuz I hate my laptop keyboard. And I don't care if they get lost/destroyed on the trip.
Jan 13 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Lol Walter you're like me. I keep buying cheap keyboards all the time.
I'm almost becoming one of those people that collect things all the
time (well.. the difference being I throw the old ones in the trash).
Right now I'm sporting this dirt-cheap Genius keyboard, I've just
looked up the price and it's 5$. My neighbor gave it to me for free
because he got two for some reason. You would think a 5$ keyboard
sucks, but it's pretty sweet actually. The keys have a nice depth, and
they're real easy to hit. The downside? They've put the freakin' sleep
button right above the right cursor key. Now *that's* genius, Genius..
So I had to disable sleep mode. LOL!

*However*, my trusty Logitech MX518 is standing strong with over 5
years of use. Actually, I did cut the cable by accident once. But I
had a spare 10$ Logitech mouse which happened to have the same
connector that plugs in that little PCI board, so I just swapped the
cables. (yay for hardware design reuse!).
Jan 13 2011
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 Lol Walter you're like me. I keep buying cheap keyboards all the time.
 I'm almost becoming one of those people that collect things all the
 time (well.. the difference being I throw the old ones in the trash).
 Right now I'm sporting this dirt-cheap Genius keyboard, I've just
 looked up the price and it's 5$. My neighbor gave it to me for free
 because he got two for some reason. You would think a 5$ keyboard
 sucks, but it's pretty sweet actually. The keys have a nice depth, and
 they're real easy to hit. The downside? They've put the freakin' sleep
 button right above the right cursor key. Now *that's* genius, Genius..
 So I had to disable sleep mode. LOL!
My preferred keyboard layout has the \ key right above the Enter key. The problem is those ^%%^&*^*&^&*^ keyboards that have the \ key somewhere else, and the Enter key is extra large and in that spot. So guess what happens? If I want to delete foo\bar.c, I type in: del foo Enter Yikes! There goes my directory contents! I've done this too many times. I freakin hate those keyboards. I always check to make sure I'm not buying one, though they seem to be most of 'em.
Jan 14 2011
prev sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
On 14.01.2011 04:46, Andrej Mitrovic wrote:
 Lol Walter you're like me. I keep buying cheap keyboards all the time.
 I'm almost becoming one of those people that collect things all the
 time (well.. the difference being I throw the old ones in the trash).
 Right now I'm sporting this dirt-cheap Genius keyboard, I've just
 looked up the price and it's 5$. My neighbor gave it to me for free
 because he got two for some reason. You would think a 5$ keyboard
 sucks, but it's pretty sweet actually. The keys have a nice depth, and
 they're real easy to hit. The downside? They've put the freakin' sleep
 button right above the right cursor key. Now *that's* genius, Genius..
 So I had to disable sleep mode. LOL!
Had something like that once, too. I just removed the key from the keyboard ;)
Jan 14 2011
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I forgot to mention though, do *not* open up a MX518 unless you want
to spend your day figuring out where all the tiny little pieces go.
When I opened it the first time, all the pieces went flying in all
directions. I've found all the pieces but putting them back together
was a nightmare. Which piece goes where with which other piece and in
what order.. Luckily I found a forum where someone else already took
apart and assembled the same mouse, and even took pictures of it.
There was really only this one final frustrating piece that I couldn't
figure out which held the scroll wheel together and made that
"clikclick" sound when you scroll.
Jan 13 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 I've found all the pieces but putting them back together
 was a nightmare. Which piece goes where with which other piece and in
 what order..
No prob. I've got some tools in the basement that will take care of that.
Jan 14 2011
prev sibling next sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Walter Bright Wrote:

 The keyboards fail so often I keep a couple spares around.
 
 I buy cheap, bottom of the line equipment. I don't overclock them and I make 
 sure there's plenty of airflow around the boxes.
Wow, I have never had a keyboard fail. I'm still using my first keyboard from 1998. Hell, I haven't even rubbed off any of the letters. I guess the only components I've had fail on me have been a hard drive and a CD/DVD drive. My monitor was about to go.
Jan 12 2011
prev sibling next sibling parent spir <denis.spir gmail.com> writes:
On 01/13/2011 04:43 AM, Walter Bright wrote:
 Andrej Mitrovic wrote:
 On 1/12/11, Jean Crystof <news news.com> wrote:
 Claiming that low end components have shorter lifespan is ridiculous.
You've never had computer equipment fail on you?
I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least: keyboards power supplies hard drives fans monitors I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card. The keyboards fail so often I keep a couple spares around. I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes.
Same for me. Cheap hardware as well; and as standard as possible. I've never had any pure electronic failure (graphics card included)! I would just put fan & power supply before keyboard, and add mouse to the list just below keyboard. My keyboards do not break as often as yours: you must be a brutal guy ;-) An exception is for wireless keyboards and mice, which I quickly abandoned. Denis _________________ vita es estrany spir.wikidot.com
Jan 13 2011
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
Walter Bright Wrote:
 
 I buy cheap, bottom of the line equipment. I don't overclock them and I make 
 sure there's plenty of airflow around the boxes.
I don't overclock any more after a weird experience I had overclocking an Athlon ages ago. It ran fine except that unzipping something always failed with a CRC error. Before that I expected that an overclocked CPU would either work or fail spectacularly. I'm not willing to risk data silently being corrupted in the background, particularly when even mid-range CPUs these days are more than enough for nearly everything.
Jan 13 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:iglsge$2evs$1 digitalmars.com...
 Andrej Mitrovic wrote:
 On 1/12/11, Jean Crystof <news news.com> wrote:
 Claiming that low end components have shorter lifespan is ridiculous.
You've never had computer equipment fail on you?
I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least:

keyboards
power supplies
hard drives
fans
monitors

I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card.

The keyboards fail so often I keep a couple spares around.

I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes.
My failure list from most to least would be this:

1. power supply / printer
2. optical drive / floppies (the disks, not the drives)
3. hard drive
4. monitor / mouse / fan

Never really had problems with anything else as far as I can remember. I had a few 3dfx cards back in the day and never had the slightest bit of trouble with any of them.

I used to go through a ton of power supplies until I finally stopped buying the cheap ones. Printers kept giving me constant trouble, but the fairly modern HP I have now seems to work ok (although the OEM software/driver is complete and utter shit, but then OEM software usually is.)
Jan 13 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 My failure list from most to least would be this:
 
 1. power supply / printer
 2. optical drive / floppies (the disks, not the drives)
 3. hard drive
 4. monitor / mouse / fan
 
 Never really had probems with anything else as far as I can remember. I had 
 a few 3dfx cards back in the day and never had the slightest bit of trouble 
 with any of them.
 
 I used to go through a ton of power supplies until I finally stopped buying 
 the cheap ones. Printers kept giving me constant trouble, but the fairly 
 modern HP I have now seems to work ok (although the OEM software/driver is 
 complete and utter shit, but then OEM software usually is.)
My printer problems ended (mostly) when I finally spent the bux and got a laser printer. The (mostly) bit is because neither Windows nor Ubuntu support an HP 2300 printer. Sigh.
Jan 13 2011
parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 14/01/11 03:53, Walter Bright wrote:
 My printer problems ended (mostly) when I finally spent the bux and got
 a laser printer. The (mostly) bit is because neither Windows nor Ubuntu
 support an HP 2300 printer. Sigh.
Now this surprises me, printing has been the most painless thing I've ever encountered - it's the one area I'd say Linux excels. In OS X or Windows if I want to access my networked printer there's at least 5 clicks involved - on Linux there was a grand total of 0 - it detected my printer and installed it with no intervention from me, I just clicked print and it worked.

Guess that's the problem with hardware though: it could have a few thousand good reviews and you could still manage to get something you run into endless issues with!

-- 
Robert
http://octarineparrot.com/
Jan 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.01.2011 16:48, schrieb Robert Clipsham:
 On 14/01/11 03:53, Walter Bright wrote:
 My printer problems ended (mostly) when I finally spent the bux and got
 a laser printer. The (mostly) bit is because neither Windows nor Ubuntu
 support an HP 2300 printer. Sigh.
Now this surprises me, printing has been the most painless thing I've ever encountered - it's the one area I'd say Linux excels. In OS X or Windows if I want to access my networked printer there's at least 5 clicks involved - on Linux there was a grand total of 0 - it detected my printer and installed it with no intervention from me, I just clicked print and it worked. Guess that's the problem with hardware though, it could have a few thousand good reviews and you could still manage to get something you run into endless issues with!
This really depends on your printer: some have good Linux support and some don't. Postscript support (mostly seen in better laser printers) is probably most painless (just supply a PPD - if CUPS doesn't have one for your printer anyway - and you're done). Many newer inkjet printers also have Linux support, but many need a proprietary library from the vendor to work. A few years ago it was a lot worse, though, especially with cheap inkjets. Many supported only GDI printing, which naturally is best supported on Windows (GDI is a Windows interface).
Jan 14 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets. 
 Many supported only GDI printing which naturally is best supported on 
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Jan 14 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.01.2011 20:50, schrieb Walter Bright:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300? On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3. Generally http://www.openprinting.org/printers is a really good page to see if a printer has Linux-support and where to get drivers etc. Cheers, - Daniel
Jan 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 Am 14.01.2011 20:50, schrieb Walter Bright:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300?
Yup. Do you want a picture? <g>
 On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that 
 printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3.
Nyuk nyuk nyuk
 Generally http://www.openprinting.org/printers is a really good page to 
 see if a printer has Linux-support and where to get drivers etc.
Jan 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.01.2011 22:54, schrieb Walter Bright:
 Daniel Gibson wrote:
 Am 14.01.2011 20:50, schrieb Walter Bright:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300?
Yup. Do you want a picture? <g>
No, I believe you ;)
 On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that
 printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3.
Nyuk nyuk nyuk
The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 supports your printer - I don't know about your version, because Ubuntu doesn't list it anymore, but I'd be surprised if it didn't support it as well ;) hplip's docs say that the printer is supported when connected via USB or "Network or JetDirect" (but not the parallel port - but probably the printer doesn't have one). It may be that Ubuntu doesn't install hplip (HP's driver for all kinds of printers - including the LaserJet 2300 ;)) by default. That could be fixed by "sudo apt-get install hplip hpijs-ppds" and then trying to add the printer again (if there's no Voodoo to do that automatically). Cheers, - Daniel
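For the record, the suggestion above boils down to something like this on a Debian/Ubuntu box (a sketch only; package availability varies by release, and `hp-setup -i` runs hplip's text-mode printer detection):

```shell
# Install HP's driver stack and the extra PPDs (Debian/Ubuntu package names)
sudo apt-get update
sudo apt-get install hplip hpijs-ppds

# Let hplip detect and configure the printer interactively
# (-i forces the text-mode interface; works for USB, parallel, or network)
sudo hp-setup -i

# Verify that CUPS now knows about the printer
lpstat -p
```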
Jan 14 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer 
 - I don't know about your version,
8.10
 because Ubuntu doesn't list it 
 anymore, but I'd be surprised if it didn't support it as well ;)
 hplip's docs say that the printer is supported when connected via USB or 
 "Network or JetDirect" (but not Parallel port, but probably the printer 
 doesn't have one).
The HP 2300D is parallel port. (The "D" stands for duplex, an extra cost option on the 2300.)
 It may be that Ubuntu doesn't install hplip (HPs driver for all kinds of 
 printers - including the LaserJet 2300 ;)) by default.
 That could be fixed by
 "sudo apt-get install hplip hpijs-ppds"
 and then trying to add the printer again (if there's no Voodoo to do 
 that automatically).
How I installed the printer is I just, more or less at random, said it was a different HP laserjet, and then it worked. The duplex doesn't work, though, nor any of the other variety of special features it has.
Jan 14 2011
next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 15.01.2011 01:23, schrieb Walter Bright:
 Daniel Gibson wrote:
 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I
 don't know about your version,
8.10
 because Ubuntu doesn't list it anymore, but I'd be surprised if it didn't
 support it as well ;)
 hplips docs say that the printer is supported when connected via USB or
 "Network or JetDirect" (but not Parallel port, but probably the printer
 doesn't have one).
The HP 2300D is parallel port. (The "D" stands for duplex, an extra cost option on the 2300.)
HP says[1] it has also got USB; if their docs are correct for your version (and the USB port is just somehow hidden) it may be worth a try :) Also, http://www.openprinting.org/printer/HP/HP-LaserJet_2300 links (under "Postscript") a PPD that supports duplex. CUPS supports adding a printer and providing a custom PPD. (In my experience Postscript printers do support the parallel port; you can even just cat a PS file to /dev/lp0 if it has the right format.) However, *maybe* performance (especially for pictures) is not as great as with HP's own PCL. As a bonus: there are generic Postscript drivers for Windows as well, so with that PPD your duplex may even work on Windows :)
 It may be that Ubuntu doesn't install hplip (HPs driver for all kinds of
 printers - including the LaserJet 2300 ;)) by default.
 That could be fixed by
 "sudo apt-get install hplip hpijs-ppds"
 and then trying to add the printer again (if there's no Voodoo to do that
 automatically).
How I installed the printer is I just, more or less at random, said it was a different HP laserjet, and then it worked. The duplex doesn't work, though, nor any of the other variety of special features it has.
Maybe CUPS didn't list the LJ2300 as supported because (according to that outdated list I found in the Ubuntu 8.04 driver) it isn't supported at the parport. [1] http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/18972-236251-236263-14638-f51-238800-238808-238809.html
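Both suggestions above (raw PostScript straight to the parallel port, and registering the printer in CUPS with the downloaded PPD) can be sketched roughly like this; the device path `/dev/lp0`, the queue name `LJ2300`, and the PPD filename are illustrative assumptions, not exact values:

```shell
# Raw test: send a PostScript file straight to the parallel port.
# Only works for printers with a built-in PostScript interpreter.
cat testpage.ps > /dev/lp0

# Or register the printer in CUPS with the duplex-capable PPD
# (-p queue name, -v device URI, -P PPD file, -E enable the queue)
sudo lpadmin -p LJ2300 -E \
    -v parallel:/dev/lp0 \
    -P HP-LaserJet_2300-Postscript.ppd
```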
Jan 14 2011
prev sibling parent reply Jean Crystof <news news.com> writes:
Walter Bright Wrote:

 Daniel Gibson wrote:
 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer 
 - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is if Walter could please try a Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu. Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614

The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:

openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode setting
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes

and so on. It makes no sense to discuss "Linux". It's constantly changing.
Jan 14 2011
next sibling parent Jean Crystof <news news.com> writes:
Jean Crystof Wrote:

 Walter Bright Wrote:
 
 Daniel Gibson wrote:
 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer 
 - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is if Walter could please try a Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu. Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614

The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:

openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode setting
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes

and so on. It makes no sense to discuss "Linux". It's constantly changing.
I tried to find the package lists for Ubuntu 8.10 (intrepid), but they're not online anymore. Using it is *totally* crazy. Do apt-get update and apt-get upgrade even work anymore? The Ubuntu idea was to provide a simple graphical tool for dist-upgrades. If I had designed it, I wouldn't even let you log in before upgrading. No wonder DMD binaries depended on legacy libraries some time ago. The compiler author should be using VAX or something similar like all dinosaurs do.
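The supported upgrade path that the graphical tool wraps looks roughly like this from the command line (a sketch; `do-release-upgrade` ships in Ubuntu's update-manager-core package):

```shell
# Refresh package lists and apply ordinary updates first
sudo apt-get update
sudo apt-get upgrade

# Then jump to the next supported Ubuntu release
# (this is what the graphical dist-upgrade tool drives under the hood)
sudo do-release-upgrade
```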
Jan 14 2011
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/14/11 6:48 PM, Jean Crystof wrote:
 Walter Bright Wrote:

 Daniel Gibson wrote:
 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer
 - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is if Walter could please try a Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu. Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614

The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:

openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode setting
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes

and so on. It makes no sense to discuss "Linux". It's constantly changing.
The darndest thing is I have Ubuntu 8.10 on my laptop with KDE 3.5 on top... and love it. But this all is exciting - I think I'll make the switch, particularly now that I have a working backup solution. Andrei
Jan 14 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jean Crystof wrote:
 The second thing is upgrading the Ubuntu. Telling how Linux sucks by using
Ubuntu 8.10
To be fair, it was about the process of upgrading in place to Ubuntu 8.10 that sucked. It broke everything, and made me leery of upgrading again.
Jan 14 2011
parent Gour <gour atmarama.net> writes:
On Fri, 14 Jan 2011 22:40:11 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

 To be fair, it was about the process of upgrading in place to Ubuntu
 8.10 that sucked. It broke everything, and made me leery of upgrading
 again.
<shameful plugin> /me likes Archlinux - all the hardware works and there is no 'upgrade' process like in Ubuntu 'cause it is a 'rolling release', iow. one can update whenever and as often as one desires. Moreover, it's very simple to build from source if one wants/needs. </shameful plugin> Sincerely, Gour

-- 
Gour | Hlapicina, Croatia | GPG key: CDBF17CA
----------------------------------------------------------------
Jan 14 2011
prev sibling parent retard <re tard.com.invalid> writes:
Fri, 14 Jan 2011 21:02:38 +0100, Daniel Gibson wrote:

 Am 14.01.2011 20:50, schrieb Walter Bright:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300? On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3. Generally http://www.openprinting.org/printers is a really good page to see if a printer has Linux-support and where to get drivers etc.
I'm not sure if Walter's Ubuntu version already has this, but the latest Ubuntus automatically install all CUPS supported (USB) printers. I haven't tried this autodetection with parallel or network printers. The "easiest" way to configure CUPS is via the CUPS web interface ( http://localhost:631 ). In some early Ubuntu versions the printer configuration was broken. You had to add yourself to the lpadmin group and whatnot.

My experiences with printers are:

Linux (Ubuntu):
1. Plug in the cable
2. Print

Mac OS X:
1. Plug in the cable
2. Print

Windows:
1. Plug in the cable
2. Driver wizard appears, fails to install
3. Insert driver cd (preferably download the latest drivers from the internet)
4. Save your work
5. Reboot
6. Close the HP/Canon/whatever ad dialog
7. Restart the programs and load your work
8. Print
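The lpadmin-group dance mentioned above looks roughly like this (a sketch; `lpadmin` is the default CUPS admin group name on Debian/Ubuntu, and `myprinter` is a placeholder queue name):

```shell
# Allow your user to administer printers via the CUPS web interface
# (log out and back in for the new group to take effect)
sudo usermod -aG lpadmin "$USER"

# The web interface then lives at http://localhost:631
# List configured queues and the default destination:
lpstat -p -d

# Print a file from the shell to a named queue
lp -d myprinter document.pdf
```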
Jan 14 2011
prev sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-01-14 at 11:50 -0800, Walter Bright wrote:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well.
Turns out I probably have the only orphaned HP LJ model.
I have an HP LJ 4000N and whilst it is perfectly functional, printing systems have decided it is too old to work with properly -- this is a Windows, Linux and Mac OS X problem. Backward compatibility is a three-edged sword.

-- 
Russel.
================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 14 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.01.2011 21:16, schrieb Russel Winder:
 On Fri, 2011-01-14 at 11:50 -0800, Walter Bright wrote:
 Daniel Gibson wrote:
 But a few years ago it was a lot worse, especially with cheap inkjets.
 Many supported only GDI printing which naturally is best supported on
 Windows (GDI is a windows interface).
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
I have an HP LJ 4000N and whilst it is perfectly functional, printing systems have decided it is too old to work with properly -- this is a Windows, Linux and Mac OS X problem. Backward compatibility is a three-edged sword.
hplip on Linux should support it when connected via Parallel Port (but, according to a maybe outdated list, not USB or Network/Jetdirect). See also http://www.openprinting.org/printer/HP/HP-LaserJet_4000 :-)
Jan 14 2011
parent Russel Winder <russel russel.org.uk> writes:
On Sat, 2011-01-15 at 00:26 +0100, Daniel Gibson wrote:
[ . . . ]
 hplip on Linux should support it when connected via Parallel Port (but,
 according to a maybe outdated list, not USB or Network/Jetdirect). See also
 http://www.openprinting.org/printer/HP/HP-LaserJet_4000 :-)
The problem is not the spooling per se; Linux, Windows and Mac OS X are all happy to talk to JetDirect. The problem is that the printer only has 7MB of memory and no disc, and operating systems seem now to think that printers have gigabytes of memory and make no allowances. The worst of it is though that the LJ 4000 has quite an old version of PostScript compared to that in use today, and all the applications and/or drivers that render to PostScript are not willing (or able) to generate code for such an old PostScript interpreter. Together this leads to a huge number of stack fails on print jobs.

-- 
Russel.
================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 15 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Thu, 13 Jan 2011 19:04:59 -0500, Nick Sabalausky wrote:

 My failure list from most to least would be this:
 
 1. power supply / printer
 2. optical drive / floppies (the disks, not the drives)
 3. hard drive
 4. monitor / mouse / fan
My list is pretty much the same. I bought a (Toshiba IIRC) dot matrix printer (the price was insane) in the 1980s. It STILL works fine when printing ASCII text, but it's "a bit" noisy and slow. Another thing is, after upgrading from DOS, I haven't found any drivers for printing graphics. On DOS, only some programs had specially crafted drivers for this printer and some had drivers for some other proprietary protocol the printer "emulates" :-)

My second printer was some Canon LBP in the early 90s. STILL works without any problems (still connected to my Ubuntu CUPS server), but it's also relatively slow and physically huge. I used to replace the toner and drums: toner every ~2 years (prints 1500-3000 pages of 5% text) and drum every 5-6 years. We bought it used from a company. It had been repaired once by the official Canon service. After that, almost 20 years without repair.

I also bought a faster (USB!) laser printer from Brother a couple of years ago. I've replaced the drum once and replaced the toner three times with some cheapo 3rd party stuff. It was a bit risky to buy a set of 10 toner kits along with the printer (even laser printers are so cheap now), but it was an especially cheap offer and we thought the spare part prices would go up anyway. The amortized printing costs are probably less than 3 cents per page.

Now, I've also bought Canon, HP, and Epson inkjets. What can I say.. The printers are cheap. The ink is expensive. They're slow, and the result looks like shit (not very photo-realistic) compared to the online printing services. AND I've "broken" about 8 of them in 15 years. It's way too expensive to start buying spare parts (e.g. when the dry ink gets stuck in the ink "tray" in Canon printers). Nowadays I print photos using some online service. The inkjet printer quality still sucks IMO. Don't buy them.

PSUs: Never ever buy the cheap models. There's a list of bad manufacturers on the net. They make awful shit. The biggest problem is, if the PSU breaks, it might also break other parts, which makes all PSU failures really expensive. I've bought <ad>Seasonic, Fortron, and Corsair</ad> PSUs since the late 1990s. They work perfectly. If some part fails, it's the PSU fan (or sometimes the fuse when switching the PSU on causes a surge). Fuses are cheap. Fans last much longer if you replace the engine oil every 2-4 years. Scrape off the sticker in the center of the fan and pour in appropriate oil. I'm not kidding! I've got one 300W PSU from 1998 and it still works and the fan is almost as quiet as if it was new.

Optical drives: Number 1 reason for breakage, I forget to close the tray and kick it off! Currently I don't use internal optical drives anymore. There's one external dvd burner. I rarely use it. And it's safe from my feet on the table :D

Hard drives: these always fail, sooner or later. There's nothing you can do except RAID and backups (labs.google.com/papers/disk_failures.pdf). I've successfully terminated all (except those in use) hard drives so far by using them normally.

Monitors: The CRTs used to break every 3-5 years. Even the high quality Sony monitors :-| I've used TFT panels since 2003. The inverter of the first 14" TFT broke after 5 years of use. Three others are still working, after 1-6 years of use.

Mice: I've always bought Logitech mice. NEVER had any failures. The current one is MX 510 (USB). Previous ones used the COM port. The bottom of the MX510 shows signs of hardcore use, but the internal parts haven't fallen off yet and the LED "eye" works :-D

Fans: If you want reliability, buy fans with ball bearings. They make more noise than sleeve bearings. I don't believe in expensive high quality fans. Sure, there are differences in the airflow and noise levels, but the max reliability won't be any better. The normal PC stores don't sell any fans with industrial quality bearings. Like I said before, remember to replace the oil http://www.dansdata.com/fanmaint.htm -- I still have high quality fans from the 1980s in 24/7 use. The only problem is, I couldn't anticipate how much the power consumption grows. The old ones are 40-80 mm fans. Now (at least gaming) computers have 120mm or 140mm or even bigger fans.
Jan 14 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/14/11, retard <re tard.com.invalid> wrote:
 Like I said before,
 remember to replace the oil http://www.dansdata.com/fanmaint.htm
I've never thought of this. I did have a couple of failed fans over the years but I always had a bunch of spares from the older equipment which I've replaced. Still, that is a cool tip, thanks! And yes, avoid cheap PSU's or at least get one from a good manufacturer. It's also important to have a PSU that can actually power your PC.
Jan 14 2011
prev sibling next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 14.01.2011 15:21, schrieb retard:

 PSUs: Never ever buy the cheap models.
Yup, one should never cheap out on PSUs. Also cheap PSUs usually are less efficient.
 Optical drives: Number 1 reason for breakage, I forget to close the tray
 and kick it off! Currently I don't use internal optical drives anymore.
 There's one external dvd burner. I rarely use it. And it's safe from my
 feet on the table :D
If you don't trash them yourself (:P) optical drives sometimes fail because a rubber band in it that rotates the disk (or something) becomes brittle or worn after some years. These can usually be replaced.
 Hard drives: these always fail, sooner or later. There's nothing you can
 do except RAID and backups (labs.google.com/papers/disk_failures.pdf).
 I've successfully terminated all (except those in use) hard drives so far
 by using them normally.
Not kicking/hitting your PC and cooling them appropriately helps, but in the end modern HDDs die anyway. I've had older (4GB) HDDs run for over 10 years, much of the time even 24/7, without failing.
 Mice: I've always bought Logitech mice. NEVER had any failures. The
 current one is MX 510 (USB). Previous ones used the COM port. The bottom
 of the MX510 shows signs of hardcore use, but the internal parts haven't
 fallen off yet and the LED "eye" works :-D
I often had mouse buttons failing in logitech mice. Sometimes I removed the corresponding switches in the mouse and soldered one from another old cheap mouse into it, which fixed it until it broke again.. Now I'm using microsoft mice and they seem more reliable so far.
 Fans: If you want reliability, buy fans with ball bearings. They make
 more noise than sleeve bearings. I don't believe in expensive high
 quality fans. Sure, there are differences in the airflow and noise
 levels, but the max reliability won't be any better. The normal PC stores
 don't sell any fans with industrial quality bearings. Like I said before,
 remember to replace the oil http://www.dansdata.com/fanmaint.htm -- I
 still have high quality fans from the 1980s in 24/7 use. The only problem
 is, I couldn't anticipate how much the power consumption grows. The old
 ones are 40-80 mm fans. Now (at least gaming) computers have 120mm or
 140mm or even bigger fans.
Thanks for the tip :-)
Jan 14 2011
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Thanks for the fan info. I'm going to go oil my fans!
Jan 14 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:igpm5t$26so$1 digitalmars.com...
 Now, I've also bought Canon, HP, and Epson inkjets. What can I say.. The
 printers are cheap. The ink is expensive. They're slow, and result looks
 like shit (not very photo-realistic) compared to the online printing
 services. AND I've "broken" about 8 of them in 15 years. It's way too
 expensive to start buying spare parts (e.g. when the dry ink gets stuck
 in the ink "tray" in Canon printers). Nowadays I print photos using some
 online service. The inkjet printer quality still sucks IMO. Don't buy
 them.
A long time ago we got, for free, an old Okidata printer that some school or company or something was getting rid of. It needed a new, umm, something really really expensive (I forget offhand), so there was a big black streak across each page. And it didn't do color. But I absolutely loved that printer. Aside from the black streak, everything about it worked flawlessly every time. *Never* jammed once, blazing fast, good quality. Used that thing for years.

Eventually we did need something that could print without that streak and we went through a ton of inkjets. Every one of them was total shit until about 2 or 3 years ago we got an HP Photosmart C4200 printer/scanner combo, which isn't as good as the old Okidata, but it's the only inkjet I've ever used that I'd consider "not shit". The software/drivers for it, though, still fall squarely into the "pure shit" category. Oh well...Maybe there's Linux drivers for it that are better...
 PSUs: Never ever buy the cheap models. There's a list of bad
 manufacturers in the net. They make awful shit.
Another problem is that, as places like Sharky Extreme and Tom's Hardware found out while testing, it seems to be common practice for PSU manufacturers to outright lie about the wattage.
 Optical drives: Number 1 reason for breakage, I forget to close the tray
 and kick it off!
Very much related to that: I truly, truly *hate* all software that decides it makes sense to eject the tray directly. And even worse: OSes not having a universal setting for "Disable *all* software-triggered ejects". That option should be standard and default. I've seriously tried to learn how to make Windows rootkits *just* so I could hook into the right dll/function and disable it system-wide once-and-for-all. (Never actually got anywhere with it though, and eventually just gave up.)
 Hard drives: these always fail, sooner or later. There's nothing you can
 do except RAID and backups
And SMART monitors: I've had a total of two HDDs fail, and in both cases I really lucked out. The first one was in my Mac, but it was after I was already getting completely fed up with OSX and Apple, so I didn't really care much - I was mostly back on Windows again by that point. The second failure just happened to be the least important of the three HDDs in my system. I was still pretty upset about it though, so it was a big wakeup call: I *will not* have a primary system anymore that doesn't have a SMART monitoring program, with temperature readouts, always running. And yes, it can't always predict a failure, but sometimes it can, so IMO there's no good reason not to have it. That's actually one of the things I don't like about Linux: nothing like that seems to exist for it. Sure, there's a cmd line program you can poll, but that doesn't remotely cut it.
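(The command-line program being alluded to is presumably smartmontools' `smartctl`. A minimal polling sketch, assuming `smartctl` is installed; the attribute line below is a hypothetical sample standing in for live output from a real drive:)

```shell
#!/bin/sh
# Sketch: read a drive's temperature from SMART attribute output.
# On a real system you would capture live output, e.g.:
#   smart_out=$(smartctl -A /dev/sda)
# Here one hypothetical attribute line stands in for it.
smart_out='194 Temperature_Celsius 0x0022 116 100 000 Old_age Always - 31'

# Attribute 194 is the drive temperature; the raw value is the last field.
temp=$(printf '%s\n' "$smart_out" | awk '$1 == 194 { print $NF }')
echo "Drive temperature: ${temp} C"
```

Wrapping something like this in a loop (or a cron job) gets you a crude always-running monitor, though it's no substitute for a proper tray applet.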
 Monitors: The CRTs used to break every 3-5 years. Even the high quality
 Sony monitors :-| I've used TFT panels since 2003. The inverter of the
 first 14" TFT broke after 5 years of use. Three others are still working,
 after 1-6 years of use.
I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution), and for a long time I've always had either a dual-monitor setup or dual systems with one monitor on each, so I've had a lot of monitors. But I've only ever had *one* CRT go bad, and I definitely use them for more than 5 years. Also, FWIW, I'm convinced that Sony is *not* as good as people generally think. Maybe they were in the 70's or 80's, I don't know, but they're frequently no better than average. It's common for their high end DVD players to have problems or limitations that the cheap bargain-brands (like CyberHome) don't have. I had an expensive portable Sony CD player and the buttons quickly crapped out rendering it unusable (not that I care anymore since I have a Toshiba Gigabeat F with the Rockbox firmware - iPod be damned). The PS2 was the reigning champ for "most unreliable video game hardware in history" until the 360 stole the title by a landslide. And I've *never* found a pair of Sony headphones that sounded even *remotely* as good as a pair from Koss of comparable price and model. Sony is the Buick/Cadillac/Oldsmobile of consumer electronics, *not* the Lexus/Benz as most people seem to think.
 Mice: I've always bought Logitech mice. NEVER had any failures. The
 current one is MX 510 (USB). Previous ones used the COM port. The bottom
 of the MX510 shows signs of hardcore use, but the internal parts haven't
 fallen off yet and the LED "eye" works :-D
MS and Logitech mice are always the best. I've never come across any other brand that put out anything but garbage (that does include Apple, except that in Apple's case it's because of piss-poor design rather than the piss-poor engineering of all the other non-MS/Logitech brands). I've been using this Logitech Trackball for probably over five years and I absolutely love it: http://www.amazon.com/Logitech-Trackman-Wheel-Optical-Silver/dp/B00005NIMJ/ In fact, I have two of them. The older one has been starting to get a bad connection between the cord and the trackball, but that's probably my fault. And heck, the MS mouse my mom uses has a left button that's been acting up, so nothing's perfect no matter what brand. But MS/Logitech are definitely still worlds ahead of anyone else. (Which is kind of weird because, along with keyboards, mice are the *only* hardware I trust MS with. Every other piece of MS hardware either has reliability problems or, in the case of all their game controllers going all the way back to the Sidewinders in the pre-XBox days, a completely worthless D-Pad.)
Jan 15 2011
next sibling parent retard <re tard.com.invalid> writes:
Sat, 15 Jan 2011 03:23:41 -0500, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 PSUs: Never ever buy the cheap models. There's a list of bad
 manufacturers in the net. They make awful shit.
Another problem is that, as places like Sharky Extreme and Tom's Hardware found out while testing, it seems to be common practice for PSU manufacturers to outright lie about the wattage.
That's true. But it's also true that PSU efficiency and power have improved drastically. And their quality overall. In the 1990s it was pretty common that computer stores mostly sold those shady brands with a more or less lethal design. There are lots of reliable brands now. If you're not into gaming, it hardly matters which (good) PSU you buy. They all provide 300+ Watts and your system might consume 70-200 Watts, even under full load.
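(As a rough back-of-the-envelope check: the wall-socket draw is just the DC load divided by the PSU's efficiency. The figures below are assumptions, with 85% being roughly what an 80 Plus Bronze unit manages at typical loads:)

```shell
#!/bin/sh
# Sketch: wall draw = DC load / PSU efficiency (both figures assumed).
dc_load=150        # Watts the components actually draw
efficiency=0.85    # assumed, roughly 80 Plus Bronze territory

wall=$(awk -v l="$dc_load" -v e="$efficiency" 'BEGIN { printf "%.0f", l / e }')
echo "Wall draw: ${wall} W"
```

So a system loading the PSU at 150 W pulls somewhere near 176 W from the socket, well within what any decent 300+ W unit handles.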
 Monitors: The CRTs used to break every 3-5 years. Even the high quality
 Sony monitors :-| I've used TFT panels since 2003. The inverter of the
 first 14" TFT broke after 5 years of use. Three others are still
 working, after 1-6 years of use.
I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution), and for a long time I've always had either a dual-monitor setup or dual systems with one monitor on each, so I've had a lot of monitors. But I've only ever had *one* CRT go bad, and I definitely use them for more than 5 years. Also, FWIW, I'm convinced that Sony is *not* as good as people generally think. Maybe they were in the 70's or 80's, I don't know, but they're frequently no better than average.
I've disassembled a couple of CRT monitors. The Sony monitors have had aluminium-cased "modules" inside them. So replacing these should be relatively easy. They also had detachable wires between these units. Cheaper monitors have three circuit boards (one for the front panel, one in the back of the tube and one in the bottom). It's usually the board in the bottom of the monitor that breaks, which means that you need to cut all wires to remove it in cheaper monitors. It's just this high level design that I like in Sony's monitors. Probably other high quality brands like Eizo also do this. Sony may also use bad quality discrete components like capacitors and ICs. I can't say anything about that.
Jan 15 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only being
 able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. Andrei
Jan 15 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:igt2pl$2u6e$1 digitalmars.com...
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only being
 able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.
Jan 15 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 15 January 2011 19:11:26 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...
 
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only
 being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.
Why would you _want_ more than one resolution? What's the use case? I'd expect that you'd want the highest resolution that you could get and be done with it. - Jonathan M Davis
Jan 15 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 16.01.2011 04:33, schrieb Jonathan M Davis:
 On Saturday 15 January 2011 19:11:26 Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...

 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only
 being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.
Why would you _want_ more than one resolution? What's the use case? I'd expect that you'd want the highest resolution that you could get and be done with it. - Jonathan M Davis
Maybe for games (if your PC isn't fast enough for full resolution or the game doesn't support it).. but that is no problem at all: flatscreens can interpolate other resolutions and while the picture may not be good enough for text (like when programming) and stuff it *is* good enough for games on decent flatscreens. For non-games-usage I never had the urge to change the resolution of my flatscreens. And I really prefer them to any CRT I've ever used. OTOH when he has a good CRT (high resolution, good refresh rate) there may be little reason to replace it, as long as it's working.. apart from the high power consumption and the size maybe. Cheers, - Daniel
Jan 15 2011
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 OTOH when he has a good CRT (high resolution, good refresh rate) there 
 may be little reason to replace it, as long as it's working.. apart from 
 the high power consumption and the size maybe.
The latter two issues loomed large for me. I was very glad to upgrade to an LCD.
Jan 15 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Daniel Gibson" <metalcaedes gmail.com> wrote in message 
news:igtq08$2m1c$1 digitalmars.com...
 Am 16.01.2011 04:33, schrieb Jonathan M Davis:
 On Saturday 15 January 2011 19:11:26 Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...

 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only
 being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.
Why would you _want_ more than one resolution? What's the use case? I'd expect that you'd want the highest resolution that you could get and be done with it. - Jonathan M Davis
Maybe for games (if your PC isn't fast enough for full resolution or the game doesn't support it).. but that is no problem at all: flatscreens can interpolate other resolutions and while the picture may not be good enough for text (like when programming) and stuff it *is* good enough for games on decent flatscreens.
There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD).
 For non-games-usage I never had the urge to change the resolution of my 
 flatscreens. And I really prefer them to any CRT I've ever used.
For non-games, just off-the-top-of-my-head: Bumping up to a higher resolution can be good when dealing with images, or whenever you're doing anything that could use more screen real-estate at the cost of smaller UI elements. And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200). Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution. Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price. And if I'm doing some work on the computer, and it *is* set at a sensible resolution that works for both the given monitor and the task at hand, I've never noticed a real improvement with LCD versus CRT. Yea, it is a *little* bit better, but I've never noticed any difference while actually *doing* anything on a computer: only when I stop and actually look for differences. Also, it can be good when mirroring the display to TV-out or, better yet, using the "cinema mode" where any video-playback is sent fullscreen to the TV (which I'll often do), because those things tend to not work very well when the monitor isn't reduced to the same resolution as the TV.
 OTOH when he has a good CRT (high resolution, good refresh rate) there may 
 be little reason to replace it, as long as it's working.. apart from the 
 high power consumption and the size maybe.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
Jan 15 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:igttbt$16hu$1 digitalmars.com...
 OTOH when he has a good CRT (high resolution, good refresh rate) there 
 may be little reason to replace it, as long as it's working.. apart from 
 the high power consumption and the size maybe.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
As for size, well, I have enough space, so at least for me that's a non-issue.
Jan 15 2011
parent Adam Ruppe <destructionator gmail.com> writes:
I stuck with my CRT for a long time. What I really liked about it
was the bright colors. I've never seen an LCD match that.

But, my CRT started to give out. It'd go to a bright line in the
middle and darkness everywhere else at random. It started doing
it just every few hours, then it got to the point where it'd do
it every 20 minutes or so.

I found if I give it a nice pound on the side, it'd go back to
normal for a while. I was content with that for months.

... but the others living with me weren't. *WHACK* OH MY GOD
JUST BUY A NEW ONE ALREADY!


So I gave in and looked for a replacement CRT with the same specs.
But couldn't find one. I gave in and bought an LCD in the same
price range (~$150) with the same resolution (I liked what I had!)

Weighed less, left room on the desk for my keyboard, and best of all,
I haven't had to hit it yet. But colors haven't looked quite the same
since and VGA text mode just looks weird. Alas.
Jan 16 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 And CRTs are more likely to go up to really
 high resolutions than non-CRTs. For instance, 1600x1200 is common on even 
 the low-end CRT monitors (and that was true even *before* televisions 
 started going HD - which is *still* lower-rez than 1600x1200).
 
 Yea, you can get super high resolution non-CRTs, but they're much more 
 expensive.
I have 1900x1200 on LCD, and I think it was around $325. It's a Hanns-G thing, from Amazon. Of course, I don't use it for games. I got thoroughly burned out on that when I had a job in college developing/testing them.
Jan 15 2011
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
For games? I just switch to software rendering. I get almost the same
quality as a CRT on low resolutions. It's still not perfect, but it's
close.

Soo.. what are you playing that needs low resolutions and high
framerates, Nick? Quake? :D
Jan 15 2011
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Sat, 15 Jan 2011 23:47:09 -0500, Nick Sabalausky wrote:

 Bumping up to a higher resolution can be good when dealing with images,
 or whenever you're doing anything that could use more screen real-estate
 at the cost of smaller UI elements. And CRTs are more likely to go up to
 really high resolutions than non-CRTs. For instance, 1600x1200 is common
 on even the low-end CRT monitors (and that was true even *before*
 televisions started going HD - which is *still* lower-rez than
 1600x1200).
The standard resolution for new flat panels has been 1920x1080 or 1920x1200 for a long time now. The panel size has slowly improved from 12-14" to 21.5" and 24", the price has gone down to about $110-120. Many of the applications have been tuned for 1080p. When I abandoned CRTs, the most common size was 17" or 19". Those monitors indeed supported resolutions up to 1600x1200 or more. However the best resolution was about 1024x768 or 1280x1024 for 17" monitors and 1280x1024 or a step up for 19" monitors. I also had one 22" or 23" Sony monitor which had the optimal resolution of 1600x1200 or at most one step bigger. It's much less than what the low-end models offer now. It's hard to believe you're using anything larger than 1920x1200 because the legacy graphics cards don't support very high resolutions, especially via DVI. For example I recently noticed a top of the line Geforce 6 card only supports resolutions up to 2048x1536 85 Hz. Guess how it works with a 30" Cinema display HD  2560x1600. Another thing is subpixel antialiasing. You can't really do it without a TFT panel and digital video output.
 Yea, you can get super high resolution non-CRTs, but they're much more
 expensive. And even then, you lose the ability to do any real desktop
 work at a more typical resolution. Which is bad because for many things
 I do want to limit my resolution so the UI isn't overly-small. And yea,
 there are certian things you can do to scale up the UI, but I've never
 seen an OS, Win/Lin/Mac, that actually handled that sort of thing
 reasonably well. So CRTs give you all that flexibility at a sensible
 price.
You mean DPI settings?
 Also, it can be good when mirroring the display to TV-out or, better
 yet, using the "cinema mode" where any video-playback is sent fullscreen
 to the TV (which I'll often do), because those things tend to not work
 very well when the monitor isn't reduced to the same resolution as the
 TV.
But my TV happily accepts 1920x1080? Sending the same digital signal to both works fine here. YMMV
 OTOH when he has a good CRT (high resolution, good refresh rate) there
 may be little reason to replace it, as long as it's working.. apart
 from the high power consumption and the size maybe.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
How much power do the CRTs consume? The max power consumption for LED powered panels has gone down considerably and you never use their max brightness. Typical power consumption of a modern 21.5" panel might stay between 20 and 30 Watts when you're just typing text.
Jan 16 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I need to get a better LCD/LED display one of these days. Right now
I'm sporting a Samsung 2232BW, it's a 22" screen with a native
1680x1050 resolution (16:10). But it has horrible text rendering when
antialiasing is enabled. I've tried a bunch of screen calibration
software, changing DPI settings, but nothing worked. I know it's not
my eyes to blame since antialiased fonts look perfectly fine for me on
a few laptops that I've seen.
Jan 16 2011
prev sibling next sibling parent Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-01-16 at 16:55 +0100, Andrej Mitrovic wrote:
 I need to get a better LCD/LED display one of these days. Right now
 I'm sporting a Samsung 2232BW, it's a 22" screen with a native
 1680x1050 resolution (16:10). But it has horrible text rendering when
 antialiasing is enabled. I've tried a bunch of screen calibration
 software, changing DPI settings, but nothing worked. I know it's not
 my eyes to blame since antialiased fonts look perfectly fine for me on
 a few laptops that I've seen.
It may not be the monitor, it may be the operating system setting. In particular what level of smoothing and hinting do you have set for the fonts on LCD screen? Somewhat counter-intuitively, font rendering gets worse if you have no hinting or you have full hinting. It is much better to set "slight hinting". Assuming you have sub-pixel smoothing set of course.
--
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
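(Concretely, on a fontconfig-based Linux desktop those settings can be pinned per-user. A hedged sketch; the file path is the conventional per-user location, and the `rgb` sub-pixel order is an assumption that fits most standard panels:)

```shell
#!/bin/sh
# Sketch: per-user fontconfig with slight hinting + sub-pixel smoothing.
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/fontconfig"
mkdir -p "$conf_dir"
cat > "$conf_dir/fonts.conf" <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
    <edit name="hinting"   mode="assign"><bool>true</bool></edit>
    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
    <edit name="rgba"      mode="assign"><const>rgb</const></edit>
  </match>
</fontconfig>
EOF
echo "Wrote $conf_dir/fonts.conf"
```

Swap `rgb` for `bgr` (or drop the edit entirely) if the panel's sub-pixel layout differs; applications pick the file up on restart.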
Jan 16 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/16/11, Russel Winder <russel russel.org.uk> wrote:
 It may not be the monitor, it may be the operating system setting.  In
 particular what level of smoothing and hinting do you have set for the
 fonts on LCD screen?  Somewhat counter-intuitively, font rendering gets
 worse if you have no hinting or you have full hinting.  It is much
 better to set "slight hinting".   Assuming you have sub-pixel smoothing
 set of course.
Yes, I know about those. Linux has arguably more settings to choose from, but it didn't help out. There's also RGB>BGR>GBR switches and contrast settings, and the ones you've mentioned like font hinting. It just doesn't seem to work on this screen no matter what I choose. Also, this screen has very poor yellows. When you have a solid yellow picture displayed you can actually see the color having a gradient from a darkish yellow to very bright yellow (almost white) from the top to the bottom of the screen, without even moving your head. But I bought this screen because it was rather cheap at the time and it's pretty good for games, which is what I cared about a few years ago (low input lag + no tearing, no blurry screen when moving rapidly). I've read a few forum posts around the web and it seems other people have problems with this model and antialiasing as well. I'll definitely look into buying a quality screen next time though.
Jan 16 2011
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:igv3p3$2n4k$2 digitalmars.com...
 Sat, 15 Jan 2011 23:47:09 -0500, Nick Sabalausky wrote:

 Yea, you can get super high resolution non-CRTs, but they're much more
 expensive. And even then, you lose the ability to do any real desktop
 work at a more typical resolution. Which is bad because for many things
 I do want to limit my resolution so the UI isn't overly-small. And yea,
 there are certian things you can do to scale up the UI, but I've never
 seen an OS, Win/Lin/Mac, that actually handled that sort of thing
 reasonably well. So CRTs give you all that flexibility at a sensible
 price.
You mean DPI settings?
I just mean uniformly scaled UI elements. For instance, you can usually adjust a UI's font size, but the results tend to be like selectively scaling up just the nose, mouth and hands on a picture of a human. And then parts of it still end up too small. And, especially on Linux, those sorts of settings don't always get obeyed by all software anyway.
 Also, it can be good when mirroring the display to TV-out or, better
 yet, using the "cinema mode" where any video-playback is sent fullscreen
 to the TV (which I'll often do), because those things tend to not work
 very well when the monitor isn't reduced to the same resolution as the
 TV.
But my TV happily accepts 1920x1080? Sending the same digital signal to both works fine here. YMMV
Mine's an SD...which I suppose I have to defend...Never felt a need to replace it. Never cared whether or not I could see athletes' drops of sweat or individual blades of grass. And I have a lot of SD content that's never going to magically change to HD, and that stuff looks far better on an SD set anyway than on any HD set I've ever seen no matter what fancy delay-introducing filter it had (except maybe the CRT HDTVs that don't exist anymore). Racing games, FPSes and Pikmin are the *only* things for which I have any interest in HD (since, for those, it actually matters if you're able to see small things in the distance). But then I'd be spending money (which I'm very short on), and making all my other SD content look worse, *AND* since I'm into games, it would be absolutely essential to get one without any input->display lag, which is very difficult since 1. The manufacturers only seem to care about movies and 2. From what I've seen, they never seem to actually tell you how much lag there is. So it's a big bother, costs money, and has drawbacks. Maybe someday (like when I get rich and the downsides improve) but not right now.
Jan 16 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/15/11 10:47 PM, Nick Sabalausky wrote:
 "Daniel Gibson"<metalcaedes gmail.com>  wrote in message
 news:igtq08$2m1c$1 digitalmars.com...
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.

 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often some
 of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
 For non-games-usage I never had the urge to change the resolution of my
 flatscreens. And I really prefer them to any CRT I've ever used.
For non-games, just off-the-top-of-my-head: Bumping up to a higher resolution can be good when dealing with images, or whenever you're doing anything that could use more screen real-estate at the cost of smaller UI elements. And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200). Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution. Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price.
It's odd how everybody else can put up with LCDs for all kinds of work.
 And if I'm doing some work on the computer, and it *is* set at a sensible
 resolution that works for both the given monitor and the task at hand, I've
 never noticed a real improvement with LCD versus CRT. Yea, it is a *little*
 bit better, but I've never noticed any difference while actually *doing*
 anything on a computer: only when I stop and actually look for differences.
Meanwhile, you are looking at a gamma gun shooting atcha.
 Also, it can be good when mirroring the display to TV-out or, better yet,
 using the "cinema mode" where any video-playback is sent fullscreen to the
 TV (which I'll often do), because those things tend to not work very well
 when the monitor isn't reduced to the same resolution as the TV.


 OTOH when he has a good CRT (high resolution, good refresh rate) there may
 be little reason to replace it, as long as it's working.. apart from the
 high power consumption and the size maybe.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
Absolutely. There's a CRT brand that consumes surprisingly close to an LCD. It's called "Confirmation Bias". Andrei
Jan 16 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:igvhj9$mri$1 digitalmars.com...
 On 1/15/11 10:47 PM, Nick Sabalausky wrote:
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.

 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often 
 some
 of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
Wow, you really seem to be taking a lot of this personally. First, I assume you meant "...everybody except you is using non-CRTs..." Second, how exactly is the modern-day work of graphics hardware producers and game vendors that you speak of going to affect games from more than a few years ago? What?!? You're still watching movies that were filmed in the 80's?!? Dude, you need to upgrade!!!
 It's odd how everybody else can put up with LCDs for all kinds of work.
Strawman. I never said anything remotely resembling "LCDs are unusable." What I've said is that 1. They have certain benefits that get overlooked, and 2. Why should *I* spend the money to replace something that already works fine for me?
 And if I'm doing some work on the computer, and it *is* set at a sensible
 resolution that works for both the given monitor and the task at hand, 
 I've
 never noticed a real improvement with LCD versus CRT. Yea, it is a 
 *little*
 bit better, but I've never noticed any difference while actually *doing*
 anything on a computer: only when I stop and actually look for 
 differences.
Meanwhile, you are looking at a gamma gun shooting atcha.
You can't see anything at all without electromagnetic radiation shooting into your eyeballs.
 I've actually compared the rated power consumption between CRTs and LCDs 
 of
 similar size and was actually surprised to find that there was little, if
 any, real difference at all on the sets I compared.
Absolutely. There's a CRT brand that consumes surprisingly close to an LCD. It's called "Confirmation Bias".
I'm pretty sure I did point out the limitations of my observation: "...on all the sets I compared". And it's pretty obvious I wasn't undertaking a proper extensive study. There's no need for sarcasm.
Jan 16 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/16/11 2:07 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igvhj9$mri$1 digitalmars.com...
 On 1/15/11 10:47 PM, Nick Sabalausky wrote:
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.

 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often
 some
 of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
Wow, you really seem to be taking a lot of this personally.
Not at all!
 First, I assume you meant "...everybody except you is using non-CRTs..."

 Second, how exactly is the modern-day work of graphics hardware producers and
 game vendors that you speak of going to affect games from more than a few
 years ago? What?!? You're still watching movies that were filmed in the
 80's?!? Dude, you need to upgrade!!!
You have a good point if playing vintage games is important to you.
 It's odd how everybody else can put up with LCDs for all kinds of work.
Strawman. I never said anything remotely resembling "LCDs are unusable." What I've said is that 1. They have certain benefits that get overlooked,
The benefits of CRTs are not being overlooked. They are insignificant or illusory. If they were significant, CRTs would still be in significant use. Donning a flat panel is not a display of social status. Most people need computers to get work done, and they'd use CRTs if CRTs would have them do better work. A 30" 2560x1280 monitor is sitting on my desk. (My employer bought it for me without asking; I "only" had a 26". They thought making me more productive at the cost of a monitor is simple business sense.) My productivity would be seriously impaired if I replaced either monitor with even the best CRT out there.
 and 2. Why should *I* spend the money to replace something that
 already works fine for me?
If it works for you, fine. I doubt you wouldn't be more productive with a larger monitor. But at any rate entering money as an essential part of the equation is (within reason) misguided. This is your livelihood, your core work. Save on groceries, utilities, cars, luxury... but don't "save" on what impacts your real work.
 And if I'm doing some work on the computer, and it *is* set at a sensible
 resolution that works for both the given monitor and the task at hand,
 I've
  never noticed a real improvement with LCD versus CRT. Yea, it is a
 *little*
 bit better, but I've never noticed any difference while actually *doing*
 anything on a computer: only when I stop and actually look for
 differences.
Meanwhile, you are looking at a gamma gun shooting atcha.
You can't see anything at all without electromagnetic radiation shooting into your eyeballs.
Nonono. Gamma = electrons. CRT monitors have what's literally called a gamma gun. It's aimed straight at your eyes.
 Absolutely. There's a CRT brand that consumes surprisingly close to an
 LCD. It's called "Confirmation Bias".
I'm pretty sure I did point out the limitations of my observation: "...on all the sets I compared". And it's pretty obvious I wasn't undertaking a proper extensive study. There's no need for sarcasm.
There is. It would take anyone two minutes of online research to figure that your comparison is wrong. Andrei
Jan 16 2011
parent so <so so.do> writes:
 You have a good point if playing vintage games is important to you.
He was quite clear on that, I think; this is not like natural selection. I don't know Nick, but like new-generation movies, new-generation games mostly suck. If I had to, I would definitely pick the old ones for both of them.
 The benefits of CRTs are not being overlooked. They are insignificant or  
 illusory. If they were significant, CRTs would still be in significant  
 use. Donning a flat panel is not a display of social status. Most people  
 need computers to get work done, and they'd use CRTs if CRTs would have  
 them do better work.
Well, you can't value things like that, and you know better than that. It's not just about significant versus insignificant. What about viewing the screen from only one angle? What about reading text, or should I say trying to read it? What about color reproduction and refresh rate? Yes, LCDs have their own benefits too, quite a few of them. And you forget the biggest factor, cost, for both the user and mainly the producer.
Jan 16 2011
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/16/11 1:38 PM, Andrei Alexandrescu wrote:
 On 1/15/11 10:47 PM, Nick Sabalausky wrote:
 "Daniel Gibson"<metalcaedes gmail.com> wrote in message
 news:igtq08$2m1c$1 digitalmars.com...
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.

 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often
 some
 of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot.
s/is using/is not using/ Andrei
Jan 16 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Meanwhile, you are looking at a gamma gun shooting atcha.
I always worried about that. Nobody actually found anything wrong, but still.
Jan 16 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
With CRTs I could spend a few hours in front of the PC, but after that
my eyes would get really tired and I'd have to take a break. Since I
switched to LCDs I've never had this problem anymore, I could spend a
day staring at screen if I wanted to. Of course, it's still best to
take some time off regardless of the screen type.

Anyway.. how about that Git thing, then? :D
Jan 16 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
news:mailman.652.1295210795.4748.digitalmars-d puremagic.com...
 With CRTs I could spend a few hours in front of the PC, but after that
 my eyes would get really tired and I'd have to take a break. Since I
 switched to LCDs I've never had this problem anymore, I could spend a
 day staring at screen if I wanted to. Of course, it's still best to
 take some time off regardless of the screen type.
I use a light-on-dark color scheme. Partly because I like the way it looks, but also partly because it's easier on my eyes. If I were using a scheme with blazing-white everywhere, I can imagine a CRT might be a bit harsh.
 Anyway.. how about that Git thing, then? :D
I'd been holding on to SVN for a while, but that discussion did convince me to give DVCSes an honest try (haven't gotten around to it yet though, but plan to).
Jan 16 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 With CRTs I could spend a few hours in front of the PC, but after that
 my eyes would get really tired and I'd have to take a break. Since I
 switched to LCDs I've never had this problem anymore, I could spend a
 day staring at screen if I wanted to. Of course, it's still best to
 take some time off regardless of the screen type.
I need reading glasses badly, but fortunately not for reading a screen. I never had eye fatigue problems with it. I did buy a 28" LCD for my desktop, which is so nice that I can no longer use my laptop screen for dev. :-(
 Anyway.. how about that Git thing, then? :D
We'll be moving dmd, phobos, druntime, and the docs to Github shortly. The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D.
Jan 16 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday 16 January 2011 14:07:57 Walter Bright wrote:
 Andrej Mitrovic wrote:
 Anyway.. how about that Git thing, then? :D
We'll be moving dmd, phobos, druntime, and the docs to Github shortly. The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D.
Great! That will make it _much_ easier to make check-ins while working on other stuff in parallel. That's a royal pain with svn, and while it's slightly better when using git-svn to talk to an svn repository, it isn't much better, because the git branching stuff doesn't understand that you can't reorder commits to svn, so you can't merge in branches after having committed to the svn repository. But having it be pure git fixes all of that. So, this is great news. And I don't think that there's anything wrong with being a bit slow about the transition if taking our time means that we get it right, though obviously, the sooner we transition over, the sooner we get the benefits. - Jonathan M Davis
Jan 16 2011
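[Editor's note: Jonathan's branch-then-merge point can be sketched with a throwaway repo. Everything below (repo, branch, and file names) is invented for illustration; it is a minimal sketch of the workflow that plain svn and git-svn make painful, not anything from the actual D repositories.]

```shell
# Commit on a side branch while the main branch keeps moving, then
# merge later - the pattern git-svn can't support cleanly.
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version

echo baseline > notes.txt
git add notes.txt
git commit -qm "initial import"

git checkout -qb feature                # start parallel work on a branch
echo "feature work" >> notes.txt
git commit -qam "feature: in-progress change"

git checkout -q "$main"                 # meanwhile, the main line advances
echo "trunk work" > trunk.txt
git add trunk.txt
git commit -qm "trunk: unrelated commit"

git merge -q --no-edit feature          # merge whenever the branch is ready
git log --oneline                       # three commits plus a merge commit
```

With svn, the two in-between commits would have had to land on trunk in order; here git records both lines of history and ties them together with a merge commit.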
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jonathan M Davis wrote:
 That will make it _much_ easier to make check-ins while working on other 
 stuff in parallel.
Yes. And there's the large issue that being on github simply makes contributing to the D project more appealing to a wide group of excellent developers.
Jan 16 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 17.01.2011 06:12, schrieb Walter Bright:
 Jonathan M Davis wrote:
 That will make it _much_ easier to make check-ins while working on
 other stuff in parallel.
Yes. And there's the large issue that being on github simply makes contributing to the D project more appealing to a wide group of excellent developers.
How will the licensing issue (forks of the dmd backend are only allowed with your permission) be solved?
Jan 16 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 How will the licensing issue (forks of the dmd backend are only allowed 
 with your permission) be solved?
It shouldn't be a problem as long as those forks are for the purpose of developing patches to the main branch, as is done now in svn. I view it like people downloading the source from digitalmars.com. Using the back end to develop a separate compiler, or set oneself up as a distributor of dmd, incorporate it into some other product, etc., please ask for permission. Basically, anyone using it has to agree not to sue Symantec or Digital Mars, and conform to: http://www.digitalmars.com/download/dmcpp.html
Jan 16 2011
parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 17/01/11 06:25, Walter Bright wrote:
 Daniel Gibson wrote:
 How will the licensing issue (forks of the dmd backend are only
 allowed with your permission) be solved?
It shouldn't be a problem as long as those forks are for the purpose of developing patches to the main branch, as is done now in svn. I view it like people downloading the source from digitalmars.com. Using the back end to develop a separate compiler, or set oneself up as a distributor of dmd, incorporate it into some other product, etc., please ask for permission. Basically, anyone using it has to agree not to sue Symantec or Digital Mars, and conform to: http://www.digitalmars.com/download/dmcpp.html
Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting symantec? Not that anyone reads it, it is kind of off putting to see that over a decade later though for anyone who bothers reading it :P -- Robert http://octarineparrot.com/
Jan 17 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Robert Clipsham wrote:
 Speaking of which, are you able to remove the "The Software was not 
 designed to operate after December 31, 1999" sentence at all, or does 
 that require you to mess around contacting symantec? Not that anyone 
 reads it, it is kind of off putting to see that over a decade later 
 though for anyone who bothers reading it :P
Consider it like the DNA we all still carry around for fish gills!
Jan 17 2011
next sibling parent Robert Clipsham <robert octarineparrot.com> writes:
On 17/01/11 20:29, Walter Bright wrote:
 Robert Clipsham wrote:
 Speaking of which, are you able to remove the "The Software was not
 designed to operate after December 31, 1999" sentence at all, or does
 that require you to mess around contacting symantec? Not that anyone
 reads it, it is kind of off putting to see that over a decade later
 though for anyone who bothers reading it :P
Consider it like the DNA we all still carry around for fish gills!
I don't know about you, but I take full advantage of my gills! -- Robert http://octarineparrot.com/
Jan 17 2011
prev sibling parent reply Brad Roberts <braddr slice-2.puremagic.com> writes:
On Mon, 17 Jan 2011, Walter Bright wrote:

 Robert Clipsham wrote:
 Speaking of which, are you able to remove the "The Software was not designed
 to operate after December 31, 1999" sentence at all, or does that require
 you to mess around contacting symantec? Not that anyone reads it, it is kind
 of off putting to see that over a decade later though for anyone who bothers
 reading it :P
Consider it like the DNA we all still carry around for fish gills!
In all seriousness, the backend license makes dmd look very strange. It threw the lawyers I consulted for a serious loop. At a casual glance it gives the impression of software that's massively out of date and out of touch with the real world. I know that updating it would likely be very painful, but is it just painful or impossible? Is it something that money could solve? I'd chip in to a fund to replace the license with something less... odd. Later, Brad
Jan 17 2011
parent Robert Clipsham <robert octarineparrot.com> writes:
On 18/01/11 01:09, Brad Roberts wrote:
 On Mon, 17 Jan 2011, Walter Bright wrote:

 Robert Clipsham wrote:
 Speaking of which, are you able to remove the "The Software was not designed
 to operate after December 31, 1999" sentence at all, or does that require
 you to mess around contacting symantec? Not that anyone reads it, it is kind
 of off putting to see that over a decade later though for anyone who bothers
 reading it :P
Consider it like the DNA we all still carry around for fish gills!
In all seriousness, the backend license makes dmd look very strange. It threw the lawyers I consulted for a serious loop. At a casual glance it gives the impression of software that's massively out of date and out of touch with the real world. I know that updating it would likely be very painful, but is it just painful or impossible? Is it something that money could solve? I'd chip in to a fund to replace the license with something less... odd. Later, Brad
Make that a nice open source license and I'm happy to throw some money at it too :> -- Robert http://octarineparrot.com/
Jan 18 2011
prev sibling parent Johann MacDonagh <johann.macdonagh..no spam..gmail.com> writes:
On 1/16/2011 5:07 PM, Walter Bright wrote:
 We'll be moving dmd, phobos, druntime, and the docs to Github shortly.
 The accounts are set up, it's just a matter of getting the svn
 repositories moved and figuring out how it all works.

 I know very little about git and github, but the discussions about it
 here and elsewhere online have thoroughly convinced me (and the other
 devs) that this is the right move for D.
I'm sure you've already seen this, but Pro Git is probably the best guide for git. http://progit.org/book/ Once you understand what a commit is, what a tree is, what a merge is, what a branch is, etc., it's actually really simple (Chapter 9 in Pro Git). Definitely a radical departure from svn, and a good one for D.
Jan 18 2011
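[Editor's note: as a taste of what that chapter covers, here is a minimal sketch (file name invented) showing git's object model: a commit points at a tree, and the tree points at blobs holding file contents.]

```shell
# Throwaway repo to inspect git's object model: commit -> tree -> blob.
set -e
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "module demo;" > demo.d
git add demo.d
git commit -qm "one file"

git cat-file -t HEAD                            # prints "commit"
tree=$(git cat-file -p HEAD | awk '/^tree/ { print $2 }')
git cat-file -p "$tree"                         # "100644 blob <sha>  demo.d"
blob=$(git cat-file -p "$tree" | awk '{ print $3 }')
git cat-file -p "$blob"                         # the file contents back
```

Every object is content-addressed by its SHA-1, which is why branches and merges are cheap: a branch is just a movable pointer to a commit.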
prev sibling parent retard <re tard.com.invalid> writes:
Sun, 16 Jan 2011 21:46:25 +0100, Andrej Mitrovic wrote:

 With CRTs I could spend a few hours in front of the PC, but after that
 my eyes would get really tired and I'd have to take a break. Since I
 switched to LCDs I've never had this problem anymore, I could spend a
 day staring at screen if I wanted to. Of course, it's still best to take
 some time off regardless of the screen type.
That's a good point. I've already forgotten how much eye strain the old monitors used to cause.
 
 Anyway.. how about that Git thing, then? :D
:)
Jan 16 2011
prev sibling parent retard <re tard.com.invalid> writes:
Sun, 16 Jan 2011 12:34:36 -0800, Walter Bright wrote:

 Andrei Alexandrescu wrote:
 Meanwhile, you are looking at a gamma gun shooting atcha.
I always worried about that. Nobody actually found anything wrong, but still.
It's like the cell phone studies. Whether they're causing brain tumors or not.
Jan 16 2011
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 16/01/2011 19:38, Andrei Alexandrescu wrote:
 On 1/15/11 10:47 PM, Nick Sabalausky wrote:
 "Daniel Gibson"<metalcaedes gmail.com>  wrote in message
 news:igtq08$2m1c$1 digitalmars.com...
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.

 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often
 some
 of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
Actually, not entirely true, although not for the reasons of old games. Some players of hardcore twitch FPS games (like Quake), especially professional players, still use CRTs, due to the near-zero input lag that LCDs, although having improved in that regard, are still not able to match exactly. But other than that, I really see no reason to stick with CRTs vs a good LCD, yeah. -- Bruno Medeiros - Software Engineer
Jan 28 2011
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 16/01/2011 04:47, Nick Sabalausky wrote:
 There's two reasons it's good for games:

 1. Like you indicated, to get a better framerate. Framerate is more
 important in most games than resolution.
This reason was valid at some point in time; for me it actually held me back from transitioning from CRTs to LCDs for a while. But nowadays screen resolutions have stabilized (stopped increasing in terms of DPI), and graphics cards have gained enough power that you can play nearly any game at the LCD's native resolution with max framerate, so no worries with this anymore (you may have to tone down the graphics settings a bit in some cases, but that is fine with me)
 2. For games that aren't really designed for multiple resolutions,
 particularly many 2D ones, and especially older games (which are often some
 of the best, but they look like shit on an LCD).
Well, if your LCD supports it, you have the option of not expanding the screen when the output resolution is not the native one. How good or bad that would be depends on the game, I guess. I actually did this some years ago on certain (recent) games for some time, using only 1024x768 of the 1280x1024 native resolution to get a better framerate. It's not a problem for me for old games, since most of the ones I occasionally play run in a console emulator. DOS games unfortunately were very hard to play correctly in XP in the first place (especially with SoundBlaster), so that's not a concern for me. PS: here's a nice thread for anyone looking to purchase a new LCD: http://forums.anandtech.com/showthread.php?t=39226 It explains a lot of things about LCD technology, and ranks several LCDs according to intended usage (office work, hardcore gaming, etc.). -- Bruno Medeiros - Software Engineer
Jan 28 2011
prev sibling next sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Nick Sabalausky wrote:

 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only
 being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendyness are just non-issues.
Actually, nearly all LCDs below the $600-$800 price point (TN panels) have quite inferior color reproduction compared to el cheapo CRTs, at any resolution.
Jan 16 2011
parent retard <re tard.com.invalid> writes:
Sun, 16 Jan 2011 11:56:34 +0100, Lutger Blijdestijn wrote:

 Nick Sabalausky wrote:
 
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only
 being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendyness are just non-issues.
 Actually, nearly all LCDs below the $600-$800 price point (TN panels) have quite inferior color reproduction compared to el cheapo CRTs, at any resolution.
There are also occasional special offers on IPS flat panels. The TN panels have also improved. I bought a cheap 21.5" TN panel as my second monitor last year. The viewing angles are really wide, basically about 180 degrees horizontally, a tiny bit less vertically. I couldn't see any effects of dithering noise either. It has a DVI input and a power consumption of about 30 Watts max (I run it in eco mode). Now that both the framerate and viewing-angle problems have been more or less solved for TN panels (except in pivot mode), the only remaining problem is color reproduction. But that only matters when working with photographs.
Jan 16 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/15/11 9:11 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igt2pl$2u6e$1 digitalmars.com...
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only being
 able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place),
My last CRT was a 19" from Nokia, 1600x1200, top of the line. Got it for free under the condition that I pick it up myself from a porch, which is as far as its previous owner could move it. I was seriously warned to come with a friend to take it. It weighed 86 lbs. That all worked for me: I was a poor student and happened to have a huge desk at home. I didn't think twice about buying a different monitor when I moved across the country... I wonder how much your 21" CRT weighs.
 or I
 can spend a hundred or so dollars to lose the ability to have a decent
 looking picture at more than one resolution and then say "Gee golly whiz!
 That sure is a really flat panel!!". Whoop-dee-doo. And popularity and
 trendyness are just non-issues.
I think your eyes are more important than your ability to fiddle with resolution. Besides, this whole changing the resolution thing is a consequence of using crappy software. What you want is set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. Heck this is unbelievable... I spend time on the relative merits of flat panels vs. CRTs. I'm outta here. Andrei
Jan 16 2011
next sibling parent so <so so.do> writes:
 Besides, this whole changing the resolution thing is a consequence of  
 using crappy software. What you want is set the resolution to the  
 maximum and do the rest in software. And guess what - at their maximum,  
 CRT monitors suck compared to flat panels.
This is just... wrong.
Jan 16 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:igvc0k$c3o$1 digitalmars.com...
 On 1/15/11 9:11 PM, Nick Sabalausky wrote:
 Heh :)  Well, I can spend no money and stick with my current 21" CRT that
 already suits my needs (that I only paid $25 for in the first place),
My last CRT was a 19" from Nokia, 1600x1200, top of the line. Got it for free under the condition that I pick it up myself from a porch, which is as far as its previous owner could move it. I was seriously warned to come with a friend to take it. It weighed 86 lbs. That all worked for me: I was a poor student and happened to have a huge desk at home. I didn't think twice about buying a different monitor when I moved across the country... I wonder how much your 21" CRT weighs.
No clue. It's my desktop system, so I haven't had a reason to pick up the monitor in years. And the desk seems to handle it just fine.
 or I
 can spend a hundred or so dollars to lose the ability to have a decent
 looking picture at more than one resolution and then say "Gee golly whiz!
 That sure is a really flat panel!!". Whoop-dee-doo. And popularity and
 trendyness are just non-issues.
I think your eyes are more important than your ability to fiddle with resolution.
Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it.
 Besides, this whole changing the resolution thing is a consequence of 
 using crappy software. What you want is set the resolution to the maximum 
 and do the rest in software. And guess what - at their maximum, CRT 
 monitors suck compared to flat panels.
Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.)
 Heck this is unbelievable... I spend time on the relative merits of flat 
 panels vs. CRTs. I'm outta here.
You're really taking this hard, aren't you?
Jan 16 2011
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/16/11 2:22 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igvc0k$c3o$1 digitalmars.com...
 I think your eyes are more important than your ability to fiddle with
 resolution.
Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it.
Recent research on the dangers of CRTs to the eyes is hard to find, for the same reason recent research on the dangers of steam locomotives is. Still, look at what Google thinks when you type "CRT monitor e".
 Besides, this whole changing the resolution thing is a consequence of
 using crappy software. What you want is set the resolution to the maximum
 and do the rest in software. And guess what - at their maximum, CRT
 monitors suck compared to flat panels.
Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.)
I'm happy with the way Ubuntu and OSX handle it.
 Heck this is unbelievable... I spend time on the relative merits of flat
 panels vs. CRTs. I'm outta here.
You're really taking this hard, aren't you?
Apparently I got drawn back into the discussion :o). I'm not as intense about this as one might think, but I do find it surprising that this discussion could possibly occur ever since about 2005. Andrei
Jan 16 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:igvlf8$v20$1 digitalmars.com...
 Apparently I got drawn back into the discussion :o). I'm not as intense 
 about this as one might think, but I do find it surprising that this 
 discussion could possibly occur ever since about 2005.
FWIW, when computer monitors regularly use the pixel density that the newer iPhones currently have, then I'd imagine that would easily compensate for scaling artifacts on non-native resolutions enough to get me to find and get one with a small enough delay (assuming I had the $ and needed a new monitor).
Jan 16 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 FWIW, when computer monitors regularly use the pixel density that the newer 
 iPhones currently have, then I'd imagine that would easily compensate for 
 scaling artifacts on non-native resolutions enough to get me to find and get 
 one with a small enough delay (assuming I had the $ and needed a new 
 monitor).
I bought the iPod with the retina display. That gizmo has done the impossible - converted me into an Apple fanboi. I absolutely love that display. The weird thing is set it next to an older iPod with the lower res display. They look the same. But I find I can read the retina display without reading glasses, and it's much more fatiguing to do that with the older one. Even though they look the same! You can really see the difference if you look at both using a magnifying glass. I can clearly see the screen door even on my super-dee-duper 1900x1200 monitor, but not at all on the iPod. I've held off on buying an iPad because I want one with a retina display, too (and the camera for video calls).
Jan 16 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:igvlf8$v20$1 digitalmars.com...
 On 1/16/11 2:22 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:igvc0k$c3o$1 digitalmars.com...
 I think your eyes are more important than your ability to fiddle with
 resolution.
Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it.
Finding recent research on dangers of CRTs on eyes is difficult to find for the same reason finding recent research on the dangers of steam locomotives. Still, look at what Google thinks when you type "CRT monitor e".
It's not as clearcut as you may think. One of the first results for "CRT monitor eye": http://www.tomshardware.com/forum/52709-3-best-eyes Keep in mind too, that the vast majority of the reports of CRTs being significantly worse either have no backing references or are so anecdotal and vague that it's impossible to distinguish them from the placebo effect. And there are other variables that rarely get mentioned, like whether they happen to be looking at a CRT with a bad refresh rate or brightness/contrast set too high. I'm not saying that CRTs are definitely as good as or better than LCDs on the eyes, I'm just saying it doesn't seem quite as clear as so many people assume it to be.
Jan 16 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 Keep in mind too, that the vast majority of the reports of CRTs being 
 significantly worse either have no backing references or are so 
 anecdotal and vague that it's impossible to distinguish from the placebo 
 effect. And there's other variables that rarely get mentioned, like whether 
 they happen to be looking at a CRT with a bad refresh rate or 
 brightness/contrast set too high.
My CRTs would gradually get fuzzier over time. It was so slow you didn't notice until you set them next to a new one.
Jan 16 2011
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I found this:
http://stackoverflow.com/questions/315911/git-for-beginners-the-definitive-practical-guide

A bunch of links to SO questions/answers.
Jan 16 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Sun, 16 Jan 2011 15:22:13 -0500, Nick Sabalausky wrote:

 Dude, you need to upgrade!!!
CRTs have a limited lifetime. It's simply a fact that you need to switch to flat panels or something better. They probably won't even manufacture CRTs anymore. It is becoming more and more difficult to purchase *unused* CRTs anywhere, at least at a reasonable price. For example, used 17" TFTs cost less than $40. I found pages like this http://shopper.cnet.com/4566-3175_9-0.html Even the prices aren't very competitive. I only remember that all refresh rates below 85 Hz caused me headache and eye fatigue. You can't use the max resolution at 60 Hz for very long.
 Why should *I* spend the money to replace something that already
works fine for me? You might get more things done by using a bigger screen. Maybe get some money to buy better equipment and stop complaining.
 Besides, this whole changing the resolution thing is a consequence of
 using crappy software. What you want is set the resolution to the
 maximum and do the rest in software. And guess what - at their maximum,
 CRT monitors suck compared to flat panels.
Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.)
My monitors have had about the same pixel density over the years. EGA (640x400) or 720x348 (Hercules) / 12", 800x600 / 14", 1024x768 / 15-17", 1280x1024 / 19", 1280x1024 / 17" TFT, 1440x900 / 19", 1920x1080 / 21.5", 2560x1600 / 30" Thus, there's no need to enlarge all graphical widgets or text. My vision is still ok. What changes is the amount of simultaneously visible area for applications. You're just wasting the expensive screen real estate by enlarging everything. You're supposed to run more simultaneous tasks on a larger screen.
 I've actually compared the rated power consumption between CRTs and
 LCDs of
 similar size and was actually surprised to find that there was little,
 if any, real difference at all on the sets I compared.
I'm pretty sure I did point out the limitations of my observation: "...on
all the sets I compared". And it's pretty obvious I wasn't undertaking a
proper extensive study. There's no need for sarcasm.
Your comparison was pointless. You can come up with all kinds of arbitrary comparisons. TFT panel power consumption probably varies between 20 and 300 watts. Do you even know how much power your CRT uses? CRTs used as computer monitors and those used as televisions have different characteristics. CRT TVs have better brightness and contrast, but lower resolution and sharpness than CRT computer monitors. Computer monitors tend to need more power, maybe even twice as much. Also, larger monitors of the same brand tend to use more power. When a CRT monitor gets older, you need more power to illuminate the phosphor, as the amount of phosphor in the small holes of the grille/mask decreases over time. This isn't the case with TFTs. The backlight brightness and the panel's color handling dictate power consumption. A 15" TFT might need as much power as a 22" TFT using the same panel technology. TFT TVs use more power as they typically provide higher brightness. Same thing if you buy those high-quality panels for professional graphics work. TFT power consumption has also dropped drastically because of AMOLED panels, LED backlights, and better dynamic contrast logic. Fluorescent backlights lose some of their brightness (maybe about 30%) before dying, unlike a CRT, which goes totally dark. LED backlights won't suffer from this (at least observably). My observation is that e.g. in computer classes (30+ computers per room) the air conditioning started to work much better after the upgrade to flat panels. Another upgrade turned the computers into mini-ITX thin clients. Now the room doesn't need air conditioning at all.
Jan 16 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:ih0b1t$g2g$3 digitalmars.com...
 For example used 17" TFTs cost less than $40.
Continuing to use my 21" CRT costs me nothing.
 Even the prices aren't very competitive. I only remember that all refresh
 rates below 85 Hz caused me headache and eye fatigue. You can't use the
 max resolution   60 Hz for very long.
I run mine no lower than 85 Hz. It's about 100Hz at the moment. And I never need to run it at the max rez for long. It's just nice to be able to bump it up now and then when I want to. Then it goes back down. And yet people feel the need to bitch about me liking that ability.
 Why should *I* spend the money to replace something that already
works fine for me? You might get more things done by using a bigger screen. Maybe get some money to buy better equipment and stop complaining.
You've got to be kidding me...*other* people start giving *me* crap about what *I* choose to use, and you try to tell me *I'm* the one that needs to stop complaining? I normally try very much to avoid direct personal comments and only attack the arguments not the arguer, but seriously, what the hell is wrong with your head that you could even think of such an enormously idiotic thing to say? Meh, I'm not going to bother with the rest...
Jan 16 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday 16 January 2011 23:17:22 Nick Sabalausky wrote:
 "retard" <re tard.com.invalid> wrote in message
 news:ih0b1t$g2g$3 digitalmars.com...
 
 For example used 17" TFTs cost less than $40.
Continuing to use my 21" CRT costs me nothing.
 Even the prices aren't very competitive. I only remember that all refresh
 rates below 85 Hz caused me headache and eye fatigue. You can't use the
 max resolution   60 Hz for very long.
I run mine no lower than 85 Hz. It's about 100Hz at the moment.
I've heard that the eye fatigue at 60 Hz is because it matches the mains frequency driving the light bulbs in the room, so the flickering of the bulbs and the screen coincide. Keeping it above 60 Hz avoids the problem. 100 Hz is obviously well above that.
 And I never need to run it at the max rez for long. It's just nice to be
 able to bump it up now and then when I want to. Then it goes back down. And
 yet people feel the need to bitch about me liking that ability.
You can use whatever you want for all I care. It's your computer, your money, and your time. I just don't understand what the point of messing with your resolution is. I've always just set it at the highest possible level that I can. I've currently got 1920 x 1200 on a 24" monitor, but it wouldn't hurt my feelings any to get a higher resolution. I probably won't, simply because I'm more interested in getting a second monitor than a higher resolution, and I don't want to fork out for two monitors to get a dual monitor setup (since I want both monitors to be the same size) when I already have a perfectly good monitor, but I'd still like a higher resolution. So, the fact that you have and want a CRT and actually want the ability to adjust the resolution baffles me, but I see no reason to try and correct you or complain about it. - Jonathan M Davis
Jan 16 2011
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 15 January 2011 13:13:41 Andrei Alexandrescu wrote:
 On 1/15/11 2:23 AM, Nick Sabalausky wrote:
 I still use CRTs (one big reason being that I hate the idea of only being
 able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
But don't you just _hate_ the fact that lightbulbs don't smell? How can you stand that? ;) Yes. That does take the cake. And I want it back, since cake sounds good right now. LOL. This thread has seriously been derailed. I wonder if I should start a new one on the source control issue. I'd _love_ to be able to use git with Phobos and druntime rather than svn, and much as I've never used Mercurial and have no clue how it compares to git, it would have to be an improvement over svn. Unfortunately, that topic seems to have not really ultimately gone anywhere in this thread. - Jonathan M Davis
Jan 15 2011
prev sibling parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Nick Sabalausky wrote:
 "retard" <re tard.com.invalid> wrote in message=20
 Hard drives: these always fail, sooner or later. There's nothing you c=
an
 do except RAID and backups
=20 And SMART monitors: =20 I've had a total of two HDD's fail, and in both cases I really lucked o=
ut.=20
 The first one was in my Mac, but it was after I was already getting=20
 completely fed up with OSX and Apple, so I didn't really care much - I =
was=20
 mostly back on Windows again by that point. The second failure just hap=
pened=20
 to be the least important of the three HDDs in my system. I was still p=
retty=20
 upset about it though, so it was a big wakeup call: I *will not* have a=
=20
 primary system anymore that doesn't have a SMART monitoring program, wi=
th=20
 temperature readouts, always running. And yes, it can't always predict =
a=20
 failure, but sometimes it can so IMO there's no good reason not to have=
it.=20
 That's actually one of the things I don't like about Linux, nothing lik=
e=20
 that seems to exist for Linux. Sure, there's a cmd line program you can=
=20
 poll, but that doesn't remotely cut it.
=20
Simple curiosity: what do you use for SMART monitoring on Windows? I use smard (same as Linux) but where I am reasonably confident that on Linux it will email me if it detects an error condition, I am not as sure of being notified on Windows (where email is not an option because it is at work and Lotus will not accept email from sources other than those explicitly allowed by the IT admins). Jerome --=20 mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
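For reference, the email alerting described here is driven by smartd's configuration file. A minimal sketch of an smartd.conf entry (the device path and address are placeholders, and the attribute set you actually want to watch will vary):

```conf
# /etc/smartd.conf (sketch; /dev/sda and admin@example.com are placeholders)
# -a : monitor all SMART attributes and log changes
# -s S/../../7/02 : run a short self-test every Sunday at 2am
# -m : mail this address when a problem is detected
# -M test : send one test mail at smartd startup to verify delivery works
/dev/sda -a -s S/../../7/02 -m admin@example.com -M test
```

The `-M test` directive is a handy way to confirm up front that notification actually reaches you, which addresses exactly the "will I really be notified?" worry above.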
Jan 16 2011
parent reply "Nick Sabalausky" <a a.a> writes:
""Jérôme M. Berger"" <jeberger free.fr> wrote in message 
news:iguask$1dur$1 digitalmars.com...
Simple curiosity: what do you use for SMART monitoring on Windows?
I use smartd (same as Linux) but where I am reasonably confident that
on Linux it will email me if it detects an error condition, I am not
as sure of being notified on Windows (where email is not an option
because it is at work and Lotus will not accept email from sources
other than those explicitly allowed by the IT admins).
Hard Disk Sentinel. I'm not married to it or anything, but it seems to be pretty good.
Jan 16 2011
parent "Jérôme M. Berger" <jeberger free.fr> writes:
Nick Sabalausky wrote:
 ""Jérôme M. Berger"" <jeberger free.fr> wrote in message 
 news:iguask$1dur$1 digitalmars.com...
 Simple curiosity: what do you use for SMART monitoring on Windows?
 I use smartd (same as Linux) but where I am reasonably confident that
 on Linux it will email me if it detects an error condition, I am not
 as sure of being notified on Windows (where email is not an option
 because it is at work and Lotus will not accept email from sources
 other than those explicitly allowed by the IT admins).
 Hard Disk Sentinel. I'm not married to it or anything, but it seems to be pretty good.
Thanks, I'll have a look. Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
news:mailman.571.1294806486.4748.digitalmars-d puremagic.com...
 Notice the smiley face -> :D

 Yeah I didn't check the price, it's only 30$. But there's no telling
 if that would work either. Also, dirt cheap video cards are almost
 certainly going to cause problems. Even if the drivers worked
 perfectly, a year down the road things will start breaking down. Cheap
 hardware is cheap for a reason.
Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine.
Jan 12 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:igkv8v$2gq$1 digitalmars.com...
 "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
 news:mailman.571.1294806486.4748.digitalmars-d puremagic.com...
 Notice the smiley face -> :D

 Yeah I didn't check the price, it's only 30$. But there's no telling
 if that would work either. Also, dirt cheap video cards are almost
 certainly going to cause problems. Even if the drivers worked
 perfectly, a year down the road things will start breaking down. Cheap
 hardware is cheap for a reason.
Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine.
They're cheap because they have lower clock speeds, fewer features, and less memory.
Jan 12 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/12/11, Nick Sabalausky <a a.a> wrote:
 Ridiculous. All of the video cards I'm using are ultra-cheap ones that are
 about 10 years old and they all work fine.
I'm saying that if you buy a cheap video card *today* you might not get what you expect. And I'm not talking out of my ass, I've had plenty of experience with faulty hardware and device drivers. The 'quality' depends more on who makes the product than what price tag it has, but you have to look these things up and not buy things on first sight because they're cheap.
Jan 12 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Wed, 12 Jan 2011 14:22:59 -0500, Nick Sabalausky wrote:

 "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message
 news:mailman.571.1294806486.4748.digitalmars-d puremagic.com...
 Notice the smiley face -> :D

 Yeah I didn't check the price, it's only 30$. But there's no telling if
 that would work either. Also, dirt cheap video cards are almost
 certainly going to cause problems. Even if the drivers worked
 perfectly, a year down the road things will start breaking down. Cheap
 hardware is cheap for a reason.
Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine.
There's no reason why they would break. A few months ago I was reconfiguring an old server at work which still used two 16-bit 10-megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). It's a modern kernel which is still maintained and had up-to-date drivers for the 20-year-old device! Those devices have no moving parts and are stored inside EMP & UPS protected strong server cases. How the heck could they break? Same thing, I can't imagine how a video card could break. The old ones didn't even have massive cooling solutions; the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years.
Jan 12 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday 12 January 2011 13:11:13 retard wrote:
 Wed, 12 Jan 2011 14:22:59 -0500, Nick Sabalausky wrote:
 "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message
 news:mailman.571.1294806486.4748.digitalmars-d puremagic.com...
 
 Notice the smiley face -> :D
 
 Yeah I didn't check the price, it's only 30$. But there's no telling if
 that would work either. Also, dirt cheap video cards are almost
 certainly going to cause problems. Even if the drivers worked
 perfectly, a year down the road things will start breaking down. Cheap
 hardware is cheap for a reason.
 Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine.
There's no reason why they would break. A few months ago I was reconfiguring an old server at work which still used two 16-bit 10-megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). It's a modern kernel which is still maintained and had up-to-date drivers for the 20-year-old device! Those devices have no moving parts and are stored inside EMP & UPS protected strong server cases. How the heck could they break? Same thing, I can't imagine how a video card could break. The old ones didn't even have massive cooling solutions; the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years.
It depends on a number of factors, including the quality of the card and the conditions that it's being used in. I've had video cards die before. I _think_ that it was due to overheating, but I really don't know. It doesn't really matter. The older the part, the more likely it is to break. The cheaper the part, the more likely it is to break. Sure, the lack of moving parts makes it less likely for a video card to die, but it definitely happens. Computer parts don't last forever, and the lower their quality, the less likely it is that they'll last. By no means does that mean that a cheap video card isn't necessarily going to last for years and function just fine, but it is a risk that a cheap card will be too cheap to last. - Jonathan M Davis
Jan 12 2011
parent retard <re tard.com.invalid> writes:
Wed, 12 Jan 2011 13:22:28 -0800, Jonathan M Davis wrote:

 On Wednesday 12 January 2011 13:11:13 retard wrote:
 Same thing, can't imagine how a video card could break. The old ones
 didn't even have massive cooling solutions, the chips didn't even need
 a heatsink. The only problem is driver support, but on Linux it mainly
 gets better over the years.
It depends on a number of factors, including the quality of the card and the conditions that it's being used in.
Of course.
 I've had video cards die before.
 I _think_ that it was due to overheating, but I really don't know. It
 doesn't really matter.
Modern GPU and CPU parts are of course getting hotter and hotter. They're getting so hot it's a miracle that components such as the capacitors near the cores can handle it. You need better cooling, which means even more breaking parts.
 The older the part, the more likely it is to break.
Not true. http://en.wikipedia.org/wiki/Bathtub_curve
 The cheaper the part, the more likely it is to break.
That might be true if the part is a power supply or a monitor. However, the latest and greatest video cards and CPUs are sold at an extremely high price mainly for hardcore gamers (and 3D modelers -- Quadro & FireGL). This is sometimes purely an intellectual property issue, nothing to do with the physical parts. For example, I've earned several hundred euros by installing soft-mods, that is, upgraded firmware / drivers. Ever heard of the Radeon 9500 -> 9700, 9800SE -> 9800, and lately 6950 -> 6970 mods? I've also modded one PC NVIDIA card to work on Macs (sold at a higher price) and done one GeForce -> Quadro mod. You don't touch the parts at all, just flash the ROM. It would be a miracle if that improved the physical quality of the parts. It does raise the price, though. Another observation: the target audience of the low-end NVIDIA cards is usually HTPC and office users. These computers have small cases and require low-profile cards. The cards actually have *better* multimedia features (PureVideo) than the high-end cards for gamers. These cards are built by the same companies as the larger versions (Asus, MSI, Gigabyte, and so on). Could it just be that by giving the buyer fewer physical parts and less intellectual property in the form of GPU firmware, they can sell at a lower price? There are also these cards with the letters "OC" in their name. The manufacturer has deliberately overclocked the cards beyond their specs. That actually hurts reliability, but the price is even higher.
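The bathtub curve linked a few lines up is easy to sketch numerically: the hazard (instantaneous failure) rate is the sum of a decreasing infant-mortality term, a constant random-failure floor, and a rising wear-out term. The function and all its constants below are illustrative only, not fitted to any real hardware:

```python
def bathtub_hazard(t, infant=0.5, floor=0.02, wear=1e-4, k=3.0):
    """Illustrative bathtub-shaped hazard rate (failures per unit time).

    Three terms: a decreasing infant-mortality term, a constant
    random-failure floor, and a power-law wear-out term.
    All constants are made up for illustration.
    """
    return infant / (1.0 + t) + floor + wear * t ** (k - 1.0)

# Early life: high rate from manufacturing defects escaping QA.
# Useful life: the curve bottoms out near the constant floor.
# Wear-out: the rate climbs again as the part ages.
early = bathtub_hazard(0)     # high
useful = bathtub_hazard(10)   # low
wearout = bathtub_hazard(100) # high again
```

This is also why "the older the part, the more likely it is to break" is only half true: a part that has survived its infant-mortality phase sits in the flat bottom of the curve for a long time before wear-out dominates.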
Jan 12 2011
prev sibling next sibling parent Jeff Nowakowski <jeff dilacero.org> writes:
On 01/12/2011 04:11 PM, retard wrote:
 Same thing, can't imagine how a video card could break.
I recently had a cheap video card break. It at least had the decency to break within the warranty period, but I was too lazy to return it :P I decided that the integrated graphics, while slow, were "good enough" for what I was using the machine for.
Jan 12 2011
prev sibling next sibling parent reply Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
Wow. The thread that went "Moving to D"->"Problems with
DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver
Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a
personally observed record of OT threads, I think.

Anyways, I've refrained from throwing fuel on the thread as long as I
can, I'll bite:

 It depends on a number of factors, including the quality of the card and the
 conditions that it's being used in. I've had video cards die before. I _think_
 that it was due to overheating, but I really don't know. It doesn't really
 matter. The older the part, the more likely it is to break. The cheaper the
 part, the more likely it is to break. Sure, the lack of moving parts makes it
 less likely for a video card to die, but it definitely happens. Computer parts
 don't last forever, and the lower their quality, the less likely it is that
 they'll last. By no means does that mean that a cheap video card isn't
 necessarily going to last for years and function just fine, but it is a risk
that
 a cheap card will be too cheap to last.
"Cheap" in the sense of "less money" isn't the problem. Actually, HW that costs more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what level of HW is sold at which clock levels, i.e. manufacturing control procedures. So an expensive low-end card for a hundred bucks might easily outlast a cheap high-end alternative at 4 times the money. Buy quality, not expensive. There is a difference.
Jan 12 2011
parent reply retard <re tard.com.invalid> writes:
Wed, 12 Jan 2011 22:46:46 +0100, Ulrik Mikaelsson wrote:

 Wow. The thread that went "Moving to D"->"Problems with
 DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver
 Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a
 personally observed record of OT threads, I think.
 
 Anyways, I've refrained from throwing fuel on the thread as long as I
 can, I'll bite:
 
 It depends on a number of factors, including the quality of the card
 and the conditions that it's being used in. I've had video cards die
 before. I _think_ that it was due to overheating, but I really don't
 know. It doesn't really matter. The older the part, the more likely it
 is to break. The cheaper the part, the more likely it is to break.
 Sure, the lack of moving parts makes it less likely for a video card to
 die, but it definitely happens. Computer parts don't last forever, and
 the lower their quality, the less likely it is that they'll last. By no
 means does that mean that a cheap video card isn't necessarily going to
 last for years and function just fine, but it is a risk that a cheap
 card will be too cheap to last.
"Cheap" in the sense of "less money" isn't the problem. Actually, HW that cost more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what level of HW are sold at which clock-levels, I.E. manufacturing control procedures. So an expensive low-end for a hundred bucks might easily outlast a cheap high-end alternative for 4 times the money. Buy quality, not expensive. There is a difference.
Nicely written, I fully agree with you.
Jan 12 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/12/11 2:30 PM, retard wrote:
 Wed, 12 Jan 2011 22:46:46 +0100, Ulrik Mikaelsson wrote:

 Wow. The thread that went "Moving to D"->"Problems with
 DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver
 Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a
 personally observed record of OT threads, I think.

 Anyways, I've refrained from throwing fuel on the thread as long as I
 can, I'll bite:

 It depends on a number of factors, including the quality of the card
 and the conditions that it's being used in. I've had video cards die
 before. I _think_ that it was due to overheating, but I really don't
 know. It doesn't really matter. The older the part, the more likely it
 is to break. The cheaper the part, the more likely it is to break.
 Sure, the lack of moving parts makes it less likely for a video card to
 die, but it definitely happens. Computer parts don't last forever, and
 the lower their quality, the less likely it is that they'll last. By no
 means does that mean that a cheap video card isn't necessarily going to
 last for years and function just fine, but it is a risk that a cheap
 card will be too cheap to last.
"Cheap" in the sense of "less money" isn't the problem. Actually, HW that cost more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what level of HW are sold at which clock-levels, I.E. manufacturing control procedures. So an expensive low-end for a hundred bucks might easily outlast a cheap high-end alternative for 4 times the money. Buy quality, not expensive. There is a difference.
Nicely written, I fully agree with you.
Same here. It's not well understood that heating/cooling cycles with the corresponding expansion and contraction cycles are the main reason for which electronics fail. At an extreme, the green-minded person who turns all CFLs and all computers off at all opportunities ends up producing more expense and more waste than the lazier person who leaves stuff on for longer periods of time. Andrei
Jan 12 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 There's no reason why they would break. Few months ago I was 
 reconfiguring an old server at work which still used two 16-bit 10 
 megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). 
 It's a modern kernel which is still maintained and had up-to-date drivers 
 for the 20 year old device! Those devices have no moving parts and are 
 stored inside EMP & UPS protected strong server cases. How the heck could 
 they break?
 
 Same thing, can't imagine how a video card could break. The old ones 
 didn't even have massive cooling solutions, the chips didn't even need a 
 heatsink. The only problem is driver support, but on Linux it mainly gets 
 better over the years.
I paid my way through college hand-making electronics boards for professors and engineers. All semiconductors have a lifetime that is measured by the area under the curve of their temperature over time. The doping in the semiconductor gradually diffuses through the semiconductor, the rate of diffusion increases as the temperature rises. Once the differently doped parts "collide" the semiconductor fails.
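The diffusion described here is thermally activated, so its rate is conventionally modeled with the Arrhenius equation; reliability engineers use it to estimate how much faster a part ages at a higher temperature. A rough sketch (the 0.7 eV activation energy is a common textbook assumption, not a measured value for any particular device):

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c, t_hot_c, ea_ev=0.7):
    """Arrhenius acceleration factor: how many times faster a thermally
    activated failure mechanism (like dopant diffusion) runs at t_hot_c
    than at t_use_c. ea_ev is an assumed activation energy; real values
    vary by failure mechanism."""
    t_use_k = t_use_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1.0 / t_use_k - 1.0 / t_hot_k))

# Under these assumptions, a die held at 80 C instead of 40 C ages
# roughly an order of magnitude faster, which is why the "area under
# the temperature curve" dominates semiconductor lifetime.
```

The exponential dependence is the key point: a modest temperature increase multiplies the aging rate, it doesn't just add to it.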
Jan 12 2011
parent Eric Poggel <dnewsgroup2 yage3d.net> writes:
On 1/12/2011 6:41 PM, Walter Bright wrote:
 All semiconductors have a lifetime that is measured by the area under
 the curve of their temperature over time.
Oddly enough, milk has the same behavior.
Jan 28 2011
prev sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 12.01.2011 04:02, Jean Crystof wrote:
 Walter Bright Wrote:

 My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into
 it. It's hardly weird or wacky or old (it was new at the time I bought it to
 install Ubuntu).
ASUS M2A-VM has 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported."
I guess a recent version of the free drivers (as delivered with recent Ubuntu releases) is still much better than the one in Walter's >2-year-old Ubuntu. Sure, game performance may not be great, but I guess normal work (even at 1920x1200) and watching YouTube videos works.
 Fast forward to this day:
 http://www.phoronix.com/scan.php?page=article&item=amd_driver_q111&num=2

 Benchmark page says: the only available driver for your graphics gives only
about 10-20% of the real performance. Why? ATI sucks on Linux. Don't buy ATI.
Buy Nvidia instead:
No it doesn't. The X1250 uses the same driver as the X1950, which is much more mature and also faster than the free driver for the Radeon HD *** cards (for which a proprietary Catalyst driver is still provided).
 http://geizhals.at/a466974.html

This is the 3rd-latest Nvidia GPU generation. How long does support last? Ubuntu 10.10
still supports all GeForce 2+ cards, which are 10 years old. I foretell Ubuntu 19.04 will be the
last one supporting this. Use Nvidia and your problems are gone.
I agree that a recent nvidia card may improve things even further. Cheers, - Daniel
Jan 12 2011
parent retard <re tard.com.invalid> writes:
Wed, 12 Jan 2011 19:11:22 +0100, Daniel Gibson wrote:

 Am 12.01.2011 04:02, schrieb Jean Crystof:
 Walter Bright Wrote:

 My mobo is an ASUS M2A-VM. No graphics cards, or any other cards
 plugged into it. It's hardly weird or wacky or old (it was new at the
 time I bought it to install Ubuntu).
ASUS M2A-VM has 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported."
I guess a recent version of the free drivers (as delivered with recent Ubuntu releases) is still much better than the one in Walter's >2-year-old Ubuntu.
Most likely. After all, they're fixing more bugs than creating new ones. :-) My other guess is, while the open source drivers are far from perfect for hardcore gaming, the basic functionality like setting up a video mode is getting better. Remember the days when you needed to type in all the internal and external clock frequencies and packed pixel bit counts in xorg.conf?!
 Sure, game performance may not be great, but I guess normal working
 (even in 1920x1200) and watching youtube videos works.
Embedded videos on web pages used to require huge amounts of CPU power when you were upscaling them in fullscreen mode. The reason is that Flash only recently started supporting hardware-accelerated video, and only on ***32-bit*** systems equipped with an ***NVIDIA*** card. The same VDPAU libraries are used by the native video players. I tried to accelerate video playback with my Radeon HD 5770, but it failed badly. Believe it or not, my 3 GHz 4-core Core i7 system with 24 GB of RAM and the fast Radeon HD 5770 was too slow to play 1080p (1920x1080) videos using the open source drivers. Without hardware acceleration you need a modern high-end dual-core system or faster to play the video, assuming the drivers aren't broken. If you only want to watch YouTube videos in windowed mode, you still need a 2+ GHz single-core. But.. YouTube has switched to HTML5 video recently. This should take the requirements down a notch. Still, I wouldn't trust integrated graphics that much. They've always been crap.
Jan 12 2011
prev sibling next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 06.01.2011 20:46, schrieb Walter Bright:
 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review. Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions. Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin.
It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. Cheers, - Daniel
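For what it's worth, the highlighted changeset views these web frontends render are essentially colorized unified diffs; a toy sketch of producing one (the file name and contents are invented for illustration):

```python
import difflib

# Hypothetical before/after contents of one file in a revision:
old = ["int main() {\n", "    return 0;\n", "}\n"]
new = ["int main() {\n", "    puts(\"hello\");\n", "    return 0;\n", "}\n"]

# The unified diff a changeset web view would colorize:
diff = list(difflib.unified_diff(old, new, fromfile="a/main.c", tofile="b/main.c"))

# Lines starting with '+' (excluding the '+++' header) are the
# additions such a view highlights:
added = [line for line in diff
         if line.startswith("+") and not line.startswith("+++")]
```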
Jan 06 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Daniel Gibson" <metalcaedes gmail.com> wrote in message 
news:ig57ar$1gn9$1 digitalmars.com...
 Am 06.01.2011 20:46, schrieb Walter Bright:
 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review. Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions. Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin.
It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life.
DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
Jan 06 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 06.01.2011 23:26, schrieb Nick Sabalausky:
 "Daniel Gibson"<metalcaedes gmail.com>  wrote in message
 news:ig57ar$1gn9$1 digitalmars.com...
 Am 06.01.2011 20:46, schrieb Walter Bright:
 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review. Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions. Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin.
It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life.
DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
http://www.dsource.org/projects/ddmd/changeset?new=rt%40185%3A13cf8da225ce&old=rt%40183%3A190ba98276b3 "Trac detected an internal error:" looks like dsource uses an old/broken version of the mercurial plugin. But normally it *should* work, I think.
Jan 06 2011
parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Daniel Gibson wrote:

 Am 06.01.2011 23:26, schrieb Nick Sabalausky:
 "Daniel Gibson"<metalcaedes gmail.com>  wrote in message
 news:ig57ar$1gn9$1 digitalmars.com...
 Am 06.01.2011 20:46, schrieb Walter Bright:
 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review. Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions. Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin.
It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life.
DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
http://www.dsource.org/projects/ddmd/changeset?new=rt%40185%3A13cf8da225ce&old=rt%40183%3A190ba98276b3
 "Trac detected an internal error:"
 looks like dsource uses an old/broken version of the mercurial plugin.
 But normally it *should* work, I think.
This works: http://www.dsource.org/projects/ddmd/changeset/183:190ba98276b3
Jan 06 2011
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/6/11 4:26 PM, Nick Sabalausky wrote:
 DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
The ready availability of Mercurial on dsource.org plus Don's inclination to use Mercurial just tipped the scale for me. We should do all we can to make Don's and other developers' life easier, and being able to work on multiple fixes at a time is huge. We should create a new Mercurial repository. I suggest we call it digitalmars because the current "phobos" svn repo contains a lot of stuff that's not phobos-related. Andrei
Jan 06 2011
next sibling parent reply Brad Roberts <braddr slice-2.puremagic.com> writes:
On Thu, 6 Jan 2011, Andrei Alexandrescu wrote:

 On 1/6/11 4:26 PM, Nick Sabalausky wrote:
 DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
The ready availability of Mercurial on dsource.org plus Don's inclination to use Mercurial just tipped the scale for me. We should do all we can to make Don's and other developers' life easier, and being able to work on multiple fixes at a time is huge. We should create a new Mercurial repository. I suggest we call it digitalmars because the current "phobos" svn repo contains a lot of stuff that's not phobos-related. Andrei
Personally, I'd prefer git over mercurial, which dsource also supports. But, really, I'd prefer github over dsource (sorry, BradA) for stability and a generally much more usable site. My general problem with the switch to a different SCM of any sort: 1) the history of the current source is a mess. a) lack of tags for releases b) logical merges have all been done as individual commits 2) walter's workflow, meaning that he won't use the scm merge facilities. He manually merges everything. None of this is really a problem, it just becomes a lot more visible when using a system that encourages keeping a very clean history and the use of branches and merging. My 2 cents, Brad
Jan 06 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I've only ever used hg (mercurial), but only for some private
repositories. I'll say one thing: it's pretty damn fast considering it
requires Python to work. Also, Joel's tutorial that introduced me to
hg was short and to the point: http://hginit.com/
Jan 06 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
news:mailman.459.1294356168.4748.digitalmars-d puremagic.com...
 I've only ever used hg (mercurial), but only for some private
 repositories. I'll say one thing: it's pretty damn fast considering it
 requires Python to work. Also, Joel's tutorial that introduced me to
 hg was short and to the point: http://hginit.com/
I have to comment on this part: "The main way you notice this is that in Subversion, if you go into a subdirectory and commit your changes, it only commits changes in that subdirectory and all directories below it, which potentially means you've forgotten to check something in that lives in some other subdirectory which also changed. Whereas, in Mercurial, all commands always apply to the entire tree. If your code is in c:\code, when you issue the hg commit command, you can be in c:\code or in any subdirectory and it has the same effect." Funny thing about that: After accidentally committing a subdirectory instead of the full project one too many times, I submitted a TortoiseSVN feature request for an option to always commit the full working directory, or at least an option to warn when you're not committing the full working directory. They absolutely lynched me for having such a suggestion.
Jan 08 2011
parent Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
 Funny thing about that: After accidentally committing a subdirectory instead
 of the full project one too many times, I submitted a TortoiseSVN feature
 request for an option to always commit the full working directory, or at
 least an option to warn when you're not committing the full working
 directory. They absolutely lynched me for having such a suggestion.
Of course. You're in conflict with the only (barely functional) branching support SVN knows: copying a directory. I know some people who consider it a feature to always check out entire SVN repos, including all branches and all tags. Of course, they are the same people who set aside half-days to do the checkout, and consider it a day's work to actually merge something back.
Jan 08 2011
prev sibling next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 01:03:42 +0200, Brad Roberts  
<braddr slice-2.puremagic.com> wrote:

   2) walter's workflow, meaning that he won't use the scm merge
      facilities.  He manually merges everything.
Not sure about Hg, but in Git you can solve this by simply manually specifying the two parent commits. Git doesn't care how you merged the two branches. In fact, you can even do this locally by using grafts. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 06 2011
prev sibling parent Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 15:03 -0800, Brad Roberts wrote:
[ . . . ]
   1) the history of the current source is a mess.
      a) lack of tags for releases
      b) logical merges have all been done as individual commits
Any repository coming to DVCS from CVS or Subversion will have much worse than this :-((( In the end you have to bite the bullet and say "let's do it, and repair stuff later if we have to".
   2) walter's workflow, meaning that he won't use the scm merge
      facilities.  He manually merges everything.
At a guess I would say that this is more an issue of CVS and Subversion having truly outdated ideas about branching and merging. Indeed, merging branches in Subversion still seems to be so difficult that it makes a shift to DVCS the only way forward.
 None of this is really a problem, it just becomes a lot more visible when
 using a system that encourages keeping a very clean history and the use of
 branches and merging.
And no rebasing! At the risk of over-egging the pudding: No organization or project I have knowledge of that made the shift from CVS or Subversion to DVCS (Mercurial, Bazaar, or Git) has ever regretted it.
--
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 07 2011
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 The ready availability of Mercurial on dsource.org plus Don's 
 inclination to use Mercurial just tipped the scale for me. We should do 
 all we can to make Don's and other developers' life easier, and being 
 able to work on multiple fixes at a time is huge.
Probably both Mercurial and Git are a small improvement over the current situation. It's also a way to improve D's image, making it look a little more open source. I hope a fat "Fork me!" button will be visible on a web page :-) Bye, bearophile
Jan 06 2011
prev sibling parent reply David Nadlinger <see klickverbot.at> writes:
On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org …
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository, and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I would also like to suggest Git over Mercurial, though this is mostly personal preference – it is used more widely, it has GitHub and Gitorious (I'm having a hard time finding Bitbucket comparable personally), it's proven to work well in settings where the main tree is managed by a single person (->Linux), it tries not to artificially restrict you as much as possible (something I imagine Walter might like), … – but again, it's probably a matter of taste, I don't want to start a flamewar here. The most important thing to me is, however, that I'd really like to see a general shift in the way D development is done towards more contributor-friendliness. I can only bow to Walter as a very capable and experienced compiler writer, but as was discussed several times here on the list, in my opinion D has reached a point where it desperately needs to win new contributors to the whole ecosystem. There is a reason why other open source projects encourage you to write helpful commit messages, and yet we don't even have tags for releases (!) in the DMD repository. 
I didn't intend to offend anybody at all, but I'd really hate to see D2 failing to »take off« for reasons like this… David
Jan 06 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:ig5n74$2vu3$1 digitalmars.com...
 On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org .
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository, and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I would also like to suggest Git over Mercurial, though this is mostly personal preference - it is used more widely, it has GitHub and Gitorious (I'm having a hard time finding Bitbucket comparable personally), it's proven to work well in settings where the main tree is managed by a single person (->Linux), it tries not to artificially restrict you as much as possible (something I imagine Walter might like), . - but again, it's probably a matter of taste, I don't want to start a flamewar here.
I've never used github, but I have used bitbucket and I truly, truly hate it. Horribly implemented site and an honest pain in the ass to use.
Jan 06 2011
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Nick Sabalausky Wrote:

 I've never used github, but I have used bitbucket and I truly, truly hate 
 it. Horribly implemented site and an honest pain in the ass to use.
I've never really used bitbucket, but I don't know how it could be any worse to use than dsource. If you ignore all the features Dsource doesn't have, it feels about the same to me.
Jan 06 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jesse Phillips" <jessekphillips+D gmail.com> wrote in message 
news:ig62kh$h71$1 digitalmars.com...
 Nick Sabalausky Wrote:

 I've never used github, but I have used bitbucket and I truly, truly hate
 it. Horribly implemented site and an honest pain in the ass to use.
I've never really used bitbucket, but I don't know how it could be any worse to use than dsource. If you ignore all the features Dsource doesn't have, it feels about the same to me.
The features in DSource generally *just work* (except when the whole server is down, of course). With BitBucket, I tried to post a bug report for xfbuild one time (and I'm pretty sure there was another project too) and the damn thing just wouldn't work. And the text-entry box was literally two lines high. Kept trying and eventually I got one post through, but it was all garbled. So I kept trying more and nothing would show up, so I gave up. Came back a day later and there were a bunch of duplicate posts. Gah. And yea, that was just the bug tracker, but it certainly didn't instill any confidence in anything else about the site. And I'm not certain, but I seem to recall some idiotic pains in the ass when trying to sign up for an account, too. With DSource, as long as the server is up, everything's always worked for me...Well...except now that I think of it, I've never been able to edit the roadmap or edit the entries in the bug-tracker's "components" field for any of the projects I admin. Although, I can live without that.
Jan 07 2011
parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Oh, yeah that would be annoying. I haven't done much with the github website
but haven't had issues like that.

About the only thing that makes github a little annoying at first is that you
have to use public/private key pairs to do any pushing to a repo. But I haven't
had any issues creating/using them from Linux/Windows. You can associate
multiple public keys with your account so you don't need to take your private
key everywhere with you. They can also be deleted so you could have temporary
ones.

Nick Sabalausky Wrote:

 The features in DSource generally *just work* (except when the whole server 
 is down, of course). With BitBucket, I tried to post a bug report for 
 xfbuild one time (and I'm pretty sure there was another project too) and the 
 damn thing just wouldn't work. And the text-entry box was literally two 
 lines high. Kept trying and eventually I got one post through, but it was 
 all garbled. So I kept trying more and nothing would show up, so I gave up. 
 Came back a day later and there were a bunch of duplicate posts. Gah.
 
 And yea, that was just the bug tracker, but it certainly didn't instill any 
 confidence in anything else about the site. And I'm not certain, but I seem 
 to recall some idiotic pains in the ass when trying to sign up for an 
 account, too.
 
 With DSource, as long as the server is up, everything's always worked for 
 me...Well...except now that I think of it, I've never been able to edit the 
 roadmap or edit the entries in the bug-tracker's "components" field for any 
 of the projects I admin. Although, I can live without that.
 
 
Jan 07 2011
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 07/01/2011 00:34, David Nadlinger wrote:
 On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org …
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository, and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well.
I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource; however, I question whether it is the best way to go for medium or large-sized D projects. Other hosting sites will simply offer better/more features and/or support, stability, fewer bugs, spam protection, etc. What we have here is exactly the same issue of NIH syndrome vs. DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same. -- Bruno Medeiros - Software Engineer
Jan 28 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 28.01.2011 14:07, schrieb Bruno Medeiros:
 On 07/01/2011 00:34, David Nadlinger wrote:
 On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org …
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository, and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well.
I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource, however I question if it is the best way to go for medium or large-sized D projects. Other hosting sites will simply offer better/more features and/or support, stability, less bugs, spam-protection, etc.. What we have here is exactly the same issue of NIH syndrome vs DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same.
D has already moved to github, see D.announce :)
Jan 28 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 28/01/2011 13:13, Daniel Gibson wrote:
 Am 28.01.2011 14:07, schrieb Bruno Medeiros:
 On 07/01/2011 00:34, David Nadlinger wrote:
 On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org …
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository, and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well.
I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource, however I question if it is the best way to go for medium or large-sized D projects. Other hosting sites will simply offer better/more features and/or support, stability, less bugs, spam-protection, etc.. What we have here is exactly the same issue of NIH syndrome vs DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same.
D has already moved to github, see D.announce :)
I know, I know. :) (I am up-to-date on D.announce, just not on "D" and "D.bugs") I still wanted to make that point though. First, for retrospection, but also because it may still apply to a few other DSource projects (current or future ones). -- Bruno Medeiros - Software Engineer
Jan 28 2011
parent reply retard <re tard.com.invalid> writes:
Fri, 28 Jan 2011 15:03:24 +0000, Bruno Medeiros wrote:

 
 I know, I know. :)  (I am up-to-date on D.announce, just not on "D" and
 "D.bugs")
 I still wanted to make that point though. First, for retrospection, but
 also because it may still apply to a few other DSource projects (current
 or future ones).
You don't need to read every post here. Reading every bug report is just stupid.. but it's not my problem. It just means that the rest of us have less competition in everyday situations (getting women, work offers, and so on)
Jan 28 2011
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 28/01/2011 21:14, retard wrote:
 Fri, 28 Jan 2011 15:03:24 +0000, Bruno Medeiros wrote:

 I know, I know. :)  (I am up-to-date on D.announce, just not on "D" and
 "D.bugs")
 I still wanted to make that point though. First, for retrospection, but
 also because it may still apply to a few other DSource projects (current
 or future ones).
You don't need to read every post here. Reading every bug report is just stupid.. but it's not my problem. It just means that the rest of us have less competition in everyday situations (getting women, work offers, and so on)
I don't read every bug report, I only (try to) read the titles and see if it's something interesting, for example something that might impact the design of the language and is just not a pure implementation issue. Still, yes, I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep), but the bottleneck at the moment is writing posts, especially those that involve arguments. They are an order of magnitude more "expensive" than reading posts. -- Bruno Medeiros - Software Engineer
Feb 01 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-06 23:26, Nick Sabalausky wrote:
 "Daniel Gibson"<metalcaedes gmail.com>  wrote in message
 news:ig57ar$1gn9$1 digitalmars.com...
 Am 06.01.2011 20:46, schrieb Walter Bright:
 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review. Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions. Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about git is it sends out emails for each checkin.
It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life.
DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd
I've been using Mercurial for all my projects on dsource and some other projects not on dsource. All DWT projects uses Mercurial as well. -- /Jacob Carlborg
Jan 08 2011
prev sibling parent Eric Poggel <dnewsgroup2 yage3d.net> writes:
On 1/6/2011 3:03 PM, Daniel Gibson wrote:
 Dsource seems to support both git and mercurial, but I don't know which
 projects use them, else I'd use them as examples to see how those trac
 plugins work in real life.
I stumbled across this url the other day: http://hg.dsource.org/ Seems to list mercurial projects. I couldn't find a similar one for git.
Jan 06 2011
prev sibling next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 06/01/11 19:46, Walter Bright wrote:
 One thing I like a lot about svn is this:

 http://www.dsource.org/projects/dmd/changeset/291
That's Trac, not SVN doing it - all other version control systems do a similar thing.
 where the web view will highlight the revision's changes. Does git or
 mercurial do that? The other thing I like a lot about git is it sends
 out emails for each checkin.
This is easily doable with both mercurial and git. If you use a tool like bitbucket or github (which I *highly* recommend you do), it opens up a huge community to you; I know of several cases where projects have been discovered through them and gained contributors, etc.
 One thing I would dearly like is to be able to merge branches using meld.

 http://meld.sourceforge.net/
There are guides for doing this in both mercurial and git; you generally just run one command one time and forget about it. Any time you do git/hg merge it will then automatically use meld or any other tool you choose. -- Robert http://octarineparrot.com/
Jan 06 2011
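[The one-time setup described above looks roughly like this. A sketch only: it assumes git (and meld) are installed, and it writes to your global ~/.gitconfig; the Mercurial equivalent is shown as a ~/.hgrc fragment in the comments.]

```shell
# One-time setup: tell git to use meld for three-way conflict resolution.
git config --global merge.tool meld
git config --global mergetool.prompt false  # don't ask before opening each file

# Mercurial equivalent, in ~/.hgrc:
#   [ui]
#   merge = meld

# From now on, when a merge reports conflicts, `git mergetool` opens
# meld on each conflicted file. Verify the setting:
git config --global merge.tool
```

The last command echoes the configured tool name, confirming the setup took effect.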
parent reply "Nick Sabalausky" <a a.a> writes:
"Robert Clipsham" <robert octarineparrot.com> wrote in message 
news:ig58tk$24bn$1 digitalmars.com...
 On 06/01/11 19:46, Walter Bright wrote:
 One thing I like a lot about svn is this:

 http://www.dsource.org/projects/dmd/changeset/291
That's Trac, not SVN doing it - all other version control systems do a similar thing.
 where the web view will highlight the revision's changes. Does git or
 mercurial do that? The other thing I like a lot about gif is it sends
 out emails for each checkin.
This is easily doable with both mercurial and git. If you use a tool like bitbucket or github (which I *highly* recommend you do), it opens up a huge community to you; I know of several cases where projects have been discovered through them and gained contributors, etc.
I would STRONGLY recommend against using any site that requires a valid non-mailinator email address just to do basic things like post a bug report. I'm not sure exactly which ones are and aren't like that, but many free project hosting sites are like that and it's an absolutely inexcusable barrier. And of course everyone knows how I'd feel about any site that required JS for anything that obviously didn't need JS. ;) One other random thought: I'd really hate to use a system that didn't have short sequential changeset identifiers. I think Hg does have that, although I don't think all Hg interfaces actually use it, just some.
Jan 06 2011
next sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Nick Sabalausky wrote:
 One other random thought: I'd really hate to use a system that didn't have
 short sequential changeset identifiers. I think Hg does have that, although
 I don't think all Hg interfaces actually use it, just some.

Hg does support short identifiers (either short hashes or sequential numbers). AFAIK all commands use them (I've never used a command that did not). Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Jan 07 2011
prev sibling parent Trass3r <un known.com> writes:
 One other random thought: I'd really hate to use a system that didn't  
 have short sequential changeset identifiers. I think Hg does have that,  
 although I don't think all Hg interfaces actually use it, just some.
It's built into Mercurial.
Jan 08 2011
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thu, 06 Jan 2011 21:46:47 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 A couple months back, I did propose moving to git on the dmd internals  
 mailing list, and nobody was interested.
Walter, if you do make the move to git (or in generally switch DVCSes), please make it so that the backend is not in the same repository in the frontend. Since the backend has severe redistribution restrictions, the compiler repository can't be simply forked and published. FWIW I'm quite in favor of a switch to git (even better if you choose GitHub, as was discussed in another thread). I had to go through great lengths to set up a private git mirror of the dmd repository, as dsource kept dropping my git-svnimport connections, and it took forever. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 06 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-01-07 at 00:31 +0200, Vladimir Panteleev wrote:
[ . . . ]
 FWIW I'm quite in favor of a switch to git (even better if you choose
 GitHub, as was discussed in another thread). I had to go through great
 lengths to set up a private git mirror of the dmd repository, as dsource
 kept dropping my git-svnimport connections, and it took forever.
svnimport is for one-off transformation, i.e. for moving from Subversion to Git. Using git-svn is the way to have Git as your Subversion client -- though you have to remember that it always rebases so your repository cannot be a peer in a Git repository group, it is only a Subversion client. The same applies for Mercurial but not for Bazaar, which can treat the Subversion repository in a Bazaar branch peer group. There are ways of bridging in Git and Mercurial, but it gets painful. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 07 2011
parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 12:15:37 +0200, Russel Winder <russel russel.org.uk>  
wrote:

 On Fri, 2011-01-07 at 00:31 +0200, Vladimir Panteleev wrote:
 [ . . . ]
 FWIW I'm quite in favor of a switch to git (even better if you choose
 GitHub, as was discussed in another thread). I had to go through great
 lengths to set up a private git mirror of the dmd repository, as dsource
 kept dropping my git-svnimport connections, and it took forever.
svnimport is for one-off transformation, i.e. for moving from Subversion to Git. Using git-svn is the way to have Git as your Subversion client -- though you have to remember that it always rebases so your repository cannot be a peer in a Git repository group, it is only a Subversion client. The same applies for Mercurial but not for Bazaar, which can treat the Subversion repository in a Bazaar branch peer group. There are ways of bridging in Git and Mercurial, but it gets painful.
Sorry, I actually meant git-svn, I confused the two. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 07 2011
prev sibling next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote:

 A couple months back, I did propose moving to git on the dmd internals mailing
 list, and nobody was interested.
That surprises me. Shifting from Subversion to any of Mercurial, Bazaar or Git, is such a huge improvement in tooling. Especially for support of feature branches.
 One thing I like a lot about svn is this:

 http://www.dsource.org/projects/dmd/changeset/291

 where the web view will highlight the revision's changes. Does git or mercurial
 do that? The other thing I like a lot about git is it sends out emails for each
 checkin.
This is a feature of the renderer not the version control system. This is not Subversion at work, this is Trac at work. As far as I am aware the Subversion, Mercurial, Git and Bazaar backends for Trac all provide this facility.
 One thing I would dearly like is to be able to merge branches using meld.

 http://meld.sourceforge.net/
Why? Mercurial, Bazaar and Git all support a variety of three-way merge tools including meld, but the whole point of branching and merging is that you don't do it manually -- except in Subversion where merging branches remains a problem. With Mercurial, Bazaar and Git, if you accept a changeset from a branch you just merge it, e.g. git merge some-feature-branch job done. If you want to amend the changeset before committing to HEAD then create a feature branch, merge the incoming changeset to the feature branch, work on it till satisfied, merge to HEAD. The only time I use meld these days is to process merge conflicts, not to handle merging per se. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 07 2011
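[The branch-and-merge workflow sketched above, end to end in a throwaway repository. The branch name, file name, and committer identity are made up for illustration; it assumes git is installed.]

```shell
set -e
cd "$(mktemp -d)"                        # throwaway playground
git init -q demo && cd demo
git config user.email dev@example.com    # placeholder identity
git config user.name Dev
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

echo 'int x;' > main.d
git add main.d && git commit -qm 'initial commit'

git checkout -qb some-feature-branch     # cheap local branch
echo 'int y;' >> main.d
git commit -qam 'add y on the feature branch'

git checkout -q "$trunk"                 # back to the main line
git merge -q some-feature-branch         # the whole merge: one command
grep -c '^int' main.d                    # both changes are now on the trunk
```

No manual merging is involved at any point; a three-way tool like meld would only be invoked if both branches had touched the same lines.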
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Russel Winder wrote:
 One thing I would dearly like is to be able to merge branches using meld.

 http://meld.sourceforge.net/
Why?
Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go.
 Mercurial, Bazaar and Git all support a variety of three-way merge tools
 including meld, but the whole point of branching and merging is that you
 don't do it manually -- except in Subversion where merging branches
 remains a problem.
But I want to do it manually.
 With Mercurial, Bazaar and Git, if you accept a changeset from a branch
 you just merge it, e.g.
 
 	git merge some-feature-branch
 
 job done.  If you want to amend the changeset before committing to HEAD
 then create a feature branch, merge the incoming changeset to the
 feature branch, work on it till satisfied, merge to HEAD.
 
 The only time I use meld these days is to process merge conflicts, not
 to handle merging per se. 
I've always been highly suspicious of the auto-detection of a 3-way merge conflict.
Jan 07 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
Walter,

On Fri, 2011-01-07 at 10:54 -0800, Walter Bright wrote:
 Russel Winder wrote:
 One thing I would dearly like is to be able to merge branches using meld.
 http://meld.sourceforge.net/
 Why?
 Because meld makes it easy to review, selectively merge, and do a bit of editing
 all in one go.
Hummm . . . these days that is seen as being counter-productive to having a full and complete record of the evolution of a project. These days it is assumed that a reviewed changeset is committed as is and then further amendments made as a separate follow-up changeset. A core factor here is of attribution and publicity of who did what. By committing reviewed changesets before amending them, the originator of the changeset is noted as the author of the changeset in the history. As I understand the consequences of the above system, you are always shown as the committer of every change -- but I may just have got this wrong, I haven't actually looked at the DMD repository.
 Mercurial, Bazaar and Git all support a variety of three-way merge tools
 including meld, but the whole point of branching and merging is that you
 don't do it manually -- except in Subversion where merging branches
 remains a problem.
 But I want to do it manually.
Clearly I don't understand your workflow. When I used Subversion, its merge capabilities were effectively none -- and as I understand it, things have not got any better in reality despite all the publicity about new merge support. So handling changesets from branches and elsewhere always had to be a manual activity. Maintaining a truly correct history was effectively impossible. Now with Bazaar, Mercurial and Git, merge is so crucial to the very essence of what these systems do that I cannot conceive of manually merging except to resolve actual conflicts. Branch and merge is so trivially easy in all of Bazaar, Mercurial and Git, that it changes workflows. Reviewing changesets is still a crucially important thing, but merging them should not be part of that process.
 With Mercurial, Bazaar and Git, if you accept a changeset from a branch
 you just merge it, e.g.

 	git merge some-feature-branch

 job done.  If you want to amend the changeset before committing to HEAD
 then create a feature branch, merge the incoming changeset to the
 feature branch, work on it till satisfied, merge to HEAD.

 The only time I use meld these days is to process merge conflicts, not
 to handle merging per se.
I've always been highly suspicious of the auto-detection of a 3-way merge conflict. I have always been highly suspicious that compilers can optimize my code better than I can ;-) -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 08 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Russel Winder wrote:
 Walter,
 
 On Fri, 2011-01-07 at 10:54 -0800, Walter Bright wrote:
 Russel Winder wrote:
 One thing I would dearly like is to be able to merge branches using meld.

 http://meld.sourceforge.net/
Why?
Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go.
Hummm . . . these days that is seen as being counter-productive to having a full and complete record of the evolution of a project. These days it is assumed that a reviewed changeset is committed as is and then further amendments made as a separate follow-up changeset. A core factor here is of attribution and publicity of who did what. By committing reviewed changesets before amending them, the originator of the changeset is noted as the author of the changeset in the history. As I understand the consequences of the above system, you are always shown as the committer of every change -- but I may just have got this wrong, I haven't actually looked at the DMD repository.
I never thought of that.
 Mercurial, Bazaar and Git all support a variety of three-way merge tools
 including meld, but the whole point of branching and merging is that you
 don't do it manually -- except in Subversion where merging branches
 remains a problem.
But I want to do it manually.
Clearly I don't understand your workflow. When I used Subversion, its merge capabilities were effectively none -- and as I understand it, things have not got any better in reality despite all the publicity about new merge support. So handling changesets from branches and elsewhere always had to be a manual activity. Maintaining a truly correct history was effectively impossible. Now with Bazaar, Mercurial and Git, merge is so crucial to the very essence of what these systems do that I cannot conceive of manually merging except to resolve actual conflicts. Branch and merge is so trivially easy in all of Bazaar, Mercurial and Git, that it changes workflows. Reviewing changesets is still a crucially important thing, but merging them should not be part of that process.
I never thought of it that way before.
 With Mercurial, Bazaar and Git, if you accept a changeset from a branch
 you just merge it, e.g.

 	git merge some-feature-branch

 job done.  If you want to amend the changeset before committing to HEAD
 then create a feature branch, merge the incoming changeset to the
 feature branch, work on it till satisfied, merge to HEAD.

 The only time I use meld these days is to process merge conflicts, not
 to handle merging per se. 
I've always been highly suspicious of the auto-detection of a 3-way merge conflict.
I have always been highly suspicious that compilers can optimize my code better than I can ;-)
You should be!
Jan 08 2011
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
== Quote from Walter Bright (newshound2 digitalmars.com)'s article
 Russel Winder wrote:
 One thing I would dearly like is to be able to merge branches using meld.

 http://meld.sourceforge.net/
Why?
Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go.
I wholeheartedly agree with this.
Jan 08 2011
prev sibling next sibling parent Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote:
[ . . . ]
 do that? The other thing I like a lot about git is it sends out emails for each
 checkin.
Sorry, I forgot to answer this question in my previous reply. Mercurial, Bazaar, and Git all have various hooks for the various branch and repository events. Commit emails are trivial in all of them. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 07 2011
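[For reference, such a commit-email hook is only a few lines of shell. A sketch only: the list address is hypothetical, and production setups normally use git's stock contrib post-receive-email script (or Mercurial's bundled notify extension) rather than hand-rolled mail.]

```shell
#!/bin/sh
# Installed as .git/hooks/post-receive on the shared repository.
# git feeds it one "<old-sha> <new-sha> <refname>" line on stdin
# for every ref updated by a push.
while read old new ref; do
    git log --stat "$old..$new" |
        mail -s "commit on $ref" dmd-commits@example.com  # hypothetical list address
done
```

The hook runs server-side on every push, so contributors need no local setup at all.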
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday 07 January 2011 02:09:31 Russel Winder wrote:
 On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote:
 A couple months back, I did propose moving to git on the dmd internals
 mailing list, and nobody was interested.
That surprises me. Shifting from Subversion to any of Mercurial, Bazaar or Git, is such a huge improvement in tooling. Especially for support of feature branches.
Part of that was probably because not that many people pay attention to the dmd internals mailing list. I don't recall seeing that post, and I do pay at least some attention to that list. I would have been for it, but then again, I'm also not one of the dmd developers - not that there are many. Personally, I'd love to see dmd, druntime, and Phobos switch over to git, since that's what I typically use. It would certainly be an improvement over subversion. But I can't compare it to other systems such as Mercurial and Bazaar, because I've never used them. Really, for me personally, git works well enough that I've had no reason to check any others out. I can attest though that git is a huge improvement over subversion. Before I started using git, I almost never used source control on my own projects, because it was too much of a pain. With git, it's extremely easy to set up a new repository, it doesn't pollute the whole source tree with source control files, and it doesn't force me to have a second copy of the repository somewhere else. So, thanks to git, I now use source control all the time. - Jonathan M Davis
Jan 07 2011
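[The low setup cost mentioned above is easy to demonstrate. A sketch; the directory and file names are made up. Note the single hidden .git directory at the top level, rather than per-folder control files, and no mandatory second copy of the repository.]

```shell
set -e
cd "$(mktemp -d)"                       # stand-in for an existing project directory
echo 'module app;' > app.d
git init -q                             # the entire setup step: one hidden .git directory
git config user.email dev@example.com   # placeholder identity
git config user.name Dev
git add . && git commit -qm 'import existing sources'
git log --oneline                       # the project history starts here, one commit
```

Compare with Subversion, which needs a repository created somewhere else and a separate checkout before the first commit.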
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Thu, 06 Jan 2011 11:46:47 -0800, Walter Bright wrote:

 Russel Winder wrote:
 Pity, because using one of Mercurial, Bazaar or Git instead of
 Subversion is likely the best and fastest way of getting more quality
 contributions to review.  Although only anecdotal in every case where a
 team has switched to DVCS from CVCS -- except in the case of closed
 projects, obviously -- it has opened things up to far more people to
 provide contributions.  Subversion is probably now the single biggest
 barrier to getting input on system evolution.
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested.
I proposed the same on the Phobos list in May, but the discussion went nowhere. It seemed the general consensus was that SVN was "good enough". -Lars
Jan 07 2011
prev sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Nick Sabalausky wrote:
 "Caligo" <iteronvexor gmail.com> wrote in message 
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have 
 (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
I don't, either.
There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once, before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy. Occasionally I also use my own DMD local repository, but it doesn't work very well (gets out of sync with the trunk too easily, because SVN isn't really set up for that development model). I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain in changing the development model, I don't think it's a change we should make in the near term.
Jan 06 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/6/11 9:18 AM, Don wrote:
 Walter Bright wrote:
 Nick Sabalausky wrote:
 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help? Everyone could have
 (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
I don't, either.
There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once, before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy. Occasionally I also use my own DMD local repository, but it doesn't work very well (gets out of sync with the trunk too easily, because SVN isn't really set up for that development model). I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain in changing the development model, I don't think it's a change we should make in the near term.
What are the advantages of Mercurial over git? (git does allow multiple branches.) Andrei
Jan 06 2011
next sibling parent bioinfornatics <bioinfornatics fedoraproject.org> writes:
I have used svn, cvs a little, mercurial and git, and I prefer git; for me it is
the better way.
Very powerful for managing branches and doing merges. Cherry-pick is very
powerful too.
And yes, git allows multiple branches.
Jan 06 2011
prev sibling next sibling parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 What are the advantages of Mercurial over git? (git does allow multiple
 branches.)

Here's a comparison. Although I am partial to Mercurial, I have tried to be fair. Some of the points are in favor of Mercurial, some in favor of Git, and some are simply differences I noted (six of one, half a dozen of the other): http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=91657 An extra point I did not raise at the time: Git is deliberately engineered to be as different from CVS/SVN as possible (quoting Wikipedia: "Take CVS as an example of what /not/ to do; if in doubt, make the exact opposite decision"). IMO this makes it a poor choice when migrating from SVN. Mercurial (or Bazaar) would be much more comfortable. Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Jan 06 2011
next sibling parent David Nadlinger <see klickverbot.at> writes:
On 1/6/11 8:19 PM, "Jérôme M. Berger" wrote:
 	Here's a comparison. Although I am partial to Mercurial, I have
 tried to be fair.
Jérôme, I'm usually not the one arguing ad hominem, but are you sure that you really tried to be fair? If you want to make subjective statements about Mercurial, that you personally like it better because of this and that reason, that's fine, but please don't try to make it look like an objective comparison. A fair part of the arguments you made in the linked post are objectively wrong, which is understandable if you're mainly a Mercurial user – but please don't make it look like you had done more in-depth research regarding both to other people… For example, you dwelt on being able to »hg pull help« being an advantage over Git – where the equivalent command reads »git pull --help«. Are you serious?! By the way, at least for Mercurial 1.6, you need to pass »--help« as a »proper« argument using two dashes as well, your command does not work (anymore).
 	An extra point I did not raise at the time: Git is deliberately
 engineered to be as different from CVS/SVN as possible (quoting
 Wikipedia: "Take CVS as an example of what /not/ to do; if in doubt,
 make the exact opposite decision").
You missed the »… quote Torvalds, speaking somewhat tongue-in-cheek« part – in the talk the quote is from, Linus Torvalds was making the point that centralized SCMs just can't keep up with distributed concepts, and as you probably know, he really likes to polarize. In the same talk, he also mentioned Mercurial being very similar to Git – does that make it an unfavorable switch as well in your eyes? I hope not… David
Jan 06 2011
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 06/01/2011 19:19, "Jérôme M. Berger" wrote:
 Andrei Alexandrescu wrote:
 What are the advantages of Mercurial over git? (git does allow multiple
 branches.)
I've also been mulling over whether to try out and switch away from Subversion to a DVCS, but never went ahead cause I've also been undecided about Git vs. Mercurial. So this whole discussion here in the NG has been helpful, even though I rarely use branches, if at all. However, there is an important issue for me that has not been mentioned ever, I wonder if other people also find it relevant. It annoys me a lot in Subversion, and basically it's the aspect where if you delete, rename, or copy a folder under version control in a SVN working copy, without using the SVN commands, there is a high likelihood your working copy will break! It's so annoying, especially since sometimes no amount of svn revert, cleanup, unlock, override and update, etc. will fix it. I just had one recently where I had to delete and re-checkout the whole project because it was that broken. Other situations also seem to cause this, even when using SVN tooling (like partially updating from a commit that deletes or moves directories, or something like that). It's just so brittle. I think it may be a consequence of the design aspect of SVN where each subfolder of a working copy is a working copy as well (and each subfolder of a repository is a repository as well). Anyways, I hope Mercurial and Git are better at this, I'm definitely going to try them out with regards to this. -- Bruno Medeiros - Software Engineer
Jan 28 2011
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-28 11:29:49 -0500, Bruno Medeiros 
<brunodomedeiros+spam com.gmail> said:

 I've also been mulling over whether to try out and switch away from 
 Subversion to a DVCS, but never went ahead cause I've also been 
 undecided about Git vs. Mercurial. So this whole discussion here in the 
 NG has been helpful, even though I rarely use branches, if at all.
 
 However, there is an important issue for me that has not been mentioned 
 ever, I wonder if other people also find it relevant. It annoys me a 
 lot in Subversion, and basically it's the aspect where if you delete, 
 rename, or copy a folder under version control in a SVN working copy, 
 without using the SVN commands, there is a high likelihood your working 
 copy will break! It's so annoying, especially since sometimes no amount 
 of svn revert, cleanup, unlock, override and update, etc. will fix it. 
 I just had one recently where I had to delete and re-checkout the whole 
 project because it was that broken.
 Other situations also seem to cause this, even when using SVN tooling 
 (like partially updating from a commit that delete or moves 
 directories, or something like that) It's just so brittle.
 I think it may be a consequence of the design aspect of SVN where each 
 subfolder of a working copy is a working copy as well (and each 
 subfolder of repository is a repository as well)
 
 Anyways, I hope Mercurial and Git are better at this, I'm definitely 
 going to try them out with regards to this.
Git doesn't care how you move your files around. It tracks files by their content. If you rename a file and most of the content stays the same, git will see it as a rename. If most of the file has changed, it'll see it as a new file (with the old one deleted). There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path.

I don't know about Mercurial.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
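The 'git mv' equivalence described above can be sketched concretely. The repository and file names below are made up for illustration; this is a minimal local demo, not part of the original post:

```shell
# A sketch of the 'git mv' equivalence: mv + 'git rm' + 'git add'.
mkdir mv-demo && cd mv-demo
git init -q
git config user.email you@example.com
git config user.name you
echo "some content" > old.txt
git add old.txt
git commit -qm "add old.txt"

mv old.txt new.txt   # plain filesystem move; git is not involved yet
git rm -q old.txt    # stage the removal of the old path
git add new.txt      # stage the new path
git commit -qm "rename to new.txt"

# Rename detection happens when history is viewed, based on content:
git log --oneline --follow -- new.txt
```

Since detection is content-based, the same history shows up whether the plain commands or 'git mv' itself was used.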
Jan 28 2011
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Michel Fortin wrote:
 On 2011-01-28 11:29:49 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:

 I've also been mulling over whether to try out and switch away from
 Subversion to a DVCS, but never went ahead cause I've also been
 undecided about Git vs. Mercurial. So this whole discussion here in
 the NG has been helpful, even though I rarely use branches, if at all.

 However, there is an important issue for me that has not been
 mentioned ever, I wonder if other people also find it relevant. It
 annoys me a lot in Subversion, and basically it's the aspect where if
 you delete, rename, or copy a folder under version control in a SVN
 working copy, without using the SVN commands, there is a high
 likelihood your working copy will break! It's so annoying, especially
 since sometimes no amount of svn revert, cleanup, unlock, override and
 update, etc. will fix it. I just had one recently where I had to
 delete and re-checkout the whole project because it was that broken.
 Other situations also seem to cause this, even when using SVN tooling
 (like partially updating from a commit that delete or moves
 directories, or something like that) It's just so brittle.
 I think it may be a consequence of the design aspect of SVN where each
 subfolder of a working copy is a working copy as well (and each
 subfolder of repository is a repository as well)

 Anyways, I hope Mercurial and Git are better at this, I'm definitely
 going to try them out with regards to this.

 Git doesn't care how you move your files around. It tracks files by
 their content. If you rename a file and most of the content stays the
 same, git will see it as a rename. If most of the file has changed,
 it'll see it as a new file (with the old one deleted). There is 'git
 mv', but it's basically just a shortcut for moving the file, doing
 'git rm' on the old path and 'git add' on the new path.

 I don't know about Mercurial.

Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command which will automatically remove any missing files and add any unknown non-ignored files. Addremove can detect renamed files if they are similar enough to the old file (the similarity level is configurable) but it will not detect copies.

	Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jan 29 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 29/01/2011 10:02, "Jérôme M. Berger" wrote:
 Michel Fortin wrote:
 On 2011-01-28 11:29:49 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail>  said:

 I've also been mulling over whether to try out and switch away from
 Subversion to a DVCS, but never went ahead cause I've also been
 undecided about Git vs. Mercurial. So this whole discussion here in
 the NG has been helpful, even though I rarely use branches, if at all.

 However, there is an important issue for me that has not been
 mentioned ever, I wonder if other people also find it relevant. It
 annoys me a lot in Subversion, and basically it's the aspect where if
 you delete, rename, or copy a folder under version control in a SVN
 working copy, without using the SVN commands, there is a high
 likelihood your working copy will break! It's so annoying, especially
 since sometimes no amount of svn revert, cleanup, unlock, override and
 update, etc. will fix it. I just had one recently where I had to
 delete and re-checkout the whole project because it was that broken.
 Other situations also seem to cause this, even when using SVN tooling
 (like partially updating from a commit that delete or moves
 directories, or something like that) It's just so brittle.
 I think it may be a consequence of the design aspect of SVN where each
 subfolder of a working copy is a working copy as well (and each
 subfolder of repository is a repository as well)

 Anyways, I hope Mercurial and Git are better at this, I'm definitely
 going to try them out with regards to this.
Git doesn't care how you move your files around. It track files by their content. If you rename a file and most of the content stays the same, git will see it as a rename. If most of the file has changed, it'll see it as a new file (with the old one deleted). There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path. I don't know about Mercurial.
Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command which will automatically remove any missing files and add any unknown non-ignored files. Addremove can detect renamed files if they are similar enough to the old file (the similarity level is configurable) but it will not detect copies. Jerome
Indeed, that's what I found out now that I tried Mercurial. So that's really nice (especially the "addremove" command); it's actually motivation enough for me to switch to Mercurial or Git, as it's a major annoyance in SVN.

I've learned a few more things recently: there's a minor issue with Git and Mercurial in that they both are not able to record empty directories. A very minor annoyance (it's workaround-able), but still conceptually lame; I mean, directories are resources too! It's curious that the wiki pages for both Git and Mercurial on this issue are exactly the same, word for word most of them:
http://mercurial.selenic.com/wiki/MarkEmptyDirs
https://git.wiki.kernel.org/index.php/MarkEmptyDirs
(I guess it's because they were written by the same guy)

A more serious issue that I learned (or rather had forgotten about and now remembered) is the whole "DVCSes keep the whole repository history locally" aspect, which has important ramifications. If the repository is big, although disk space may not be much of an issue, it's a bit annoying when copying the repository locally(*), or cloning it from the internet and thus having to download large amounts of data. For example, in the DDT Eclipse IDE I keep the project dependencies (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsource.ddt-build/target/) on source control, which is 141Mb total on a single revision, and they might change every semester or so... I'm still not sure what to do about this. I may split this part of the project into a separate Mercurial repository, although I do lose some semantic information because of this: a direct association between each revision in the source code projects, and the corresponding revision in the dependencies project. Conceptually I would want this to be a single repository.

(*) Yeah, I know Mercurial and Git may use hardlinks to speed up the cloning process, even on Windows, but that solution is not suitable for me, as my workflow is usually to copy entire Eclipse workspaces when I want to "branch" on some task. Doesn't happen that often though.

-- 
Bruno Medeiros - Software Engineer
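The empty-directory limitation mentioned above is commonly worked around with a placeholder file. A minimal sketch (made-up directory names; '.gitkeep' is only a convention, not a Git feature):

```shell
# Workaround: commit a placeholder file so the directory is recorded.
git init -q keep-demo && cd keep-demo
git config user.email you@example.com
git config user.name you
mkdir -p assets/cache
touch assets/cache/.gitkeep       # any tracked file works
git add assets/cache/.gitkeep
git commit -qm "keep otherwise-empty cache dir"
cd .. && git clone -q keep-demo keep-copy   # the directory survives cloning
```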
Feb 01 2011
next sibling parent David Nadlinger <see klickverbot.at> writes:
On 2/1/11 2:44 PM, Bruno Medeiros wrote:
 […] a direct association between each
 revision in the source code projects, and the corresponding revision in
 the dependencies project. […]
With Git, you could use submodules for that task – I don't know if something similar exists for Mercurial. David
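A minimal sketch of the submodule approach, with made-up repository names; the `protocol.file.allow` override is needed for local file-path submodules in recent Git versions and postdates this thread:

```shell
# A submodule pins the dependencies repo to an exact revision per commit
# of the main project.
git init -q deps
git -C deps config user.email you@example.com
git -C deps config user.name you
echo "dependency" > deps/library.txt
git -C deps add library.txt
git -C deps commit -qm "deps v1"

git init -q mainproj && cd mainproj
git config user.email you@example.com
git config user.name you
git -c protocol.file.allow=always submodule add "file://$(cd ../deps && pwd)" deps
git commit -qm "pin dependencies at deps v1"
# Each mainproj commit now records which deps revision it was built with.
```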
Feb 01 2011
prev sibling next sibling parent foobar <foo bar.com> writes:
Bruno Medeiros Wrote:

 On 29/01/2011 10:02, "Jérôme M. Berger" wrote:
 Michel Fortin wrote:
 On 2011-01-28 11:29:49 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail>  said:

 I've also been mulling over whether to try out and switch away from
 Subversion to a DVCS, but never went ahead cause I've also been
 undecided about Git vs. Mercurial. So this whole discussion here in
 the NG has been helpful, even though I rarely use branches, if at all.

 However, there is an important issue for me that has not been
 mentioned ever, I wonder if other people also find it relevant. It
 annoys me a lot in Subversion, and basically it's the aspect where if
 you delete, rename, or copy a folder under version control in a SVN
 working copy, without using the SVN commands, there is a high
 likelihood your working copy will break! It's so annoying, especially
 since sometimes no amount of svn revert, cleanup, unlock, override and
 update, etc. will fix it. I just had one recently where I had to
 delete and re-checkout the whole project because it was that broken.
 Other situations also seem to cause this, even when using SVN tooling
 (like partially updating from a commit that delete or moves
 directories, or something like that) It's just so brittle.
 I think it may be a consequence of the design aspect of SVN where each
 subfolder of a working copy is a working copy as well (and each
 subfolder of repository is a repository as well)

 Anyways, I hope Mercurial and Git are better at this, I'm definitely
 going to try them out with regards to this.
Git doesn't care how you move your files around. It track files by their content. If you rename a file and most of the content stays the same, git will see it as a rename. If most of the file has changed, it'll see it as a new file (with the old one deleted). There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path. I don't know about Mercurial.
Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command which will automatically remove any missing files and add any unknown non-ignored files. Addremove can detect renamed files if they are similar enough to the old file (the similarity level is configurable) but it will not detect copies. Jerome
Indeed, that's want I found out now that I tried Mercurial. So that's really nice (especially the "addremove" command), it's actually motivation enough for me to switch to Mercurial or Git, as it's a major annoyance in SVN. I've learned a few more things recently: there's a minor issue with Git and Mercurial in that they both are not able to record empty directories. A very minor annoyance (it's workaround-able), but still conceptually lame, I mean, directories are resources too! It's curious that the wiki pages for both Git and Mercurial on this issue are exactly the same, word by word most of them: http://mercurial.selenic.com/wiki/MarkEmptyDirs https://git.wiki.kernel.org/index.php/MarkEmptyDirs (I guess it's because they were written by the same guy) A more serious issue that I learned (or rather forgotten about before and remembered now) is the whole DVCSes keep the whole repository history locally aspect, which has important ramifications. If the repository is big, although disk space may not be much of an issue, it's a bit annoying when copying the repository locally(*), or cloning it from the internet and thus having to download large amounts of data. For example in the DDT Eclipse IDE I keep the project dependencies (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsourc .ddt-build/target/) on source control, which is 141Mb total on a single revision, and they might change ever semester or so... I'm still not sure what to do about this. I may split this part of the project into a separate Mercurial repository, although I do lose some semantic information because of this: a direct association between each revision in the source code projects, and the corresponding revision in the dependencies project. Conceptually I would want this to be a single repository. 
(*) Yeah, I know Mercurial and Git may use hardlinks to speed up the cloning process, even on Windows, but that solution is not suitable to me, as I my workflow is usually to copy entire Eclipse workspaces when I want to "branch" on some task. Doesn't happen that often though. -- Bruno Medeiros - Software Engineer
You raised a valid concern regarding the local copy issue, and it has already been taken care of in DVCSes:
1. git stores all the actual data in "blobs" which are compressed, whereas svn stores everything in plain text (including all the history!)
2. git stores and transfers deltas and not full files, unlike svn
3. it's possible to wrap a bunch of commits into a "bundle" - a single compressed binary file. This file can then be downloaded, and then you can git fetch from it.
In general, Git (and I assume Mercurial as well) needs way less space than comparable SVN repositories and is much faster in fetching from upstream compared to svn update. Try cloning your svn repository with git-svn and compare repository sizes.
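The "bundle" workflow from point 3 can be sketched locally (repository names are made up for illustration):

```shell
# Wrap a branch's commits into a single compressed binary file, then
# clone or fetch from that file as if it were a remote.
git init -q -b master demo && cd demo
git config user.email you@example.com
git config user.name you
echo "hello" > file.txt
git add file.txt
git commit -qm "initial commit"

git bundle create ../demo.bundle master   # the single-file "repository"

# The bundle travels like any ordinary file (download, USB stick, mail);
# on the receiving side it acts as a clone/fetch source:
cd .. && git clone -q -b master demo.bundle demo-copy
```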
Feb 01 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Bruno Medeiros wrote:
 A more serious issue that I learned (or rather forgotten about before 
 and remembered now) is the whole DVCSes keep the whole repository 
 history locally aspect, which has important ramifications. If the 
 repository is big, although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
Feb 01 2011
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, February 01, 2011 15:07:58 Walter Bright wrote:
 Bruno Medeiros wrote:
 A more serious issue that I learned (or rather forgotten about before
 and remembered now) is the whole DVCSes keep the whole repository
 history locally aspect, which has important ramifications. If the
 repository is big, although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
And some things will likely _always_ make disk usage a concern. Video would be a good example. If you have much video, even with good compression, it's going to take up a lot of space. Granted, there are _lots_ of use cases which just don't take up enough disk space to matter anymore, but you can _always_ find ways to use up disk space. Entertainingly, a fellow I know had a friend who joked that he could always hold all of his data in a shoebox. Originally, it was punch cards. Then it was 5 1/4" floppy disks. Then it was 3 1/2" floppy disks. Then it was CDs. Etc. Storage devices keep getting bigger and bigger, but we keep finding ways to fill them... - Jonathan M Davis
Feb 01 2011
prev sibling next sibling parent reply Brad Roberts <braddr slice-2.puremagic.com> writes:
On Tue, 1 Feb 2011, Walter Bright wrote:

 Bruno Medeiros wrote:
 A more serious issue that I learned (or rather forgotten about before and
 remembered now) is the whole DVCSes keep the whole repository history
 locally aspect, which has important ramifications. If the repository is big,
 although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
For what it's worth, the sizes of the key git dirs on my box:

  dmd.git      == 4.4 - 5.9M (depends on if the gc has run recently to re-pack new objects)
  druntime.git == 1.4 - 3.0M
  phobos.git   == 5.1 - 6.7M

The checked out copy of each of those is considerably more than the packed full history. The size, inclusive of full history and the checked out copy, after a make clean:

  dmd      15M
  druntime  4M
  phobos   16M

I.e., essentially negligible.

Later,
Brad
Feb 01 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Brad Roberts wrote:
 I.e., essentially negligible.
Yeah, and I caught myself worrying about the disk usage from having two clones of the git repository (one for D1, the other for D2).
Feb 01 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Bleh. I tried to use Git to update some of the doc files, but getting
the thing to work will be a miracle.

git can't find the public keys unless I use msysgit. Great. How
exactly do I cd to D:\ ?

So I try git-gui. Seems to work fine, I clone the forked repo and make
a few changes. I try to commit, it says I have to update first. So I
do that. *Error: crash crash crash*. I try to close the thing, it just
keeps crashing. CTRL+ALT+DEL time..

Okay, I try another GUI package, GitExtensions. I make new
public/private keys and add it to github, I'm about to clone but then
I get this "fatal: The remote end hung up unexpectedly".

I don't know what to say..
Feb 01 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 I don't know what to say..
Git is a Linux program and will never work right on Windows. The problems you're experiencing are classic ones I find whenever I attempt to use a Linux program that has been "ported" to Windows.
Feb 01 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:
 Andrej Mitrovic wrote:
 I don't know what to say..
Git is a Linux program and will never work right on Windows. The problems you're experiencing are classic ones I find whenever I attempt to use a Linux program that has been "ported" to Windows.
Yeah, I know what you mean. "Use my app on Windows too, it works! But you have to install this Linux simulator first, though". Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Feb 01 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 Is this why you've made your own version of make and microemacs for
 Windows? I honestly can't blame you. :)
Microemacs floated around the intarnets for free back in the 80's, and I liked it because it was very small, fast, and customizable. Having an editor that fit in 50k was just the ticket for a floppy based system. Most code editors of the day were many times larger, took forever to load, etc. I wrote my own make because I needed one to sell and so couldn't use someone else's.
Feb 01 2011
prev sibling next sibling parent Brad Roberts <braddr puremagic.com> writes:
On 2/1/2011 7:55 PM, Andrej Mitrovic wrote:
 On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:
 Andrej Mitrovic wrote:
 I don't know what to say..
Git is a Linux program and will never work right on Windows. The problems you're experiencing are classic ones I find whenever I attempt to use a Linux program that has been "ported" to Windows.
Yeah, I know what you mean. "Use my app on Windows too, it works! But you have to install this Linux simulator first, though". Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Of course, it forms a nice vicious circle. Without users, there's little incentive to fix and chances are there's fewer users reporting bugs. Sounds.. familiar. :)
Feb 01 2011
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:

I've noticed you have "Version Control with Git" listed in your list
of books. Did you just buy that recently, or were you secretly
planning to switch to Git at the instant someone mentioned it? :p
Feb 01 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 I've noticed you have "Version Control with Git" listed in your list
 of books. Did you just buy that recently, or were you secretly
 planning to switch to Git at the instant someone mentioned it? :p
I listed it recently.
Feb 01 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:

 ...listed in your list...
Crap.. I just made a 2-dimensional book list by accident. My bad.
Feb 01 2011
prev sibling next sibling parent David Nadlinger <see klickverbot.at> writes:
On 2/2/11 3:17 AM, Andrej Mitrovic wrote:
 Bleh. I tried to use Git to update some of the doc files, but getting
 the thing to work will be a miracle.

 git can't find the public keys unless I use msysgit. Great. How
 exactly do I cd to D:\ ?
If you are new to Git or SSH, the folks at GitHub have put up a tutorial explaining how to generate and set up a pair of SSH keys: http://help.github.com/msysgit-key-setup/. There is also a page describing solutions to some SSH setup problems: http://help.github.com/troubleshooting-ssh/. If you already have a private/public key and want to use it with Git, either copy them to Git's .ssh/ directory or edit the .ssh/config of the SSH instance used by Git accordingly. If you need to refer to »D:\somefile« inside the MSYS shell, use »/d/somefile«. I don't quite get what you mean with »git can't find the public keys unless I use msysgit«. Obviously, you need to modify the configuration of the SSH program Git uses, but other than that, you don't need to use the MSYS shell for setting up stuff – you can just use Windows Explorer and your favorite text editor for that as well. David
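The key-generation step from the tutorials linked above looks roughly like this; the email address and output path are placeholders, not values from the original post:

```shell
# Generate an RSA key pair for use with GitHub (non-interactive).
ssh-keygen -q -t rsa -C "you@example.com" -f ./github_key -N ""
cat ./github_key.pub   # paste this public half into GitHub's SSH key settings
# Afterwards, the connection can be checked with: ssh -T git@github.com
```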
Feb 02 2011
prev sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Andrej Mitrovic wrote:
 Bleh. I tried to use Git to update some of the doc files, but getting
 the thing to work will be a miracle.

 git can't find the public keys unless I use msysgit. Great. How
 exactly do I cd to D:\ ?

 So I try git-gui. Seems to work fine, I clone the forked repo and make
 a few changes. I try to commit, it says I have to update first. So I
 do that. *Error: crash crash crash*. I try to close the thing, it just
 keeps crashing. CTRL+ALT+DEL time..

 Okay, I try another GUI package, GitExtensions. I make new
 public/private keys and add it to github, I'm about to clone but then
 I get this "fatal: The remote end hung up unexpectedly".

 I don't know what to say..

	Why do you think I keep arguing against Git every chance I get?

	Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Feb 02 2011
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
On 2/1/2011 6:17 PM, Andrej Mitrovic wrote:
 Bleh. I tried to use Git to update some of the doc files, but getting
 the thing to work will be a miracle.
 
 git can't find the public keys unless I use msysgit. Great. How
 exactly do I cd to D:\ ?
 
 So I try git-gui. Seems to work fine, I clone the forked repo and make
 a few changes. I try to commit, it says I have to update first. So I
 do that. *Error: crash crash crash*. I try to close the thing, it just
 keeps crashing. CTRL+ALT+DEL time..
 
 Okay, I try another GUI package, GitExtensions. I make new
 public/private keys and add it to github, I'm about to clone but then
 I get this "fatal: The remote end hung up unexpectedly".
 
 I don't know what to say..
I use cygwin for all my windows work (which I try to keep to a minimum). Works just fine in that environment.
Feb 01 2011
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 01/02/2011 23:07, Walter Bright wrote:
 Bruno Medeiros wrote:
 A more serious issue that I learned (or rather forgotten about before
 and remembered now) is the whole DVCSes keep the whole repository
 history locally aspect, which has important ramifications. If the
 repository is big, although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
Well, like I said, my concern about size is not so much disk space, but the time to make local copies of the repository, or cloning it from the internet (and the associated transfer times), both of which are not negligible yet. My project at work could easily have gone to 1Gb of repo size if it had been stored on a DVCS over the last year or so! :S

I hope this gets addressed at some point. But I fear that the main developers of both Git and Mercurial may be too "biased" towards projects which are typically somewhat small in size, in terms of bytes (projects that consist almost entirely of source code). For example, in UI applications it would be common to store binary data (images, sounds, etc.) in the source control. The other case is what I mentioned before, wanting to store dependencies together with the project (in my case including the javadoc and source code of the dependencies - and there's very good reasons to want to do that).

In this analysis: http://code.google.com/p/support/wiki/DVCSAnalysis they said that Git has some functionality to address this issue:

"Client Storage Management. Both Mercurial and Git allow users to selectively pull branches from other repositories. This provides an upfront mechanism for narrowing the amount of history stored locally. In addition, Git allows previously pulled branches to be discarded. Git also allows old revision data to be pruned from the local repository (while still keeping recent revision data on those branches). With Mercurial, if a branch is in the local repository, then all of its revisions (back to the very initial commit) must also be present, and there is no way to prune branches other than by creating a new repository and selectively pulling branches into it. There has been some work addressing this in Mercurial, but nothing official yet."

However, I couldn't find more info about this, and other articles and comments about Git seem to omit or contradict this... :S

Can Git really have a usable but incomplete local clone?

-- 
Bruno Medeiros - Software Engineer
Feb 04 2011
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-04 11:12:12 -0500, Bruno Medeiros 
<brunodomedeiros+spam com.gmail> said:

 Can Git really have an usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html> -- Michel Fortin michel.fortin michelf.com http://michelf.com/
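A shallow clone can be demonstrated against a purely local repository (made-up names; `file://` is used because `--depth` is ignored for plain local paths):

```shell
# Build a repo with three commits, then shallow-clone only the newest one.
git init -q -b master shallow-src && cd shallow-src
git config user.email you@example.com
git config user.name you
for i in 1 2 3; do echo $i > f.txt && git add f.txt && git commit -qm "commit $i"; done
cd ..
git clone -q --depth 1 "file://$PWD/shallow-src" shallow-copy
git -C shallow-copy rev-list --count HEAD   # only one commit of history present
```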
Feb 04 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 04/02/2011 20:11, Michel Fortin wrote:
 On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:

 Can Git really have an usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
I was about to say "Cool!", but then I checked the doc on that link and it says: "A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. " So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :( -- Bruno Medeiros - Software Engineer
Feb 09 2011
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-09 07:49:31 -0500, Bruno Medeiros 
<brunodomedeiros+spam com.gmail> said:

 On 04/02/2011 20:11, Michel Fortin wrote:
 On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:
 
 Can Git really have an usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
I was about to say "Cool!", but then I checked the doc on that link and it says: "A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. " So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :(
Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what I understand from: <http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html> -- Michel Fortin michel.fortin michelf.com http://michelf.com/
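A shallow clone that turns out to be too shallow can also be extended after the fact. Note the `--deepen`/`--unshallow` fetch flags are from current Git and postdate the 2011 versions discussed in this thread; the repository names are made up:

```shell
# Make a 3-commit repo, shallow-clone it, then fetch the missing history.
git init -q -b master deep-src && cd deep-src
git config user.email you@example.com
git config user.name you
for i in 1 2 3; do echo $i > f.txt && git add f.txt && git commit -qm "c$i"; done
cd ..
git clone -q --depth 1 "file://$PWD/deep-src" deep-copy
cd deep-copy
git fetch -q --unshallow    # fetch the rest of the history; 'git fetch
                            # --deepen <n>' would extend it by n commits instead
git rev-list --count HEAD   # the full history is now available locally
```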
Feb 09 2011
next sibling parent "nedbrek" <nedbrek yahoo.com> writes:
Hello all,

"Michel Fortin" <michel.fortin michelf.com> wrote in message 
news:iiu8dm$10te$1 digitalmars.com...
 On 2011-02-09 07:49:31 -0500, Bruno Medeiros 
 <brunodomedeiros+spam com.gmail> said:
 On 04/02/2011 20:11, Michel Fortin wrote:
 On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:

 Can Git really have an usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
I was about to say "Cool!", but then I checked the doc on that link and it says: "A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. " So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :(
Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what
The other way to collaborate is to email someone a diff. Git has a lot of support for extracting diffs from emails and applying the patches. HTH, Ned
Feb 10 2011
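The email-a-patch workflow Ned mentions boils down to `git format-patch` on the contributor's side and `git am` on the maintainer's side. A self-contained sketch (repository layout and names are made up for illustration):

```shell
set -e
work=$(mktemp -d) && cd "$work"
# Maintainer's repository with one initial commit.
git init -q upstream && cd upstream
git config user.name demo && git config user.email demo@example.com
echo v1 > file.txt && git add file.txt && git commit -qm "initial"
# Contributor clones, commits a fix, and exports it as a mail-ready patch.
cd .. && git clone -q upstream contrib && cd contrib
git config user.name contrib && git config user.email contrib@example.com
echo v2 > file.txt
git commit -qam "fix file"
git format-patch -1 -o ../patches    # writes ../patches/0001-fix-file.patch
# Maintainer applies the emailed patch; authorship and message are kept.
cd ../upstream
git am ../patches/0001-fix-file.patch
git log -1 --format=%s    # prints "fix file"
```

`git am` reads the patch exactly as it would arrive in a mailbox, so the same command works on messages saved from a mail client.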
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 09/02/2011 14:27, Michel Fortin wrote:
 On 2011-02-09 07:49:31 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:

 On 04/02/2011 20:11, Michel Fortin wrote:
 On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:

 Can Git really have an usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
I was about to say "Cool!", but then I checked the doc on that link and it says: "A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. " So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :(
Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what I understand from: <http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>
Interesting.

But it still feels very much like second-class functionality, not something they really have in mind to support well, at least not yet. Ideally, if one wants to push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing.

Before someone says, oh but that defeats some of the purposes of a distributed VCS, like being able to work offline: I know, and I personally don't care that much; in fact I find this "benefit" of DVCS has been overvalued way out of proportion. Does anyone do any serious coding while being offline for an extended period of time? Some people mentioned coding on the move, with laptops, but seriously, even if I am connected to the Internet I cannot code with my laptop only, I need it connected to a monitor, as well as a mouse (and preferably a keyboard as well).

-- 
Bruno Medeiros - Software Engineer
Feb 11 2011
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-11 08:05:27 -0500, Bruno Medeiros 
<brunodomedeiros+spam com.gmail> said:

 On 09/02/2011 14:27, Michel Fortin wrote:
 On 2011-02-09 07:49:31 -0500, Bruno Medeiros
 <brunodomedeiros+spam com.gmail> said:
 
 I was about to say "Cool!", but then I checked the doc on that link
 and it says:
 "A shallow repository has a number of limitations (you cannot clone or
 fetch from it, nor push from nor into it), but is adequate if you are
 only interested in the recent history of a large project with a long
 history, and would want to send in fixes as patches. "
 So it's actually not good for what I meant, since it is barely usable
 (you cannot push from it). :(
Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what I understand from: <http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>
Interesting.
 
But it still feels very much like a second-class functionality, not something they really have in mind to support well, at least not yet. Ideally, if one wants to do push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing.
Actually, there's no "central" repository in Git. But I agree with your idea in general: one of the remotes could be designated as being a source to look for when encountering a missing object, probably the one from which you shallowly cloned from. All we need is someone to implement that. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Feb 11 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/02/2011 18:31, Michel Fortin wrote:
 Ideally, if one wants to do push but the ancestor history is
 incomplete, the VCS would download from the central repository
 whatever revision/changeset information was missing.
Actually, there's no "central" repository in Git.
That stuff about DVCS not having a central repository is another thing that is being said a lot, but is only true in a very shallow (and non-useful) way. Yes, in DVCS there are no more "working copies" as in Subversion; now everyone's working copy is a full-fledged repository/clone that in technical terms is a peer of any other repository.

However, from an organizational point of view in a project, there is always going to be a "central" repository: the one that actually represents the product/application/library, where the builds and releases are made from. (Of course, there could be more than one central repository if there are multiple kinds of releases like stable/experimental, or forks of the product, etc.)

Maybe the DVCS world likes the term public/shared repository better, but that doesn't make much difference.

-- 
Bruno Medeiros - Software Engineer
Feb 16 2011
next sibling parent Russel Winder <russel russel.org.uk> writes:
On Wed, 2011-02-16 at 14:51 +0000, Bruno Medeiros wrote:
[ . . . ]
 That stuff about DVCS not having a central repository is another thing 
 that is being said a lot, but is only true in a very shallow (and 
 non-useful) way. Yes, in DVCS there are no more "working copies" as in 
 Subversion, now everyone's working copy is a full fledged 
 repository/clone that in technical terms is peer of any other repository.
 However, from an organizational point of view in a project, there is 
 always going to be a "central" repository. The one that actually 
 represents the product/application/library, where the builds and 
 releases are made from. (Of course, there could be more than one central 
 repository if there are multiple kinds of releases like 
 stable/experimental, or forks of the product, etc.)
Definitely the case. There can only be one repository that represents the official state of a given project. That isn't really the issue in the move from CVCS systems to DVCS systems.
 Maybe the DVCS world likes the term public/shared repository better, but 
 that doesn't make much difference.
In the Bazaar community, and I think increasingly in Mercurial and Git ones, people talk of the "mainline" or "master".

-- 
Russel.
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 16 2011
prev sibling parent reply Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/16 Russel Winder <russel russel.org.uk>:
 Definitely the case.  There can only be one repository that represents
 the official state of a given project.  That isn't really the issue in
 the move from CVCS systems to DVCS systems.
Just note that not all projects have a specific "state" to represent. Many projects are centered around the concept of a centralized project, a "core" team, and all-around central organisation and planning. Some projects, however (I guess the Linux kernel is a prime example), have been quite de-centralized in their nature for a long time.

In the case of KDE, for a centralized example, there is a definite "project version", which is the version currently blessed by the central project team. There is centralized project planning, including meetings, setting out goals for the coming development.

In the case of Linux, it's FAR less obvious. Sure, most people see master torvalds/linux-2.6.git as THE Linux version. However, there are many other trees interesting to track as well, such as the various distribution trees which might incorporate many drivers not in mainline; especially for older stability-oriented kernels, RHEL or Debian is probably THE version to care about. You might also be interested in special-environment kernels, such as non-x86 kernels, in which case you're probably more interested in the central repo for that architecture, which is rarely Linus's. Also, IIRC, hard and soft realtime enthusiasts don't look at Linus's tree first either.

Above all, in the Linux kernel there is not much "centralised planning". Linus doesn't call a big planning meeting quarterly to set up specific milestones for the next kernel release; rather, at the beginning of each cycle, he is spammed with things already developed independently, scratching someone's itch. He then cherry-picks the things that have got good reviews and are interesting for where he wants to go with the kernel. That is not to say that there isn't a lot of coordination and communication, but there isn't a clear centralized authority steering development in the same way as in many other projects.

The bottom line is, many projects, even ones using DVCS, are often centrally organized. 
However, the Linux kernel is clear evidence it is not the only project model that works.
Feb 16 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 16/02/2011 17:54, Ulrik Mikaelsson wrote:
 2011/2/16 Russel Winder<russel russel.org.uk>:
 Definitely the case.  There can only be one repository that represents
 the official state of a given project.  That isn't really the issue in
 the move from CVCS systems to DVCS systems.
Just note that not all projects have a specific "state" to represent. Many projects are centered around the concept of a centralized project, a "core"-team, and all-around central organisation and planning. Some projects however, I guess the Linux kernel is a prime example, have been quite de-centralized even in their nature for a long time. In the case of KDE, for a centralized example, there is a definite "project version", which is the version currently blessed by the central project team. There is a centralized project planning, including meetings, setting out goals for the coming development. In the case of Linux, it's FAR less obvious. Sure, most people see master torvalds/linux-2.6.git as THE Linux-version. However, there are many other trees interesting to track as well, such as the various distribution-trees which might incorporate many drivers not in mainline, especially for older stability-oriented kernels, RHEL or Debian is probably THE version to care about. You might also be interested in special-environment-kernels, such as non x86-kernels, in which case you're probably more interested in the central repo for that architecture, which is rarely Linuses. Also, IIRC, hard and soft realtime-enthusiasts neither looks at linuses tree first. Above all, in the Linux-kernel, there is not much of "centralised planning". Linus doesn't call to a big planning-meeting quarterly to set up specific milestones for the next kernel release, but in the beginning of each cycle, he is spammed with things already developed independently, scratching someones itch. He then cherry-picks the things that has got good reviews and are interesting for where he wants to go with the kernel. That is not to say that there aren't a lot of coordination and communication, but there isn't a clear centralized authority steering development in the same ways as in many other projects. The bottom line is, many projects, even ones using DVCS, are often centrally organized. 
However, the Linux kernel is clear evidence it is not the only project model that works.
Yeah, that's true. Some projects, the Linux kernel being one of the best examples, are more distributed in nature than not, in actual organizational terms. But projects like that are (and will remain) in the minority, a minority which is probably a very, very small. -- Bruno Medeiros - Software Engineer
Feb 17 2011
parent Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/17 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
 Yeah, that's true. Some projects, the Linux kernel being one of the best
 examples, are more distributed in nature than not, in actual organizational
 terms. But projects like that are (and will remain) in the minority, a
 minority which is probably a very, very small.
Indeed. However, I think it will be interesting to see how things develop, if this will be the case in the future too. The Linux kernel, and a few other projects were probably decentralized from start by necessity, filling very different purposes. However, new tools tends to affect models, which might make it a bit more common in the future. In any case, it's an interesting time to do software development.
Feb 18 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Bruno Medeiros wrote:
 but seriously, even if I am 
 connected to the Internet I cannot code with my laptop only, I need it 
 connected to a monitor, as well as a mouse, (and preferably a keyboard 
 as well).
I found I can't code on my laptop anymore; I am too used to and needful of a large screen.
Feb 11 2011
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/02/2011 23:30, Walter Bright wrote:
 Bruno Medeiros wrote:
 but seriously, even if I am connected to the Internet I cannot code
 with my laptop only, I need it connected to a monitor, as well as a
 mouse, (and preferably a keyboard as well).
I found I can't code on my laptop anymore; I am too used to and needful of a large screen.
Yeah, that was my point as well. The laptop monitor is too small for coding, (unless one has a huge laptop). -- Bruno Medeiros - Software Engineer
Feb 16 2011
prev sibling parent reply Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/4 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
 Well, like I said, my concern about size is not so much disk space, but the
 time to make local copies of the repository, or cloning it from the internet
 (and the associated transfer times), both of which are not neglectable yet.
 My project at work could easily have gone to 1Gb of repo size if in the last
 year or so it has been stored on a DVCS! :S

 I hope this gets addressed at some point. But I fear that the main
 developers of both Git and Mercurial may be too "biased" to experience
 projects which are typically somewhat small in size, in terms of bytes
 (projects that consist almost entirely of source code).
 For example, in UI applications it would be common to store binary data
 (images, sounds, etc.) in the source control. The other case is what I
 mentioned before, wanting to store dependencies together with the project
 (in my case including the javadoc and source code of the dependencies - and
 there's very good reasons to want to do that).
I think the storage/bandwidth requirements of DVCSs are very often exaggerated, especially for text, but also somewhat for blobs.

* For text content, the compression of archives reduces them to, perhaps, 1/5 of their original size?
  - That means that unless you completely rewrite a file 5 times during the course of a project, simple per-revision compression of the file will turn out smaller than the single uncompressed base file that Subversion transfers and stores.
  - The delta compression applied ensures small changes do not count as a "rewrite".

* For blobs, the archive compression may not do as much, and they certainly pose a larger challenge for storing history, but:
  - AFAIU, at least git delta-compresses even binaries, so even changes in them might be slightly reduced (dunno about the others)
  - I think more and more graphics today are written in SVG?
  - I believe, for most projects, audio files are usually not changed very often once entered into a project? Usually existing samples are simply copied in?

* For both binaries and text, and for most projects, the latest revision is usually the largest. (Projects usually grow over time, they don't consistently shrink.) I.e. older revisions are, compared to current, much much smaller, making the size of old history small compared to the size of current history.

Finally, as a test, I tried checking out the last version of druntime from SVN and comparing it to git (AFICT, history was preserved in the git migration); the results were about what I expected. Checking out trunk from SVN, and the whole history from git:

SVN: 7.06 seconds, 5.3 MB on disk
Git: 2.88 seconds, 3.5 MB on disk
Improvement Git/SVN: time reduced by 59%, space reduced by 34%.

I did not measure bandwidth, but my guess is it is somewhere between the disk and time reductions. Also, if someone has an example of a recently converted repository including some blobs, it would make an interesting experiment to repeat. 
Regards
/ Ulrik

-----
ulrik ulrik ~/p/test> time svn co http://svn.dsource.org/projects/druntime/trunk druntime_svn
...
0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k
544inputs+11736outputs (3major+3275minor)pagefaults 0swaps
ulrik ulrik ~/p/test> du -sh druntime_svn
5,3M	druntime_svn
ulrik ulrik ~/p/test> time git clone git://github.com/D-Programming-Language/druntime.git druntime_git
...
0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k
3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps
ulrik ulrik ~/p/test> du -sh druntime_git/
3,5M	druntime_git/
Feb 06 2011
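Ulrik's point about delta compression can be checked directly: several revisions of a large text file pack down to roughly the size of one compressed copy. A self-contained sketch (file contents and sizes are invented for illustration; exact numbers depend on the git version):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.name demo && git config user.email demo@example.com
# Commit ~100 KB of text, then five small edits on top of it.
seq 1 20000 > data.txt
git add data.txt
git commit -qm "base"
for i in 1 2 3 4 5; do
  echo "edit $i" >> data.txt
  git commit -qam "edit $i"
done
# Repack: deltas plus zlib make six revisions cost far less than six
# uncompressed copies (~600 KB in total).
git gc --aggressive --quiet
git count-objects -v | grep size-pack    # pack size, in KiB
```

On a typical run the pack ends up a small fraction of the ~600 KB of raw content, which is the effect Ulrik describes for Subversion's single uncompressed base file versus git's compressed full history.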
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 06/02/2011 14:17, Ulrik Mikaelsson wrote:
 2011/2/4 Bruno Medeiros<brunodomedeiros+spam com.gmail>:
 Well, like I said, my concern about size is not so much disk space, but the
 time to make local copies of the repository, or cloning it from the internet
 (and the associated transfer times), both of which are not neglectable yet.
 My project at work could easily have gone to 1Gb of repo size if in the last
 year or so it has been stored on a DVCS! :S

 I hope this gets addressed at some point. But I fear that the main
 developers of both Git and Mercurial may be too "biased" to experience
 projects which are typically somewhat small in size, in terms of bytes
 (projects that consist almost entirely of source code).
 For example, in UI applications it would be common to store binary data
 (images, sounds, etc.) in the source control. The other case is what I
 mentioned before, wanting to store dependencies together with the project
 (in my case including the javadoc and source code of the dependencies - and
 there's very good reasons to want to do that).
I think the storage/bandwidth requirements of DVCS:s are very often exagerated, especially for text, but also somewhat for blobs. * For text-content, the compression of archives reduces them to, perhaps, 1/5 of their original size? - That means, that unless you completely rewrite a file 5 times during the course of a project, simple per-revision-compression of the file will turn out smaller, than the single uncompressed base-file that subversion transfers and stores. - The delta-compression applied ensures small changes does not count as a "rewrite". * For blobs, the archive-compression may not do as much, and they certainly pose a larger challenge for storing history, but: - AFAIU, at least git delta-compresses even binaries so even changes in them might be slightly reduced (dunno about the others) - I think more and more graphics are today are written in SVG? - I believe, for most projects, audio-files are usually not changed very often, once entered a project? Usually existing samples are simply copied in? * For both binaries and text, and for most projects, the latest revision is usually the largest. (Projects usually grow over time, they don't consistently shrink) I.E. older revisions are, compared to current, much much smaller, making the size of old history smaller compared to the size of current history. Finally, as a test, I tried checking out the last version of druntime from SVN and compare it to git (AFICT, history were preserved in the git-migration), the results were about what I expected. Checking out trunk from SVN, and the whole history from git: SVN: 7.06 seconds, 5,3 MB on disk Git: 2.88 seconds, 3.5 MB on disk Improvement Git/SVN: time reduced by 59%, space reduced by 34%. I did not measure bandwidth, but my guess is it is somewhere between the disk- and time- reductions. Also, if someone has an example of a recently converted repository including some blobs it would make an interesting experiment to repeat. 
Regards / Ulrik ----- ulrik ulrik ~/p/test> time svn co http://svn.dsource.org/projects/druntime/trunk druntime_svn ... 0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k 544inputs+11736outputs (3major+3275minor)pagefaults 0swaps ulrik ulrik ~/p/test> du -sh druntime_svn 5,3M druntime_svn ulrik ulrik ~/p/test> time git clone git://github.com/D-Programming-Language/druntime.git druntime_git ... 0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k 3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps ulrik ulrik ~/p/test> du -sh druntime_git/ 3,5M druntime_git/
Yes, Brad had posted some statistics of the size of the Git repositories for dmd, druntime, and phobos, and yes, they are pretty small. Projects which contain practically only source code, and little to no binary data, are unlikely to grow much, and repo size will likely never be a problem. But it might not be the case for other projects (also considering that binary data is usually already well compressed, like .zip, .jpg, .mp3, .ogg, etc., so VCS compression won't help much).

It's unlikely you will see converted repositories with a lot of changing blob data. DVCS, at least in the way they work currently, simply kill this workflow/organization-pattern.

I very much suspect this issue will become more important as time goes on - a lot of people are still new to DVCS and they still don't realize the full implications of that architecture with regards to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word on the VCS battle, in that in a few years' time people will *again* be talking about and switching to another VCS :( . Mark these words. (The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy.)

-- 
Bruno Medeiros - Software Engineer
Feb 09 2011
next sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Bruno Medeiros wrote:
 Yes, Brad had posted some statistics of the size of the Git repositories 
 for dmd, druntime, and phobos, and yes, they are pretty small.
 Projects which contains practically only source code, and little to no
 binary data are unlikely to grow much and repo size ever be a problem.
 But it might not be the case for other projects (also considering that
 binary data is usually already well compressed, like .zip, .jpg, .mp3,
 .ogg, etc., so VCS compression won't help much).
 
 It's unlikely you will see converted repositories with a lot of changing
 blob data. DVCS, at the least in the way they work currently, simply
 kill this workflow/organization-pattern.
 I very much suspect this issue will become more important as time goes
 on - a lot of people are still new to DVCS and they still don't realize 
 the full implications of that architecture with regards to repo size.
 Any file you commit will add to the repository size *FOREVER*. I'm
 pretty sure we haven't heard the last word on the VCS battle, in that in
 a few years time people are *again* talking about and switching to
 another VCS :( . Mark these words. (The only way this is not going to
 happen is if Git or Mercurial are able to address this issue in a
 satisfactory way, which I'm not sure is possible or easy)
 
There are several Mercurial extensions that attempt to address this issue. See for example: http://wiki.netbeans.org/HgExternalBinaries or http://mercurial.selenic.com/wiki/BigfilesExtension I do not know how well they perform in practice.

	Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Feb 09 2011
prev sibling parent reply Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/9 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
 It's unlikely you will see converted repositories with a lot of changing
 blob data. DVCS, at the least in the way they work currently, simply kill
 this workflow/organization-pattern.
 I very much suspect this issue will become more important as time goes on -
 a lot of people are still new to DVCS and they still don't realize the full
 implications of that architecture with regards to repo size. Any file you
 commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
 heard the last word on the VCS battle, in that in a few years time people
 are *again* talking about and switching to another VCS :( . Mark these
 words. (The only way this is not going to happen is if Git or Mercurial are
 able to address this issue in a satisfactory way, which I'm not sure is
 possible or easy)
You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you? Besides, AFAIU this discussion was originally regarding the D language components, i.e. DMD, druntime and Phobos. Not a lot of binaries here.
Feb 09 2011
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 09/02/2011 23:02, Ulrik Mikaelsson wrote:
 2011/2/9 Bruno Medeiros<brunodomedeiros+spam com.gmail>:
 It's unlikely you will see converted repositories with a lot of changing
 blob data. DVCS, at the least in the way they work currently, simply kill
 this workflow/organization-pattern.
 I very much suspect this issue will become more important as time goes on -
 a lot of people are still new to DVCS and they still don't realize the full
 implications of that architecture with regards to repo size. Any file you
 commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
 heard the last word on the VCS battle, in that in a few years time people
 are *again* talking about and switching to another VCS :( . Mark these
 words. (The only way this is not going to happen is if Git or Mercurial are
 able to address this issue in a satisfactory way, which I'm not sure is
 possible or easy)
You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you?
You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any off the top of my head, except the one at my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now; I'm curious how big it is. (https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29)

But other than that, what exactly do you want to test? There is no specific thing to test: if you add a binary file (in a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no other way around it. (Unless on Git you rewrite the history on the repo, which doubtfully will ever be allowed on central repositories.)

-- 
Bruno Medeiros - Software Engineer
Feb 11 2011
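For reference, the history rewrite alluded to above does exist in Git as `git filter-branch`, with exactly the caveat mentioned: every rewritten commit gets a new hash, so it breaks every existing clone of a shared repository. A self-contained sketch (file names invented for illustration):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q repo && cd repo
git config user.name demo && git config user.email demo@example.com
# Commit a "large" binary, then some source on top of it.
head -c 100000 /dev/zero > assets.zip
git add assets.zip
git commit -qm "add assets"
echo 'int main() { return 0; }' > main.c
git add main.c
git commit -qm "add source"
# Rewrite all history, dropping the blob from every commit.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --index-filter \
  'git rm --cached -q --ignore-unmatch assets.zip' \
  --prune-empty -- --all
# Remove filter-branch's backup refs, or the old objects stay reachable.
git for-each-ref --format='%(refname)' refs/original |
  while read ref; do git update-ref -d "$ref"; done
git log --all --name-only --format= | sort -u    # only main.c remains
```

Only after the backup refs are gone (and the old objects expire or are pruned) does the repository actually shrink, which is why this is impractical on a central repository that others have already cloned.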
next sibling parent reply Jean Crystof <a a.a> writes:
Bruno Medeiros Wrote:

 On 09/02/2011 23:02, Ulrik Mikaelsson wrote:
 2011/2/9 Bruno Medeiros<brunodomedeiros+spam com.gmail>:
 It's unlikely you will see converted repositories with a lot of changing
 blob data. DVCS, at the least in the way they work currently, simply kill
 this workflow/organization-pattern.
 I very much suspect this issue will become more important as time goes on -
 a lot of people are still new to DVCS and they still don't realize the full
 implications of that architecture with regards to repo size. Any file you
 commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
 heard the last word on the VCS battle, in that in a few years time people
 are *again* talking about and switching to another VCS :( . Mark these
 words. (The only way this is not going to happen is if Git or Mercurial are
 able to address this issue in a satisfactory way, which I'm not sure is
 possible or easy)
You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you?
You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any of the top of my head, except the one in my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now, I'm curious how big it is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29) But other than that, what exactly do you want to test? There is no specific thing to test, if you add a binary file (from a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no other way around it. (Unless on Git you rewrite the history on the repo, which doubtfully will ever be allowed on central repositories)
One thing we've done at work with game asset files is we put them in a separate repository, and to conserve space we use a cleaned branch as a base for the work repository. The "graph" below shows how it works:

initial state -> alpha1 -> alpha2 -> beta1 -> internal rev X -> internal rev X+1 -> internal rev X+2 -> ... -> internal rev X+n -> beta2

Now we have a new beta2. What happens next is we take a snapshot copy of the state of beta2, go back to beta1, create a new branch and "paste" the snapshot there. Then we move the old working branch with internal revisions to someplace safe and start using this as a base, and the work continues:

initial state -> alpha1 -> alpha2 -> beta1 -> beta2 -> internal rev X+n+1 -> ...

The repository size won't become a problem with text / source code. Since you're a SVN advocate, please explain how well it works with 2500 GB of asset files?
Feb 11 2011
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/02/2011 13:14, Jean Crystof wrote:
 Since you're a SVN advocate, please explain how well it works with 2500 GB of
asset files?
I'm not an SVN advocate. I have started using DVCSs over Subversion, and generally I agree they are better, but what I'm saying is that they are not all roses... it is not a complete win-win, there are a few important cons, like this one. -- Bruno Medeiros - Software Engineer
Feb 16 2011
prev sibling parent Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/11 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
 On 09/02/2011 23:02, Ulrik Mikaelsson wrote:
 You don't happen to know about any projects of this kind in any other
 VCS that can be practically tested, do you?
You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any of the top of my head, except the one in my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now, I'm curious how big it is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29) But other than that, what exactly do you want to test? There is no specific thing to test, if you add a binary file (from a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no other way around it. (Unless on Git you rewrite the history on the repo, which doubtfully will ever be allowed on central repositories)
I want to test how much overhead the Git version _actually_ has compared to the SVN version. Even though the JPEGs are unlikely to be much more compressible with regular compression, given delta compression and the fact that the project grows over time, it might still be interesting to see how much overhead we're talking about, and what the performance over the network is.
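The "size X forever" point above is easy to verify with a quick experiment (file name and size are mine): even after an already-compressed binary is deleted and the repository is repacked as aggressively as possible, the pack retains the blob because history still references it.

```shell
# Sketch: a 1 MiB incompressible blob stays in the pack after deletion.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name "Example Dev"
head -c 1048576 /dev/urandom > asset.bin   # 1 MiB of incompressible data
git add asset.bin
git commit -qm "add asset"
rm asset.bin
git commit -qam "delete asset"
git gc --quiet --aggressive --prune=now    # repack as tightly as possible
git count-objects -vH                      # size-pack stays around 1 MiB
```

Only history rewriting (e.g. `git filter-branch`) would actually reclaim the space, which, as noted above, is rarely acceptable on a shared central repository.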
Feb 12 2011
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thu, 06 Jan 2011 17:42:29 +0200, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 What are the advantages of Mercurial over git? (git does allow multiple  
 branches.)
We've had a discussion in #d (IRC), and the general consensus there seems to be strongly in favor of Git/GitHub. For completeness (there's been a discussion before) here are my arguments:

1) Git has the largest user base - more people will be able to get started hacking on the source immediately.

(GitHub compared to DSource below; some of these also apply to Gitorious, Bitbucket, Launchpad)

2) One-click forking - you can easily publish improvements that are easily discoverable to people interested in the project. (This practically guarantees that an open-source project will never hit a dead end, as long as some people are interested in it - both occasional patches and maintained forks are easily discoverable.)

3) UI for pull requests (requests to merge changes in a forked repository upstream), with comments.

4) Inline comments (you can comment on a specific line in a commit/patch). This integrates very nicely with 3) for great code review capabilities.

5) (Unique to GitHub) The network graph allows visualizing all commits in all forks of the project.

6) GitHub is run by a commercial company, and the same infrastructure is used for hosting commercial projects. Therefore, you can expect better uptime and support.

GitHub has integrated wiki, issues and downloads (all optional). One thing GitHub doesn't have that DSource has is forums.

I think there is no "shame" in leaving DSource for DigitalMars projects; many large open-source projects use GitHub (see GitHub's front page). Some existing D projects on GitHub: https://github.com/languages/D

I think Jérôme's observations of Git performance are specific to Windows. Git is expected to be slower on Windows, since it runs on top of cygwin/msys. Here's a study on the Git wiki: https://git.wiki.kernel.org/index.php/GitBenchmarks

Google has done a study of Git vs. Mercurial in 2008: http://code.google.com/p/support/wiki/DVCSAnalysis
The main disadvantage they found in Git (poor performance over HTTP) doesn't apply to us, and I believe it was addressed in recent versions anyway.

Disclaimer: I use Git, and avoid Mercurial if I can, mainly because I don't want to learn another VCS. Nevertheless, I tried to be objective above.

As I mentioned on IRC, I strongly believe this must be a fully-informed decision, since changing VCSes again is unrealistic once it's done.

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 06 2011
next sibling parent Travis Boucher <boucher.travis gmail.com> writes:
On 01/06/11 17:55, Vladimir Panteleev wrote:
 Disclaimer: I use Git, and avoid Mercurial if I can mainly because I
 don't want to learn another VCS. Nevertheless, I tried to be objective
 above.
 As I mentioned on IRC, I strongly believe this must be a fully-informed
 decision, since changing VCSes again is unrealistic once it's done.
Recently I have been using mercurial (bitbucket). I have used git previously, and subversion a lot.

The question I think is less of git vs. mercurial and more of (git|mercurial) vs. (subversion), and even more (github|bitbucket) vs. dsource.

I like dsource a lot, however it doesn't compare feature-wise to github & bitbucket. The only argument feature-wise is forums, and in reality we already have many places to offer/get support for D and D projects other than the dsource forums (newsgroups & irc for example).

Another big issue I have with dsource is that it's hard to find active projects and projects that have been dead (sometimes for 5+ years). The 'social coding' networks allow projects to be easily revived in the case they do die.

Personally I don't care which is used (git|mercurial, github|bitbucket), as long as we find a better way of managing the code, and a nice way of doing experimental things and having a workflow to have those experimental things pulled into the official code bases.

dsource has served us well, and could still be a useful tool (maybe have it index D stuff from github|bitbucket?), but it's time to start using some of the other, better tools out there.
Jan 06 2011
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-06 19:55:10 -0500, "Vladimir Panteleev" 
<vladimir thecybershadow.net> said:

 2) One-click forking - you can easily publish improvements that are 
 easily  discoverable to people interested in the project. (This 
 practically  guarantees that an open-source project will never hit a 
 dead end, as long  as some people are interested in it - both 
 occasional patches and  maintained forks are easily discoverable.)
Easy forking is nice, but it could be a problem in our case. The license for the backend is not open-source enough for someone to republish it (in a separate repo of their own) without Walter's permission.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Jan 06 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin  
<michel.fortin michelf.com> wrote:

 Easy forking is nice, but it could be a problem in our case. The license  
 for the backend is not open-source enough for someone to republish it  
 (in a separate own repo) without Walter's permission.
I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 06 2011
next sibling parent Travis Boucher <boucher.travis gmail.com> writes:
On 01/06/11 18:30, Vladimir Panteleev wrote:
 On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin
 <michel.fortin michelf.com> wrote:

 Easy forking is nice, but it could be a problem in our case. The
 license for the backend is not open-source enough for someone to
 republish it (in a separate own repo) without Walter's permission.
I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.
I agree, separating out the proprietary stuff has other interesting possibilities such as a D front end written in D and integration with IDEs and analysis tools. Of course all of this is possible now, but it'd make merging front end updates so much nicer.
Jan 06 2011
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-06 20:30:53 -0500, "Vladimir Panteleev" 
<vladimir thecybershadow.net> said:

 On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin  
 <michel.fortin michelf.com> wrote:
 
 Easy forking is nice, but it could be a problem in our case. The 
 license  for the backend is not open-source enough for someone to 
 republish it  (in a separate own repo) without Walter's permission.
I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.
Which means that we need another solution for the backend, and if that solution isn't too worthless it could be used to host the other parts too and keep them together.

That said, wherever the repositories are kept, nothing prevents them from being automatically mirrored on GitHub (or anywhere else) by simply adding a post-update hook in the main repository.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
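The mirroring idea is a one-liner in practice. A minimal sketch, assuming a central bare repository with a remote named "github" already configured (the path, remote name, and URL below are illustrative, not details from the thread):

```shell
# Install a post-update hook that mirrors every accepted push.
REPO=/srv/git/dmd.git                 # illustrative path to the central bare repo
git -C "$REPO" remote add github git@github.com:example/dmd.git  # placeholder URL

cat > "$REPO/hooks/post-update" <<'EOF'
#!/bin/sh
# Runs after each accepted push; mirror all refs to the GitHub remote.
exec git push --quiet --mirror github
EOF
chmod +x "$REPO/hooks/post-update"
```

`--mirror` propagates branch creations, updates, and deletions, so the mirror tracks the main repository exactly without anyone having to push twice.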
Jan 06 2011
parent reply David Nadlinger <see klickverbot.at> writes:
On 1/7/11 2:43 AM, Michel Fortin wrote:
 Which means that we need another solution for the backend, and if that
 solution isn't too worthless it could be used to host the other parts
 too and keep them together.
Just to be sure: You did mean »together« as in »separate repositories on the same hosting platform«, right?

I don't even think we necessarily can't use GitHub or the likes for the backend; we'd just need permission from Walter to redistribute the sources through that repository, right? It's been quite some time since I had a look at the backend license though…

David
Jan 06 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I don't think git really needs MSYS? I mean I've just installed git
again and it does have its own executable runnable from the console.

It seems to have a gui as well, runnable with "git gui". Pretty cool.
And you can create an icon shortcut to the repo. Sweet.

I'd vote for either of the two, although I have to say I do like github a
lot. I didn't know it supported wiki pages though, I haven't seen
anyone use those on a project.
Jan 06 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 06 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/7/11, Vladimir Panteleev <vladimir thecybershadow.net> wrote:
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic
 <andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
Aye, but I didn't download that one, I got the one at the top here: https://code.google.com/p/msysgit/downloads/list?can=3

And if I put git.exe in its own directory, the only .dll it complains about is libiconv2.dll (well, that and some missing templates). Using these two alone seems to work fine.
Jan 06 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 04:31:35 +0200, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 1/7/11, Vladimir Panteleev <vladimir thecybershadow.net> wrote:
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic
 <andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
Aye, but I didn't download that one, I got the one on the top here: https://code.google.com/p/msysgit/downloads/list?can=3 And if I put git.exe in it's own directory the only .dll it complains about is libiconv2.dll (well that, and some missing templates). Using these two alone seems to work fine.
Ah, that's interesting! Must be a recent change. So they finally rewrote all the remaining bash/perl components in C? If so, that should give it a significant speed boost, most noticeable on Windows.

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 07 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-07 03:31, Andrej Mitrovic wrote:
 On 1/7/11, Vladimir Panteleev<vladimir thecybershadow.net>  wrote:
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic
 <andrej.mitrovich gmail.com>  wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
Aye, but I didn't download that one, I got the one on the top here: https://code.google.com/p/msysgit/downloads/list?can=3 And if I put git.exe in it's own directory the only .dll it complains about is libiconv2.dll (well that, and some missing templates). Using these two alone seems to work fine.
Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/

-- 
/Jacob Carlborg
Jan 08 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/8/11, Jacob Carlborg <doob me.com> wrote:
 Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/
I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it.
Jan 08 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sat, 08 Jan 2011 17:32:05 +0200, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 1/8/11, Jacob Carlborg <doob me.com> wrote:
 Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/
I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it.
Hmm, MSysGit comes with its own shell extension (GitCheetah), although it's just something to integrate the standard GUI tools (git gui / gitk) into the Explorer shell. It's optional, of course (installer option).

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 08 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
news:mailman.493.1294500734.4748.digitalmars-d puremagic.com...
 On 1/8/11, Jacob Carlborg <doob me.com> wrote:
 Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/
I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it.
You need to go into the "Icon Overlays" section of the settings and set up the "Exclude Paths" and "Include Paths" (Exclude everything, ex "C:\*", and then include whatever path or paths you keep all your projects in.) Once I did that (on TortoiseSVN) the speed was perfectly fine, even though my system was nothing more than an old single-core Celeron 1.7 GHz with 1GB RAM (it's back up to 2GB now though :)).
Jan 08 2011
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/8/11, Nick Sabalausky <a a.a> wrote:
 You need to go into the "Icon Overlays" section of the settings and set up
 the "Exclude Paths" and "Include Paths" (Exclude everything, ex "C:\*", and
 then include whatever path or paths you keep all your projects in.)
Ok thanks, I might give it another try.
Jan 08 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
news:op.vow11fqdtuzx1w cybershadow.mshome.net...
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic 
 <andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
That might not be too bad then if it's all packaged well. The main problem with MSYS/MinGW is just getting the damn thing downloaded, installed and running properly. Do you need to actually use the MSYS/MinGW command-line, or is that all hidden away and totally behind-the-scenes?
Jan 06 2011
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Nick Sabalausky Wrote:

 "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
 news:op.vow11fqdtuzx1w cybershadow.mshome.net...
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic 
 <andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
That might not be too bad then if it's all packaged well. The main problem with MSYS/MinGW is just getting the damn thing downloaded, installed and running properly. Do you need to actually use the MSYS/MinGW command-line, or is that all hidden away and totally behind-the-scenes?
I am able to run git commands from powershell. I ran a single install program that made it all happen for me. You can run what it calls "git bash" to open mingw.
Jan 06 2011
parent "Nick Sabalausky" <a a.a> writes:
"Jesse Phillips" <jessekphillips+D gmail.com> wrote in message 
news:ig61ni$frh$1 digitalmars.com...
 Nick Sabalausky Wrote:

 "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message
 news:op.vow11fqdtuzx1w cybershadow.mshome.net...
 On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic
 <andrej.mitrovich gmail.com> wrote:

 I don't think git really needs MSYS? I mean I've just installed git
 again and it does have it's own executable runnable from the console.
MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.
That might not be too bad then if it's all packaged well. The main problem with MSYS/MinGW is just getting the damn thing downloaded, installed and running properly. Do you need to actually use the MSYS/MinGW command-line, or is that all hidden away and totally behind-the-scenes?
I am able to run git commands from powershell. I ran a single install program that made it all happen for me. You can run what it calls "git bash" to open mingw.
I just tried the msysgit installer that Andrej linked to. I didn't try to use or create any repository, but everything seems to work great so far. Painless installer, Git GUI launches fine, "git" works from my ordinary windows command line, and I never had to touch MSYS directly in any way. Nice!
Jan 07 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
news:op.vowx58gqtuzx1w cybershadow.mshome.net...
 Git is expected to be slower on Windows, since it runs on top of 
 cygwin/msys.
I'd consider running under MSYS to be a *major* disadvantage. MSYS is barely usable garbage (and cygwin is just plain worthless).
Jan 06 2011
parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 03:30:16 +0200, Nick Sabalausky <a a.a> wrote:

 I'd consider running under MSYS to be a *major* disadvantage. MSYS is  
 barely
 usable garbage (and cygwin is just plain worthless).
Why? MSysGit works great here! I have absolutely no issues with it. It doesn't pollute PATH, either, because by default only one directory with git/gitk is added to PATH.

MSysGit can even integrate with PuTTYLink and use your PuTTY SSH sessions (but you can of course also use MSys' OpenSSH). Git GUI and Gitk even run better on Windows in my experience (something weird about Tcl/Tk on Ubuntu).

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
Jan 06 2011
prev sibling parent Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-01-07 at 02:55 +0200, Vladimir Panteleev wrote:
[ . . . ]
 We've had a discussion in #d (IRC), and the general consensus there seems
 to be strongly in favor of Git/GitHub. For completeness (there's been a
 discussion before) here are my arguments:
If the active D contributors are mostly in favour of Git then go for it. Personally I would go with Mercurial but a shift to DVCS is way, way more important than which DVCS!
 1) Git has the largest user base - more people will be able to get started
 hacking on the source immediately.
As with all statistics, you can prove nigh on any statement. I doubt Git actually has the largest user base, but it does have the zeitgeist. O'Reilly declared Git the winner in the DVCS race three years ago, and all the Linux, Ruby on Rails, etc., hype is about Git. On the other hand Sun/Oracle, Python, etc., etc. went with Mercurial. Mercurial and Bazaar are a smoother transition from Subversion, which may or may not be an issue.
 (GitHub compared to DSource below, some of these also apply to Gitorious,
 Bitbucket, Launchpad)
 2) One-click forking - you can easily publish improvements that are easily
 discoverable to people interested in the project. (This practically
 guarantees that an open-source project will never hit a dead end, as long
 as some people are interested in it - both occasional patches and
 maintained forks are easily discoverable.)
I think this is just irrelevant hype. The real issue is not how easy it is to fork a repository; the issue is how easy it is to create changesets, submit changesets for review, and merge changesets into Trunk. I guess the question is not about repositories, it is about review tools: Gerrit, Rietveld, etc. (Jokes about Guido's choice of the name Rietveld should be considered passé, if not part of the furniture :-) (cf. http://en.wikipedia.org/wiki/Gerrit_Rietveld)
 3) UI for pull requests (requests to merge changes in a forked repository
 upstream), with comments.
Launchpad certainly supports this, as I think BitBucket does. It is an important issue.
 4) Inline comments (you can comment on a specific line in a commit/patch).
 This integrates very nicely with 3) for great code review capabilities.
Better still, use a changeset review processing tool rather than just a workflow?
 5) (Unique to GitHub) The network graph allows visualizing all commits in
 all forks of the project.
Do the Linux folk use this? I doubt it; once you get to a very large number of forks, it will become useless. A fun tool but only for medium size projects. I guess the question is whether D will become huge or stay small?
 6) GitHub is run by a commercial company, and the same infrastructure is
 used for hosting commercial projects. Therefore, you can expect better
 uptime and support.
Launchpad and BitBucket are run by commercial companies.
 GitHub has integrated wiki, issues and downloads (all optional). One thing
 GitHub doesn't have that DSource has is forums.
Launchpad and BitBucket have all the same.
 I think there is no "shame" in leaving DSource for DigitalMars projects,
 many large open-source projects use GitHub (see GitHub's front page).
Everyone complains about DSource so either change it or move from it.
 Some existing D projects on GitHub: https://github.com/languages/D

 I think Jérôme's observations of Git performance are specific to Windows.
 Git is expected to be slower on Windows, since it runs on top of
 cygwin/msys.
 Here's a study on the Git wiki:
 https://git.wiki.kernel.org/index.php/GitBenchmarks

 Google has done a study of Git vs. Mercurial in 2008:
 http://code.google.com/p/support/wiki/DVCSAnalysis
 The main disadvantage they found in Git (poor performance over HTTP)
 doesn't apply to us, and I believe it was addressed in recent versions
 anyway.

 Disclaimer: I use Git, and avoid Mercurial if I can mainly because I don't
 want to learn another VCS. Nevertheless, I tried to be objective above.
 As I mentioned on IRC, I strongly believe this must be a fully-informed
 decision, since changing VCSes again is unrealistic once it's done.
I have to disagree that your presentation was objective, but let us leave it aside so as to avoid flame wars or becoming uncivil.

In the end there is a technical choice to be made between Git and Mercurial on the one side and Bazaar on the other, since the repository/branch model is so different. If the choice is between Git and Mercurial, then it is really down to personal prejudice, tribalism, etc. If the majority of people who are genuinely active in creating changesets want to go with Git, then do it. Having interminable debates on Git vs. Mercurial is the real enemy.

NB This is a decision that should be made by the people *genuinely* active in creating code changes -- people like me who are really just D users do not count in this election.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 07 2011
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 On 1/6/11 9:18 AM, Don wrote:
 Walter Bright wrote:
 Nick Sabalausky wrote:
 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the 
 whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help? Everyone could have
 (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just checkout out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
I don't, either.
There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy.

Occasionally I also use my own local DMD repository, but it doesn't work very well (it gets out of sync with the trunk too easily, because SVN isn't really set up for that development model).

I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain of changing the development model, I don't think it's a change we should make in the near term.
What are the advantages of Mercurial over git? (git does allow multiple branches.) Andrei
Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).

Technically, I don't think there's much difference between git and Mercurial, compared to how different they are from svn.
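The five-bugs-at-once problem quoted above is exactly what cheap local branches in a DVCS address. A minimal sketch (branch, file, and commit names are mine, not from DMD's actual history): each in-flight fix lives on its own branch, so fixes never interfere while waiting for upstream integration.

```shell
# Sketch: one local branch per pending bug fix.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name "Example Dev"
echo base > parser.c
git add parser.c
git commit -qm "trunk snapshot"
# first fix on its own branch
git checkout -qb fix-bug-1
echo "fix 1" >> parser.c
git commit -qam "fix bug 1"
# second fix branches from trunk again, independent of the first
git checkout -q -
git checkout -qb fix-bug-2
echo "fix 2" >> parser.c
git commit -qam "fix bug 2"
git branch --list 'fix-*'    # two independent branches awaiting review
```

When the trunk moves, each branch can be rebased or merged individually instead of untangling one shared working copy.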
Jan 06 2011
next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Fri, 07 Jan 2011 08:53:06 +0100, Don wrote:

 Andrei Alexandrescu wrote:
 What are the advantages of Mercurial over git? (git does allow multiple
 branches.)
 
 Andrei
Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).
I don't think Git's SVN hostility is a problem in practice. AFAIK there are tools (git-svn comes to mind) that can transfer the contents of an SVN repository, with full commit history and all, to a Git repo. Also, it will only have to be done once, so that shouldn't weigh too heavily on the decision.
 Technically, I don't think there's much difference between git and
 Mercurical, compared to how different they are from svn.
Then my vote goes to Git, simply because that's what I'm familiar with. -Lars
Jan 07 2011
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday 07 January 2011 03:33:48 Lars T. Kyllingstad wrote:
 On Fri, 07 Jan 2011 08:53:06 +0100, Don wrote:
 Andrei Alexandrescu wrote:
 What are the advantages of Mercurial over git? (git does allow multiple
 branches.)
 
 Andrei
Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).
I don't think Git's SVN hostility is a problem in practice. AFAIK there are tools (git-svn comes to mind) that can transfer the contents of an SVN repository, with full commit history and all, to a Git repo. Also, it will only have to be done once, so that shouldn't weigh too heavily on the decision.
 Technically, I don't think there's much difference between git and
 Mercurical, compared to how different they are from svn.
Then my vote goes to Git, simply because that's what I'm familiar with. -Lars
Well, you get the full commit history if you use git-svn to commit to an svn repository. I'm not sure it deals with svn branches very well though, since svn treats those as separate files, and so each branch is actually a separate set of files, and I don't believe that git will consider them to be the same.

However, since I always just use git-svn on the trunk of whatever svn repository I'm dealing with, I'm not all that experienced with how svn branches look in a git repository's history. And it may be that there's a way to import an svn repository in a manner which makes all of those branches look like a single set of files to git. I don't know.

But on the whole, converting from subversion to git is pretty easy. We technically use svn at work, but I always just use git-svn. Life is much more pleasant that way.

- Jonathan M Davis
Jan 07 2011
next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Fri, 07 Jan 2011 03:42:33 -0800, Jonathan M Davis wrote:

 On Friday 07 January 2011 03:33:48 Lars T. Kyllingstad wrote:
 On Fri, 07 Jan 2011 08:53:06 +0100, Don wrote:
 Andrei Alexandrescu wrote:
 What are the advantages of Mercurial over git? (git does allow
 multiple branches.)
 
 Andrei
Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).
I don't think Git's SVN hostility is a problem in practice. AFAIK there are tools (git-svn comes to mind) that can transfer the contents of an SVN repository, with full commit history and all, to a Git repo. Also, it will only have to be done once, so that shouldn't weigh too heavily on the decision.
 Technically, I don't think there's much difference between git and
 Mercurical, compared to how different they are from svn.
Then my vote goes to Git, simply because that's what I'm familiar with. -Lars
Well, you get the full commit history if you use git-svn to commit to an svn repository. I'm not sure it deals with svn branches very well though, [...]
Here's a page that deals with importing an SVN repo into git:

http://help.github.com/svn-importing/

Actually, based on that page, it seems GitHub can automatically take care of the whole transfer for us, if we decide to set up there.

-Lars
Jan 07 2011
prev sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Jonathan M Davis Wrote:

 Well, you get the full commit history if you use git-svn to commit to an svn
 repository. I'm not sure it deals with svn branches very well though, since
 svn treats those as separate files, and so each branch is actually a separate
 set of files, and I don't believe that git will consider them to be the same.
 However, since I always just use git-svn on the trunk of whatever svn
 repository I'm dealing with, I'm not all that experienced with dealing with
 how svn branches look in a git repository's history. And it may be that
 there's a way to specifically import an svn repository in a manner which
 makes all of those branches look as a single set of files to git. I don't
 know. But on the whole, converting from subversion to git is pretty easy. We
 technically use svn at work, but I always just use git-svn. Life is much
 more pleasant that way.
 
 - Jonathan M Davis
You can have git-svn import the standard svn layout. This will then import the tags and branches. As best I can tell, the reason it takes so long to do this is that it is analyzing each branch to see where it occurred, and then making the proper branches as they would be in Git. You can specify your own layout if your branches aren't set up like a standard svn repository.
Jan 07 2011
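[Editor's note: a minimal sketch of the standard-layout import Jesse describes, using a throwaway local SVN repository so it runs offline. It assumes both subversion and git-svn are installed; all paths are scratch locations.]

```shell
# Create a scratch SVN repo with the conventional trunk/branches/tags layout.
svnadmin create /tmp/demo-svn
svn mkdir -m "standard layout" \
    file:///tmp/demo-svn/trunk \
    file:///tmp/demo-svn/branches \
    file:///tmp/demo-svn/tags
# --stdlayout is shorthand for "-T trunk -b branches -t tags"; git-svn walks
# the history and recreates each SVN branch and tag as a Git ref.
git svn clone --stdlayout file:///tmp/demo-svn /tmp/demo-git
cd /tmp/demo-git
git branch -r   # SVN branches and tags appear as remote refs
```

On a large repository the clone step is the slow part, for the reason given above: git-svn replays every revision to reconstruct where each branch and tag diverged.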
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 Mercurial doesn't have the blatant hostility to Windows that is evident 
 in git. It also doesn't have the blatant hostility to svn (in fact, it 
 tries hard to ease the transition).
I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.
Jan 07 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:ig7mee$1r10$1 digitalmars.com...
 Don wrote:
 Mercurial doesn't have the blatant hostility to Windows that is evident 
 in git. It also doesn't have the blatant hostility to svn (in fact, it 
 tries hard to ease the transition).
I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.
When I installed msysgit I got Git entries added to Explorer's right-click menu. Do those not work?
Jan 07 2011
prev sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Fri, 07 Jan 2011 20:33:42 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Don wrote:
 Mercurial doesn't have the blatant hostility to Windows that is evident  
 in git. It also doesn't have the blatant hostility to svn (in fact, it  
 tries hard to ease the transition).
I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.
Could you please elaborate? A lot of people are using Git on Windows without any problems. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 07 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Fri, 07 Jan 2011 20:33:42 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Don wrote:
 Mercurial doesn't have the blatant hostility to Windows that is 
 evident in git. It also doesn't have the blatant hostility to svn (in 
 fact, it tries hard to ease the transition).
I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.
Could you please elaborate? A lot of people are using Git on Windows without any problems.
No download for Windows from the git site.
Jan 07 2011
next sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Walter Bright Wrote:

 Vladimir Panteleev wrote:
 On Fri, 07 Jan 2011 20:33:42 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Don wrote:
 Mercurial doesn't have the blatant hostility to Windows that is 
 evident in git. It also doesn't have the blatant hostility to svn (in 
 fact, it tries hard to ease the transition).
I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.
Could you please elaborate? A lot of people are using Git on Windows without any problems.
No download for Windows from the git site.
Direct: http://code.google.com/p/msysgit/downloads/detail?name=Git-1.7.3.1-preview20101002.exe&can=2&q= Website: http://code.google.com/p/msysgit/
Jan 07 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/7/11, Walter Bright <newshound2 digitalmars.com> wrote:
 No download for Windows from the git site.
There's a big Windows icon on the right: http://git-scm.com/
Jan 07 2011
prev sibling parent reply David Nadlinger <see klickverbot.at> writes:
On 1/7/11 10:21 PM, Walter Bright wrote:
 No download for Windows from the git site.
Are you deliberately trying to make yourself look ignorant? Guess what's right at the top of http://git-scm.com/… David
Jan 07 2011
next sibling parent reply David Nadlinger <see klickverbot.at> writes:
On 1/7/11 10:31 PM, David Nadlinger wrote:
 Are you deliberately trying to make yourself look ignorant? Guess what's
 right at the top of http://git-scm.com/
I just realized that this might have sounded a bit too harsh; no offense was intended. I am just somewhat annoyed by the frequency with which easy-to-research facts are misquoted on this newsgroup right now, as well as how this could influence the way D and the D community are perceived as a whole. David
Jan 07 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
David Nadlinger:

 I just realized that this might have sounded a bit too harsh, there was 
 no offense intended.
Being gentle and not offensive is Just Necessary [TM] in a newsgroup like this. On the other hand Walter is a pretty adult person so I think he's not offended. Bye, bearophile
Jan 07 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/7/11 5:49 PM, bearophile wrote:
 David Nadlinger:

 I just realized that this might have sounded a bit too harsh, there was
 no offense intended.
Being gentle and not offensive is Just Necessary [TM] in a newsgroup like this. On the other hand Walter is a pretty adult person so I think he's not offended. Bye, bearophile
Well he is adult all right. Pretty? Maybe not that much :o). Andrei
Jan 07 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I don't recall Walter ever losing his cool, which is quite an
achievement on this NG.
Jan 07 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Well he is adult all right. Pretty? Maybe not that much :o).
Hawt? Perhaps!
Jan 07 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
David Nadlinger wrote:
 On 1/7/11 10:21 PM, Walter Bright wrote:
 No download for Windows from the git site.
Are you deliberately trying to make yourself look ignorant? Guess what's right at the top of http://git-scm.com/
So it is. The last time I looked, it wasn't there.
Jan 07 2011
prev sibling parent reply David Nadlinger <see klickverbot.at> writes:
On 1/7/11 8:53 AM, Don wrote:
 What are the advantages of Mercurial over git? (git does allow
 multiple branches.)

 Andrei
Essentially political and practical rather than technical. […]
By the way, I just stumbled upon this page presenting arguments in favor of Git, which seems about as objective to me as it will probably get: http://whygitisbetterthanx.com/ Obviously, this site is biased in the sense that it doesn't mention possible arguments against Git – do you know of any similar collections for other DVCS? David
Jan 08 2011
parent "Jérôme M. Berger" <jeberger free.fr> writes:
David Nadlinger wrote:
 On 1/7/11 8:53 AM, Don wrote:
 What are the advantages of Mercurial over git? (git does allow
 multiple branches.)

 Andrei
Essentially political and practical rather than technical. […]

 By the way, I just stumbled upon this page presenting arguments in favor
 of Git, which seems about as objective to me as it will probably get:
 http://whygitisbetterthanx.com/

 Obviously, this site is biased in the sense that it doesn't mention
 possible arguments against Git – do you know of any similar collections
 for other DVCS?

* Cheap local branching
	Available in Mercurial with the LocalbranchExtension.
* Git is fast
	Probably true on Linux, not so on Windows. The speed is acceptable
for most operations, but it is slower than Mercurial.
* Staging area
	Could actually be seen as a drawback since it adds extra
complexity. Depending on your workflow, most of the use cases can be
handled more easily in Mercurial with the crecord extension.
* GitHub
	Bitbucket.
* Easy to learn
	Mouahahahahahahah!

The other points are true, but they are also applicable to any DVCS.

		Jerome

--
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jan 09 2011
prev sibling parent Gour <gour atmarama.net> writes:
On Thu, 06 Jan 2011 09:42:29 -0600
 "Andrei" =3D=3D Andrei Alexandrescu wrote:
Andrei> What are the advantages of Mercurial over git? (git does allow Andrei> multiple branches.) It's not as established as Git/Mercurial, but I like a a lot...coming from Sqlite main developer - Fossil (http://fossil-scm.org). Simple command set, very powerful, using sqlite3 back-end for storage, integrated wiki, distributed bug tracker, extra lite for hosting...it'sp ossible to import/export from/to Git's fast-import/export (see http://fossil-scm.org/index.html/doc/trunk/www/inout.wiki). Sincerely, Gour --=20 Gour | Hlapicina, Croatia | GPG key: CDBF17CA ----------------------------------------------------------------
Jan 07 2011
prev sibling parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Nick Sabalausky wrote:

 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
 On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright
 <newshound2 digitalmars.com>wrote:

 That's pretty much what I'm afraid of, losing my grip on how the whole
 thing works if there are multiple dmd committers.

 Perhaps using a modern SCM like Git might help?  Everyone could have
 (and
should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius
I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.
There isn't, because it is basically the same workflow. The reason people would prefer Git-style fork-and-merge over sending svn patches is that these tools do the same job much better. GitHub increases the usability further and gives you nice PR for free. OTOH, I understand that it's not exactly attractive to invest time in replacing something that also works right now.
Jan 06 2011
prev sibling next sibling parent reply Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 03:35 -0600, Caligo wrote:
[ . . . ]
 Perhaps using a modern SCM like Git might help?  Everyone could have
 (and should have) commit rights, and they would send pull requests.
 You or one of the managers would then review the changes and pull and
 merge with the main branch.  It works great; just check out
 Rubinius on Github to see what I mean:
 https://github.com/evanphx/rubinius
Whilst I concur (massively) that Subversion is no longer the correct tool for collaborative working, especially on FOSS projects, but also for proprietary ones, I am not sure Git is the best choice of tool. Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are actually much easier to work with. Where Git has GitHub, Mercurial has BitBucket, and Bazaar has Launchpad.
--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 06 2011
parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Russel Winder Wrote:

 Whilst I concur (massively) that Subversion is no longer the correct
 tool for collaborative working, especially on FOSS projects, but also
 for proprietary ones, I am not sure Git is the best choice of tool.
 Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are
 actually much easier to work with.  Where Git has GitHub, Mercurial has
 BitBucket, and Bazaar has Launchpad.
First I think one must be convinced to move. Then that using a social site adds even more. Then we can discuss which one to use. My personal choice is git because I don't use the others. And this was a great read: http://progit.org/book/
Jan 06 2011
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-06 07:28, Walter Bright wrote:
 Nick Sabalausky wrote:
 Automatically accepting all submissions immediately into the main line
 with no review isn't a good thing either. In that article he's
 complaining about MS, but MS is notorious for ignoring all non-MS
 input, period. D's already light-years ahead of that. Since D's purely
 volunteer effort, and with a lot of things to be done, sometimes
 things *are* going to tale a while to get in. But there's just no way
 around that without major risks to quality. And yea Walter could grant
 main-line DMD commit access to others, but then we'd be left with a
 situation where no single lead dev understands the whole program
 inside and out - and when that happens to projects, that's inevitably
 the point where it starts to go downhill.
That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers.
That is very understandable. Maybe we can have a look at the Linux kernel development process: http://ldn.linuxfoundation.org/book/how-participate-linux-community As I understand it, Linus Torvalds' day-to-day work on the Linux kernel mostly consists of merging changes made in developer branches into the main branch.
 On the bright (!) side, Brad Roberts has gotten the test suite in shape
 so that anyone developing a patch can run it through the full test
 suite, which is a prerequisite to getting it folded in.
Has this been announced (somewhere other than the DMD mailing list)? Where can one get the test suite? It should be available and easy to find, with instructions on how to run it. Somewhere on the Digitalmars site, and/or perhaps released with the DMD source code?
 In the last release, most of the patches in the changelog were done by
 people other than myself, although yes, I vet and double check them all
 before committing them.
-- /Jacob Carlborg
Jan 06 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Jacob Carlborg wrote:
 Has this been announced (somewhere else than the DMD mailing list)? 
 Where can one get the test suite? It should be available and easy to 
 find and with instructions how to run it. Somewhere on the Digitalmars 
 site or/and perhaps released with the DMD source code?
It's part of dmd on svn: http://www.dsource.org/projects/dmd/browser/trunk/test
Jan 06 2011
prev sibling next sibling parent reply Caligo <iteronvexor gmail.com> writes:
On Thu, Jan 6, 2011 at 5:50 AM, Russel Winder <russel russel.org.uk> wrote:

 Whilst I concur (massively) that Subversion is no longer the correct
 tool for collaborative working, especially on FOSS projects, but also
 for proprietary ones, I am not sure Git is the best choice of tool.
 Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are
 actually much easier to work with.  Where Git has GitHub, Mercurial has
 BitBucket, and Bazaar has Launchpad.

 --
 Russel.

 =============================================================================
 Dr Russel Winder      t: +44 20 7585 2200   voip:
 sip:russel.winder ekiga.net <sip%3Arussel.winder ekiga.net>
 41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
 London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
BitBucket has copied almost everything from Github, and I don't understand how they've never been sued. http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html Github team has been the innovator here, and they never stop improving the site with new features and bug fixes. It would be nice to support their work by using Github. There is also Gitorious. It only offers free hosting and it is more team orientated than Github, but Github has recently added the "Organization" feature. The interesting thing about Gitorious is that you can run it on your own server. I don't think you can do that with Github. One cool thing about Github that I like is gist: https://gist.github.com/ It's a pastebin, but it uses Git and supports D syntax. People are always sharing snippets on these newsgroups, and it would have been nice if they were gists. I've never used Bazaar, so no comment on that. But, between Git and Mercurial, I vote for Git.
Jan 06 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 07.01.2011 03:20, schrieb Caligo:
 On Thu, Jan 6, 2011 at 5:50 AM, Russel Winder <russel russel.org.uk
 <mailto:russel russel.org.uk>> wrote:

     Whilst I concur (massively) that Subversion is no longer the correct
     tool for collaborative working, especially on FOSS projects, but also
     for proprietary ones, I am not sure Git is the best choice of tool.
     Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are
     actually much easier to work with.  Where Git has GitHub, Mercurial has
     BitBucket, and Bazaar has Launchpad.

     --
     Russel.
     =============================================================================
     Dr Russel Winder      t: +44 20 7585 2200   voip:
     sip:russel.winder ekiga.net <mailto:sip%3Arussel.winder ekiga.net>
     41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
     <mailto:russel russel.org.uk>
     London SW11 1EN, UK   w: www.russel.org.uk <http://www.russel.org.uk>
       skype: russel_winder


 BitBucket has copied almost everything from Github, and I don't understand how
 they've never been sued.

 http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html
Yeah, see also: http://schacon.github.com/bitbucket.html by the same author When this rant was new I read a page that listed where Github took its own ideas and designs from (SourceForge, for example), but I can't find it anymore. This rant was bullshit, as even the author seems to have accepted. I don't understand why people still mirror and link this crap.
Jan 06 2011
parent Caligo <iteronvexor gmail.com> writes:
On Thu, Jan 6, 2011 at 8:47 PM, Daniel Gibson <metalcaedes gmail.com> wrote:

 Yeah, see also: http://schacon.github.com/bitbucket.html by the same
 author

 When this rant was new I read a page that listed where Github stole their
 ideas and designs (Sourceforce for example), but I can't find it anymore.
 This rant was bullshit, as even the author seems to have accepted.

 I don't understand why people still mirror and link this crap.
hmmm...Interesting! I did not know that, and thanks for the share. There is even a discussion about it on reddit where the author apologizes. I don't understand why he would do such a thing.
Jan 06 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Caligo" <iteronvexor gmail.com> wrote in message 
news:mailman.461.1294366839.4748.digitalmars-d puremagic.com...
 BitBucket has copied almost everything from Github, and I don't understand
 how they've never been sued.

 http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html
That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah.
Jan 06 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Nick Sabalausky wrote:

 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.461.1294366839.4748.digitalmars-d puremagic.com...
 BitBucket has copied almost everything from Github, and I don't
 understand how they've never been sued.

 http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html
That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah.
Really? When I first visited bitbucket, I thought this was from the makers of github launching a hg site from their github code, with some slightly altered css. There is quite a difference between github, gitorious and launchpad, on the other hand.
Jan 07 2011
parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Lutger Blijdestijn wrote:

 Nick Sabalausky wrote:
 
 "Caligo" <iteronvexor gmail.com> wrote in message
 news:mailman.461.1294366839.4748.digitalmars-d puremagic.com...
 BitBucket has copied almost everything from Github, and I don't
 understand how they've never been sued.

 http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html
That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah.
Really? When I first visited bitbucket, I thought this was from the makers of github launching a hg site from their github code, with some slightly altered css. There is quite a difference between github, gitorious and launchpad, on the other hand.
To be clear: not that I care much, good ideas should be copied (or, from your perspective, bad ideas could ;) )
Jan 07 2011
prev sibling parent Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-06 at 20:20 -0600, Caligo wrote:

< . . . ignoring all the plagiarism rubbish which has been dealt with by
others . . . >

 There is also Gitorious.  It only offers free hosting and it is more
 team orientated than Github, but Github has recently added the
 "Organization" feature.   The interesting thing about Gitorious is
 that you can run it on your own server.  I don't think you can do that
 with Github.
I have never used Gitorious (though I do have an account). My experience is limited to GitHub, BitBucket, GoogleCode, and Launchpad. The crucial difference between GitHub and BitBucket on the one hand and Launchpad on the other is that Launchpad supports teams as well as individuals. GoogleCode enforces teams and doesn't support individuals at all so doesn't really count. Where SourceForge and all the others sit these days is, I guess, a moot point.
 One cool thing about Github that I like is gist:
 https://gist.github.com/
 It's a pastebin, but it uses Git and supports D syntax.  People are
 always sharing snippets on these newsgroups, and it would have been
 nice if they were gists.
Personally I have never used these things, nor found a reason to do so.
 I've never used Bazaar, so no comment on that.  But, between Git and
 Mercurial, I vote for Git.
Mercurial and Git are very similar in so many ways, though there are some crucial differences (the index in Git being the most obvious, but for me the most important is remote tracking branches). Bazaar has a completely different core model. Sadly, fashion and tribalism tend to play far too important a role in all discussions of DVCS -- I note no-one has mentioned Darcs or Monotone yet! And recourse to arguments about the number of projects using a given DVCS is fatuous. What matters is the support for VCS in the tool chain and the workflow. It is undoubtedly the case that Git and Mercurial currently have the most support across the board, though Canonical are trying very hard to make Bazaar a strong player -- sadly they are focusing too much on Ubuntu and not enough on Windows to stay in the game for software developers, with no support for Visual Studio. Anecdotal experience seems to indicate that Mercurial has a more average-developer-friendly use model -- though there are some awkward corners. Despite a huge improvement to Git over the last 3 years, it still lags Mercurial on this front. However, worrying about the average developer is more important for companies and proprietary work than it is for FOSS projects -- where the skill set appears to be better than average. All in all it is up to the project lead to make a choice and for everyone else to live with it. I would advise Walter to shift to one of Mercurial or Git, but if he wants to stick with Subversion -- and suffer the tragic inability to sanely work with branches -- that is his choice. As any Git/Mercurial/Bazaar user knows, Git, Mercurial and Bazaar can all be used as Subversion clients. However, without creating a proper bridge, these clients cannot be used in a DVCS peer group because of the rebasing that is enforced -- at least by Git and Mercurial; Bazaar has a mode of working that avoids the rebasing and so the Subversion repository appears as a peer in the DVCS peer group.
Perhaps the interesting models to consider are GoogleCode, which chose to support Mercurial and Subversion, and Codehaus, which chose to support Git and Subversion (using Gitosis). Of course DSource already supports all three.
--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Jan 07 2011
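[Editor's note: the client-style workflow Russel describes, with its forced rebasing, looks roughly like this in Git. The repository URL is a placeholder; this is a sketch of the git-svn commands involved, not a record of any actual D workflow.]

```shell
# Using Git as a Subversion client.  The URL below is hypothetical.
git svn clone --stdlayout https://svn.example.org/project work
cd work
# ... edit files, then commit locally as usual:
git commit -am "Fix the frobnicator"
# Fetch new SVN revisions and replay local commits on top of them --
# this is the enforced rebase Russel refers to:
git svn rebase
# Send each local commit to the SVN server as an SVN revision:
git svn dcommit
```

Because dcommit rewrites each local commit as it lands in SVN, sharing those commits with other Git peers before pushing is what breaks down, which is Russel's point about needing a proper bridge.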
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-05 22:39, bearophile wrote:
 Jacob Carlborg:

 And sometimes Mac OS X is *slightly* ahead of the other OSes, Tango has
 had support for dynamic libraries on Mac OS X using DMD for quite a
 while now. For D2 a patch is just sitting there in bugzilla waiting for
 the last part of it to be commited. I'm really pushing this because
 people seem to forget this.
A quotation from here: http://whatupdave.com/post/1170718843/leaving-net
 Also stop using codeplex its not real open source! Real open source isnt
submitting a patch and waiting/hoping that one day it might be accepted and
merged into the main line.<
Bye, bearophile
So what are you saying here? That I should fork druntime and apply the patches myself? I already have too many projects to handle, I probably can't handle yet another one. -- /Jacob Carlborg
Jan 06 2011
parent bearophile <bearophileHUGS lycos.com> writes:
Jacob Carlborg:

 So what are you saying here? That I should fork druntime and apply the 
 patches myself? I already have too many projects to handle, I probably 
 can't handle yet another one.
See my more recent post for some answer. I think changing how DMD source code is managed (allowing people to create branches, etc) is not going to increase your work load. On the other hand it's going to make D more open source for people that like this and have some free time. Bye, bearophile
Jan 06 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Adrian Mercieca:
 
 How does D square up, performance-wise, to C and C++ ? Has anyone got any
 benchmark figures?
DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (64 bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too.
The benchmarks you posted where it was supposedly slower in integer math turned out to be mistaken.
 I've seen DMD programs slow down if you nest two
 foreach inside each other. There is a collection of different slow
 microbenchmarks.
 
 But LDC1 is able to run D1 code that looks like C about equally fast as C or
 sometimes a bit faster.
 
 DMD2 uses thread local memory on default that in theory slows code down a bit
 if you use global data, but I have never seen a benchmark that shows this
 slowdown clearly (an there is __gshared too, but sometimes it seems a
 placebo).
 
 If you use higher level constructs your program will often go slower.
Rubbish. The higher level constructs are "lowered" into the equivalent low level constructs.
 Often one of the most important things for speed is memory management, D
 encourages to heap allocate a lot (class instances are usually on the heap),
 and this is very bad for performance,
That is not necessarily true. Using the gc can often result in higher performance than explicit allocation, for various subtle reasons. And saying it is "very bad" is just wrong.
 also because the built-in GC doesn't
 have an Eden generation managed as a stack. So if you want more performance
 you must program like in Pascal/Ada, stack-allocating a lot, or using memory
 pools, etc. It's a lot a matter of self-discipline while you program.
This is quite wrong.
Jan 05 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 05 Jan 2011 14:53:16 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 bearophile wrote:
 Often one of the most important things for speed is memory management, D
 encourages to heap allocate a lot (class instances are usually on the  
 heap),
 and this is very bad for performance,
That is not necessarily true. Using the gc can often result in higher performance than explicit allocation, for various subtle reasons. And saying it is "very bad" is just wrong.
In practice, it turns out D's GC is pretty bad performance-wise. Avoiding using the heap (or using the C heap) whenever possible usually results in a vast speedup. This is not to say that the GC concept is to blame, I think we just have a GC that is not the best out there. It truly depends on the situation. In something like a user app where the majority of the time is spent sleeping waiting for events, the GC most likely does very well. I expect the situation to get better when someone has time to pay attention to increasing GC performance. -Steve
Jan 05 2011
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
For people interested in do-it-yourself regarding benchmarking D, there are
some synthetic ones here:
http://is.gd/kbiQM

Many others on request.

Bye,
bearophile
Jan 05 2011
prev sibling parent reply Long Chang <changedalone gmail.com> writes:
2011/1/6 Walter Bright <newshound2 digitalmars.com>

 bearophile wrote:

  Adrian Mercieca:

   How does D square up, performance-wise, to C and C++ ? Has anyone got
   any benchmark figures?

  DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (the
  64 bit version will use 8 or more SSE registers), and sometimes it's
  slower for integer programs too.

 The benchmarks you posted where it was supposedly slower in integer math
 turned out to be mistaken.

  I've seen DMD programs slow down if you nest two foreach inside each
  other. There is a collection of different slow microbenchmarks. But LDC1
  is able to run D1 code that looks like C about equally fast as C, or
  sometimes a bit faster. DMD2 uses thread-local memory by default, which
  in theory slows code down a bit if you use global data, but I have never
  seen a benchmark that shows this slowdown clearly (and there is __gshared
  too, but sometimes it seems a placebo). If you use higher level
  constructs your program will often go slower.

 Rubbish. The higher level constructs are "lowered" into the equivalent low
 level constructs.

  Often one of the most important things for speed is memory management. D
  encourages heap allocating a lot (class instances are usually on the
  heap), and this is very bad for performance,

 That is not necessarily true. Using the gc can often result in higher
 performance than explicit allocation, for various subtle reasons. And
 saying it is "very bad" is just wrong.

  also because the built-in GC doesn't have an Eden generation managed as a
  stack. So if you want more performance you must program like in
  Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's
  largely a matter of self-discipline while you program.

 This is quite wrong.
I have been using D for 3 years. I am not in the newsgroup much because my English is very poor. D is excellent; I have tried it with libevent, libev, PCRE, SQLite, c-ares, DWT, and a lot of other amazing libraries. It works great with C libraries, and I enjoy it very much. I work as a web developer, and I have also tried D in the web field, but it did not turn out well. Adam D. Ruppe has posted some interesting code here, and I have seen a lot of people try D for the web, for example: mango, https://github.com/temiy/daedalus, Sendero ... But in the end I have to say that most of these D projects are dying. D is like a beautiful girlfriend: you can have a lot of fun playing with her, but she is scared to make promises and you can't count your life on her. She is not good marriage material: her life is still a mess, and day after day she gets smarter but not more mature. So if you want to do some serious work, you'd better choose another language; if you just want fun, D is a good companion.
Jan 05 2011
parent "Nick Sabalausky" <a a.a> writes:
"Long Chang" <changedalone gmail.com> wrote in message 
news:mailman.445.1294291595.4748.digitalmars-d puremagic.com...
[snip]

 D is like a beautiful girlfriend: you can have a lot of fun playing with
 her, but she is scared to make promises and you can't count your life on
 her. [...] So if you want to do some serious work, you'd better choose
 another language; if you just want fun, D is a good companion.
I'd say D is more like an above-average teen. Sure, they're young and naturally may still fuck up now and then, but they're operating on a strong foundation and just need a little more training.
Jan 05 2011
prev sibling parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Walter Bright Wrote:

 I'm not sure I see how that's any different from everyone having "create and 
 submit a patch" rights, and then having Walter or one of the managers review 
 the changes and merge/patch with the main branch.
I don't, either.
I actually found some D repositories at GitHub, not really up-to-date:

https://github.com/d-lang
https://github.com/braddr/dmd

Don't know who d-lang is, but they probably should have added some code. And it would be better if Walter was managing it...

There are many benefits to the coder in using a distributed VCS. And you can use git with SVN, but you may run into other issues, as pointed out by Don.

Now, if you add GitHub or another social repository site, what you have is the ability for anyone to publicly display their patches, merge in others' patches, or demonstrate new features (tail-const objects) with a visible connection to the main branch. On top of that, patches are submitted as pull requests: http://help.github.com/pull-requests/ This provides review of the changes and public visibility into the current requests against the main branch.

The benefit to Walter or even the patch writer would not be great, but it provides a lot of visibility to the observer. And using this model still gives Walter control over every patch that comes into the main branch. But it will make it 20x easier for those that want to build their own to roll in all available patches (aren't numbers with no data to back them great?).

The simplicity of branching in a distributed VCS definitely makes using one much nicer than SVN.
Jan 06 2011