
digitalmars.D - Poll: Primary D version

reply "Nick Sabalausky" <a a.a> writes:
I'm interested in trying to gauge the current state of D version usage, so 
I've set up a poll:

http://micropoll.com/t/KEFfsZBH5F

I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I 
personally hate MicroPoll but everything else I've seen is even worse and I 
don't have time to make a custom one.
May 19 2010
next sibling parent reply BCS <none anon.com> writes:
Hello Nick,

 I'm interested in trying to gauge the current state of D version
 usage, so I've set up a poll:
 
 http://micropoll.com/t/KEFfsZBH5F
 
 I apologize for using MicroPoll (and all its
 mandatory-JavaScript-ness). I personally hate MicroPoll but everything
 else I've seen is even worse and I don't have time to make a custom
 one.
 
you're missing "Mostly Dx but some of the other." -- ... <IXOYE><
May 20 2010
parent "Nick Sabalausky" <a a.a> writes:
"BCS" <none anon.com> wrote in message 
news:a6268ff13eaa8ccc5f8db8fcfd6 news.digitalmars.com...
 Hello Nick,

 I'm interested in trying to gauge the current state of D version
 usage, so I've set up a poll:

 http://micropoll.com/t/KEFfsZBH5F

 I apologize for using MicroPoll (and all its
 mandatory-JavaScript-ness). I personally hate MicroPoll but everything
 else I've seen is even worse and I don't have time to make a custom
 one.
you're missing "Mostly Dx but some of the other."
The idea was just to gauge which one a person leans more heavily towards.
May 20 2010
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Nick Sabalausky, on May 20 at 02:52, wrote to me:
 I'm interested in trying to gauge the current state of D version usage, so 
 I've set up a poll:
 
 http://micropoll.com/t/KEFfsZBH5F
 
 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I 
 personally hate MicroPoll but everything else I've seen is even worse and I 
 don't have time to make a custom one.
Oh, yeah! Only USA uses D... Nice =P

--
Leandro Lucarella (AKA luca)
http://llucax.com.ar/
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
May 20 2010
prev sibling next sibling parent reply Bane <branimir.milosavljevic gmail.com> writes:
OBG! I'm a minority! (still stuck on 1.030)

Nick Sabalausky Wrote:

 I'm interested in trying to gauge the current state of D version usage, so 
 I've set up a poll:
 
 http://micropoll.com/t/KEFfsZBH5F
 
 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I 
 personally hate MicroPoll but everything else I've seen is even worse and I 
 don't have time to make a custom one.
 
 
May 20 2010
parent reply Bane <branimir.milosavljevic gmail.com> writes:
OMG! I can't even spell OMG right!

 OBG! I'm a minority! (still stuck on 1.030)
 
 Nick Sabalausky Wrote:
 
 I'm interested in trying to gauge the current state of D version usage, so 
 I've set up a poll:
 
 http://micropoll.com/t/KEFfsZBH5F
 
 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I 
 personally hate MicroPoll but everything else I've seen is even worse and I 
 don't have time to make a custom one.
 
 
May 20 2010
parent reply div0 <div0 users.sourceforge.net> writes:
Bane wrote:
 OMG! I can't even spell OMG right!
 
 OBG! I'm a minority! (still stuck on 1.030)
rofl. you tool. (I mean that in a good way)

I'm surprised so many people who don't use D bother to read this news group and voted on the poll. Surely they must have better things to do.

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
May 20 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"div0" <div0 users.sourceforge.net> wrote in message 
news:ht3tfa$2smm$1 digitalmars.com...
 I'm surprised so many people who don't use D bother to read this news
 group and voted on the poll. Surely they must have better things to do.
I have a few guesses for that phenomenon:

- There are a lot of people who are keeping a close eye on D, but aren't ready/able to commit to any use just yet. In fact, I've already been under the impression that that's the case.

- Troll and/or trolls. I turned on "enable IP logging", but I didn't turn on "prevent multiple votes from same IP", since shared IPs are fairly common. Maybe GirlProgrammer's been messing with it. Or maybe the Reddit D-Downvoters found it. Maybe I made a mistake by not preventing multiple votes from the same IP. Maybe if I turn that on it'll apply retroactively... or maybe not? I dunno, I really should get around to making my own poll software.

- Maybe people misunderstood what I meant by "Which version of D do you primarily use?". What I meant was "When you use D, which version do you use more heavily?". Maybe there are people thinking "Well, I use D, but my primary language is C/Python/Java/whatever, so I'll say 'None'".

Question to all, including lurkers: Did anyone say "None" because that's what they thought it meant?
May 20 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ht3uj4$30fv$1 digitalmars.com...
 "div0" <div0 users.sourceforge.net> wrote in message 
 news:ht3tfa$2smm$1 digitalmars.com...
 I'm surprised so many people who don't use D bother to read this news
 group and voted on the poll. Surely they must have better things to do.
I have a few guesses for that phenomenon:
...
 - Troll and/or trolls. I turned on "enable IP logging", but I didn't turn 
 on "prevent multiple votes from same IP", since shared IPs are fairly 
 common. Maybe GirlProgrammer's been messing with it. Or maybe the Reddit 
 D-Downvoters found it. Maybe I made a mistake by not preventing multiple 
 votes from same IP. Maybe if I turn that on it'll apply 
 retroactively... or maybe not? I dunno, I really should get around to 
 making my own poll software.
...

I've looked into this a little. I was able to download a chart of the IPs, 
and the number of votes per IP. Unfortunately, there doesn't seem to be any 
way to tell anything about the actual votes from a particular IP, which I 
suppose is good for privacy, but it prevents me from looking at an IP with 
multiple votes and determining if all of the votes were suspiciously 
strongly favoring the one option.

There are 105 IPs. The vast majority of the IPs only had one vote. There 
were five IPs that had two votes each, and one IP that had three votes. I'm 
willing to assume those are just multiple people from the same ISP, and even 
if not they're not particularly significant compared to the rest. But then 
there was one IP with 72 votes. So, yea, that does seem suspicious. In case 
anyone thinks they might have more insight, the IP in question is 
115.131.192.250, and appears to be from Australia.
May 20 2010
next sibling parent Adam Ruppe <destructionator gmail.com> writes:
As a thought, when/if you decide to write your own polling system, I
think it should log the website referrer as well as the voter's IP and
choice.

It'd be interesting to see stats about skewing from a certain site,
like if everyone who followed a link on "d-sucks-ass.org" voted "none
and never will", you might be able to discount them as trolls.
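A minimal sketch, in D, of the vote record such a homemade poll could log, plus a filter over the referrer (all names here are hypothetical):

    // Hypothetical log entry: the choice, the voter's IP, and the
    // HTTP referrer, so votes arriving via one site can be examined.
    struct Vote
    {
        string choice;
        string ip;
        string referrer; // value of the Referer header, possibly empty
    }

    // Collect the votes that followed a link from a given site.
    Vote[] fromSite(Vote[] votes, string site)
    {
        Vote[] result;
        foreach (v; votes)
            if (v.referrer == site)
                result ~= v;
        return result;
    }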
May 20 2010
prev sibling next sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Nick Sabalausky <a a.a> wrote:

 I've looked into this a little. I was able to download a chart of the  
 IPs,
 and the number of votes per IP. Unfortunately, there doesn't seem to be  
 any
 way to tell anything about the actual votes from a particular IP, which I
 suppose is good for privacy, but it prevents me from looking at an IP  
 with
 multiple votes and determining if all of the votes were suspiciously
 strongly favoring the one option.

 There are 105 IPs. The vast majority of the IPs only had one vote. There
 were five IPs that had two votes each, and one IP that had three votes.  
 I'm
 willing to assume those are just multiple people from the same ISP, and  
 even
 if not they're not particularly significant compared to the rest. But  
 then
 there was one IP with 72 votes. So, yea, that does seem suspicious. In  
 case
 anyone thinks they might have more insight, the IP in question is
 115.131.192.250, and appears to be from Australia.
Looking at the statistics, if we assume all those 72 votes are for 'None - Not likely to use D anytime soon', we get these nice numbers:

  D2                                         43 %
  Both D1 and D2 fairly equally               3 %
  D1 - Not likely to use D2 anytime soon     28 %
  D1 - Likely to switch to D2 soon            6 %
  D1 - Likely to do both D1 and D2 soon       5 %
  None - Not likely to use D anytime soon     9 %
  None - Likely to use D2 soon                6 %
  None - Likely to use D1 soon                0 %
  None - Likely to use both D1 and D2 soon    0 %

Honestly, I find this to seem more likely as a serious result. That said, I do not expect 43% of programmers to be using D at the moment. :P

--
Simen
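The adjustment Simen describes boils down to subtracting the suspect votes from one option and recomputing the shares; a sketch, with a hypothetical counts map:

    import std.stdio;

    // Drop the suspect votes from one option, then reprint percentages.
    void recompute(uint[string] counts, string suspect, uint suspectVotes)
    {
        counts[suspect] -= suspectVotes;
        uint total = 0;
        foreach (c; counts)
            total += c;
        foreach (option, c; counts)
            writefln("%-42s %.0f %%", option, c * 100.0 / total);
    }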
May 20 2010
prev sibling parent reply Bane <branimir.milosavljevic gmail.com> writes:
Nick Sabalausky Wrote:

 "Nick Sabalausky" <a a.a> wrote in message 
 news:ht3uj4$30fv$1 digitalmars.com...
 "div0" <div0 users.sourceforge.net> wrote in message 
 news:ht3tfa$2smm$1 digitalmars.com...
 I'm surprised so many people who don't use D bother to read this news
 group and voted on the poll. Surely they must have better things to do.
I have a few guesses for that phenomenon:
...
 - Troll and/or trolls. I turned on "enable IP logging", but I didn't turn 
 on "prevent multiple votes from same IP", since shared IPs are fairly 
 common. Maybe GirlProgrammer's been messing with it. Or maybe the Reddit 
 D-Downvoters found it. Maybe I made a mistake by not preventing multiple 
 votes from same IP. Maybe if I turn that on it'll apply 
 retroactively... or maybe not? I dunno, I really should get around to 
 making my own poll software.
...

 
 I've looked into this a little. I was able to download a chart of the IPs, 
 and the number of votes per IP. Unfortunately, there doesn't seem to be any 
 way to tell anything about the actual votes from a particular IP, which I 
 suppose is good for privacy, but it prevents me from looking at an IP with 
 multiple votes and determining if all of the votes were suspiciously 
 strongly favoring the one option.
 
 There are 105 IPs. The vast majority of the IPs only had one vote. There 
 were five IPs that had two votes each, and one IP that had three votes. I'm 
 willing to assume those are just multiple people from the same ISP, and even 
 if not they're not particularly significant compared to the rest. But then 
 there was one IP with 72 votes. So, yea, that does seem suspicious. In case 
 anyone thinks they might have more insight, the IP in question is 
 115.131.192.250, and appears to be from Australia. 
 
 
That's great. It proves my point - even D trolls are loyal to the project. 72 votes? That is some dedication.
May 21 2010
parent reply retard <re tard.com.invalid> writes:
Fri, 21 May 2010 11:00:32 -0400, Bane wrote:

 Nick Sabalausky Wrote:
 
 "Nick Sabalausky" <a a.a> wrote in message
 news:ht3uj4$30fv$1 digitalmars.com...
 "div0" <div0 users.sourceforge.net> wrote in message
 news:ht3tfa$2smm$1 digitalmars.com...
 I'm surprised so many people who don't use D bother to read this
 news group and voted on the poll. Surely they must have better
 things to do.
I have a few guesses for that phenomenon:
...
 - Troll and/or trolls. I turned on "enable IP logging", but I didn't
 turn on "prevent multiple votes from same IP", since shared IPs are
 fairly common. Maybe GirlProgrammer's been messing with it. Or maybe
 the Reddit D-Downvoters found it. Maybe I made a mistake by not
 preventing multiple votes from same IP. Maybe if I turn that on
 it'll apply retroactively... or maybe not? I dunno, I really should
 get around to making my own poll software.
...

 I've looked into this a little. I was able to download a chart of the
 IPs, and the number of votes per IP. Unfortunately, there doesn't seem
 to be any way to tell anything about the actual votes from a particular
 IP, which I suppose is good for privacy, but it prevents me from
 looking at an IP with multiple votes and determining if all of the
 votes were suspiciously strongly favoring the one option.
 
 There are 105 IPs. The vast majority of the IPs only had one vote.
 There were five IPs that had two votes each, and one IP that had three
 votes. I'm willing to assume those are just multiple people from the
 same ISP, and even if not they're not particularly significant compared
 to the rest. But then there was one IP with 72 votes. So, yea, that
 does seem suspicious. In case anyone thinks they might have more
 insight, the IP in question is 115.131.192.250, and appears to be from
 Australia.
 
 
 
That's great. It proves my point - even D trolls are loyal to the project. 72 votes? That is some dedication.
What is more interesting is that the majority of D users already use D2, which has a huge list of bugs. It just suggests that most D users don't use D in serious / mission-critical / money-making projects, but rather as a hobby.
May 21 2010
next sibling parent reply Bane <branimir.milosavljevic gmail.com> writes:
 What is more interesting is that the majority of D users already use D2, 
 which has a huge list of bugs. It just suggests that most D users don't use 
 D in serious / mission-critical / money-making projects, but rather as 
 a hobby. 
I'm in serious business with it. I think D1 is up to it pretty well.
May 21 2010
parent Bane <branimir.milosavljevic gmail.com> writes:
Bane Wrote:
 What is more interesting is that the majority of D users already use D2, 
 which has a huge list of bugs. It just suggests that most D users don't use 
 D in serious / mission-critical / money-making projects, but rather as 
 a hobby. 
I'm in serious business with it. I think D1 is up to it pretty well.
D1 I mean. Shit. I lack clarity.
May 21 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 What is more interesting is that the majority of D users already use D2, 
 which has a huge list of bugs. It just suggests that most D users don't use 
 D in serious / mission-critical / money-making projects, but rather as 
 a hobby. 
What matters is not the number of bugs, but whether they block reasonable use of the compiler. Just one bug can make it unusable, whereas a thousand insignificant ones may not.
May 21 2010
parent Alex Makhotin <alex bitprox.com> writes:
Walter Bright wrote:
 What matters is not the number of bugs, but whether they block 
 reasonable use of the compiler. Just one bug can make it unusable, 
 whereas a thousand insignificant ones may not.
In Steven's dcollections
 // workaround for compiler deficiencies.  Note you MUST repeat this in
         // derived classes to achieve covariance (see bug 4182).
         alias concat opCat;
         alias concat_r opCat_r;
         alias add opCatAssign;
http://d.puremagic.com/issues/show_bug.cgi?id=4182

Could you please explain: is this a bug or a feature?

--
Alex Makhotin, the founder of BITPROX,
http://bitprox.com
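A rough illustration of the pattern the quoted comment seems to describe (class names hypothetical): each class re-declares the alias so that the operator binds to its own covariant overload:

    class List
    {
        List concat(List other) { return this; }
        alias concat opCat; // a ~ b calls concat
    }

    class MyList : List
    {
        // covariant return type: MyList instead of List
        override MyList concat(List other) { return this; }
        alias concat opCat; // repeated, per the comment above
    }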
May 21 2010
prev sibling next sibling parent reply Eric Poggel <dnewsgroup yage3d.net> writes:
On 5/21/2010 1:57 PM, retard wrote:
 What is more interesting is that the majority of D users already use D2,
 which has a huge list of bugs. It just suggests that most D users don't use
 D in serious / mission-critical / money-making projects, but rather as
 a hobby.
Or possibly, D newsgroup followers are mostly early-adopters, which makes the poll skewed toward D2.
May 21 2010
parent "Nick Sabalausky" <a a.a> writes:
"Eric Poggel" <dnewsgroup yage3d.net> wrote in message 
news:ht7i0u$hv1$1 digitalmars.com...
 On 5/21/2010 1:57 PM, retard wrote:
 What is more interesting is that the majority of D users already use D2,
 which has a huge list of bugs. It just suggests that most D users don't use
 D in serious / mission-critical / money-making projects, but rather as
 a hobby.
Or possibly, D newsgroup followers are mostly early-adopters, which makes the poll skewed toward D2.
I also posted on the Tango message board, which is definitely skewed towards D1. But then again, that only has 25 views and I know at least five of them were me reloading the page.
May 21 2010
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
retard Wrote:
 
 What is more interesting is that the majority of D users already use D2, 
 which has a huge list of bugs. It just suggests that most D users don't use 
 D in serious / mission-critical / money-making projects, but rather as 
 a hobby. 
I've yet to use a compiler that had zero bugs I needed to consider when writing code. What's more important to me is that the bugs that do exist shouldn't unduly inconvenience me, nor should they hurt my ability to easily move to a newer compiler later. VC6 was terrible in this respect--it crashed constantly, and so badly supported the language spec that I had to purposefully write screwed up code just so it would compile. Moving to VC7 took a lot of rewriting. Assuming there are no sufficiently annoying bugs in DMD2 I'd rather work with the latest language spec to make my app more maintainable over time. In a work environment, I think it's more important that a compiler be supported than that it be bug-free.
May 21 2010
parent reply div0 <div0 users.sourceforge.net> writes:
Sean Kelly wrote:
 retard Wrote:
 What is more interesting is that the majority of D users already use D2, 
 which has a huge list of bugs. It just suggests that most D users don't use 
 D in serious / mission-critical / money-making projects, but rather as 
 a hobby. 
I've yet to use a compiler that had zero bugs I needed to consider when writing code. What's more important to me is that the bugs that do exist shouldn't unduly inconvenience me, nor should they hurt my ability to easily move to a newer compiler later. VC6 was terrible in this respect--it crashed constantly, and so badly supported the language spec that I had to purposefully write screwed up code just so it would compile. Moving to VC7 took a lot of rewriting. Assuming there are no sufficiently annoying bugs in DMD2 I'd rather work with the latest language spec to make my app more maintainable over time. In a work environment, I think it's more important that a compiler be supported than that it be bug-free.
Well I'm still using 2.028. Every version I've tried since has had a compiler bug that's been a show stopper. However I'm in no major rush; there's enough momentum in progress for me to be confident that it will work eventually.

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
div0 wrote:
 Well I'm still using 2.028. Every version I've tried since has had a
 compiler bug that's been a show stopper. However I'm in no major rush,
 there's enough momentum in progress for me to be confident that it will
 work eventually.
Which one is your current showstopper?
May 22 2010
parent div0 <div0 users.sourceforge.net> writes:
Walter Bright wrote:
 div0 wrote:
 Well I'm still using 2.028. Every version I've tried since has had a
 compiler bug that's been a show stopper. However I'm in no major rush,
 there's enough momentum in progress for me to be confident that it will
 work eventually.
Which one is your current showstopper?
http://d.puremagic.com/issues/show_bug.cgi?id=3712

Except for me, it's something to do with the change to struct initializers:

    Error: struct clr has constructors, cannot use { initializers }, use clr( initializers ) instead

ty.

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
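The error div0 quotes is easy to reproduce; a minimal example, with guessed field types for clr:

    struct clr
    {
        float r, g, b;
        this(float r, float g, float b)
        {
            this.r = r; this.g = g; this.b = b;
        }
    }

    void main()
    {
        // clr c = { 1, 0, 0 }; // rejected once a constructor exists
        auto c = clr(1, 0, 0);  // the accepted form
    }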
May 23 2010
prev sibling parent "Viktor H." <viktor.h laposte.net> writes:
On Thu, 2010-05-20 at 14:19 -0400, Nick Sabalausky wrote:
 "div0" <div0 users.sourceforge.net> wrote in message=20
 news:ht3tfa$2smm$1 digitalmars.com...
 I'm surprised so many people who don't use D bother to read this news
 group and voted on the poll. Surely they must have better things to do.
 I have a few guesses for that phenomenon:

 - There are a lot of people who are keeping a close eye on D, but aren't
 ready/able to commit to any use just yet. In fact, I've already been under
 the impression that that's the case.
That's exactly my case. I'm waiting for D2 to stabilise, for D2 compilers to hit Debian unstable, and for me to have both some time and a good itch to scratch. For the time being, I find this newsgroup very informative, versatile and yet not too busy to follow as an infrequent visitor.

Thanks to everyone who makes this possible,
Viktor.
May 21 2010
prev sibling parent user domain.invalid writes:
div0 wrote:
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
 
 Bane wrote:
 OMG! I can't even spell OMG right!

 OBG! I'm a minority! (still stuck on 1.030)
rofl. you tool. (I mean that in a good way) I'm surprised so many people who don't use D bother to read this news group and voted on the poll. Surely they must have better things to do.
I'm keeping an eye on D.
Jun 12 2010
prev sibling next sibling parent Kagamin <spam here.lot> writes:
Nick Sabalausky Wrote:

 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I 
 personally hate MicroPoll but everything else I've seen is even worse and I 
 don't have time to make a custom one.
I'd appreciate JS, but I'm pissed off by Flash.
May 21 2010
prev sibling next sibling parent reply Matthias Pleh <matthias.pleh gmx.at> writes:
On 20.05.2010 08:52, Nick Sabalausky wrote:
 I'm interested in trying to gauge the current state of D version usage, so
 I've set up a poll:

 http://micropoll.com/t/KEFfsZBH5F

 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I
 personally hate MicroPoll but everything else I've seen is even worse and I
 don't have time to make a custom one.
Oh god, we have to inform micropoll, there is more than the USA ...
May 21 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Matthias Pleh" <matthias.pleh gmx.at> wrote in message 
news:ht6p7t$27sk$1 digitalmars.com...
 Oh god, we have to inform micropoll, there is more than the USA ...
"Kagamin" <spam here.lot> wrote in message news:ht6jgv$1tbs$1 digitalmars.com...
 I'd appreciate JS, but I'm pissed off by Flash.
Yea, micropoll unfortunately has a lot of deficiencies. Like, when I was typing in the possible choices, the text box was lagging by roughly a second. And I've had some discussion with their support staff, and they seem to be the type of support people who are very easily confused. (Either that, or I'm just a hell of a lot worse at explaining things than I think I am ;) )
May 21 2010
parent reply Matthias Pleh <matthias.pleh gmx.at> writes:
On 21.05.2010 22:27, Nick Sabalausky wrote:
 "Matthias Pleh"<matthias.pleh gmx.at>  wrote in message
 news:ht6p7t$27sk$1 digitalmars.com...
 Oh god, we have to inform micropoll, there is more than the USA ...
"Kagamin"<spam here.lot> wrote in message news:ht6jgv$1tbs$1 digitalmars.com...
 I'd appreciate JS, but I'm pissed off by Flash.
Yea, micropoll unfortunately has a lot of deficiencies. Like, when I was typing in the possible choices, the text box was lagging by roughly a second. And I've had some discussion with their support staff, and they seem to be the type of support people who are very easily confused. (Either that, or I'm just a hell of a lot worse at explaining things than I think I am ;) )
Funny thing is, it shows me in the results that there were 2 votes in my area, within a radius of about 50 km, so there must be another D enthusiast nearby, maybe my grandmother, who knows ....
May 21 2010
next sibling parent Not your grandma <some mail.com> writes:
Matthias Pleh Wrote:

 On 21.05.2010 22:27, Nick Sabalausky wrote:
 "Matthias Pleh"<matthias.pleh gmx.at>  wrote in message
 news:ht6p7t$27sk$1 digitalmars.com...
 Oh god, we have to inform micropoll, there is more than the USA ...
"Kagamin"<spam here.lot> wrote in message news:ht6jgv$1tbs$1 digitalmars.com...
 I'd appreciate JS, but I'm pissed off by Flash.
Yea, micropoll unfortunately has a lot of deficiencies. Like, when I was typing in the possible choices, the text box was lagging by roughly a second. And I've had some discussion with their support staff, and they seem to be the type of support people who are very easily confused. (Either that, or I'm just a hell of a lot worse at explaining things than I think I am ;) )
Funny thing is, it shows me in the results that there were 2 votes in my area, within a radius of about 50 km, so there must be another D enthusiast nearby, maybe my grandmother, who knows ....
Does it say "See how users are polling in Bohinj : Total Votes : 2" for you too?
May 21 2010
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"Matthias Pleh" <matthias.pleh gmx.at> wrote in message 
news:ht6t33$2fv3$1 digitalmars.com...
 On 21.05.2010 22:27, Nick Sabalausky wrote:
 "Matthias Pleh"<matthias.pleh gmx.at>  wrote in message
 news:ht6p7t$27sk$1 digitalmars.com...
 Oh god, we have to inform micropoll, there is more than the USA ...
"Kagamin"<spam here.lot> wrote in message news:ht6jgv$1tbs$1 digitalmars.com...
 I'd appreciate JS, but I'm pissed off by Flash.
Yea, micropoll unfortunately has a lot of deficiencies. Like, when I was typing in the possible choices, the text box was lagging by roughly a second. And I've had some discussion with their support staff, and they seem to be the type of support people who are very easily confused. (Either that, or I'm just a hell of a lot worse at explaining things than I think I am ;) )
Funny thing is, it shows me in the results that there were 2 votes in my area, within a radius of about 50 km, so there must be another D enthusiast nearby, maybe my grandmother, who knows ....
I was surprised to see that there's another D user in Ohio besides me. From what I've seen, all the good programmers usually leave Ohio, and we're just left with the cruft.
May 21 2010
prev sibling parent Matthias Pleh <matthias.pleh gmx.at> writes:
On 21.05.2010 23:14, Matthias Pleh wrote:
 On 21.05.2010 22:27, Nick Sabalausky wrote:
 "Matthias Pleh"<matthias.pleh gmx.at> wrote in message
 news:ht6p7t$27sk$1 digitalmars.com...
 Oh god, we have to inform micropoll, there is more than the USA ...
"Kagamin"<spam here.lot> wrote in message news:ht6jgv$1tbs$1 digitalmars.com...
 I'd appreciate JS, but I'm pissed off by Flash.
Yea, micropoll unfortunately has a lot of deficiencies. Like, when I was typing in the possible choices, the text box was lagging by roughly a second. And I've had some discussion with their support staff, and they seem to be the type of support people who are very easily confused. (Either that, or I'm just a hell of a lot worse at explaining things than I think I am ;) )
Funny thing is, it shows me in the results that there were 2 votes in my area, within a radius of about 50 km, so there must be another D enthusiast nearby, maybe my grandmother, who knows ....
It says "See how users are polling in Vorarlberg : Total Votes : 2". So it seems that 2 D-enthusiasts live in every one of our areas ...
May 22 2010
prev sibling next sibling parent Matthias Pleh <matthias.pleh gmx.at> writes:
On 20.05.2010 08:52, Nick Sabalausky wrote:
 I'm interested in trying to gauge the current state of D version usage, so
 I've set up a poll:

 http://micropoll.com/t/KEFfsZBH5F

 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I
 personally hate MicroPoll but everything else I've seen is even worse and I
 don't have time to make a custom one.
For all who want to see the current result, without voting twice: http://www.micropoll.com/akira/mpresult/928402-256449
May 22 2010
prev sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 20/05/10 07:52, Nick Sabalausky wrote:
 I'm interested in trying to gauge the current state of D version usage, so
 I've set up a poll:

 http://micropoll.com/t/KEFfsZBH5F

 I apologize for using MicroPoll (and all its mandatory-JavaScript-ness). I
 personally hate MicroPoll but everything else I've seen is even worse and I
 don't have time to make a custom one.
I put my vote with D1 & 2, although truthfully I've moved back to D1 for the most part. I found D2 almost impossible to use, for a few reasons:

- Safe D was impossible to use due to phobos not supporting this (although this seems to be close to a fix)
- Interfacing to C libraries is now overly complex thanks to const correctness. After updating all the function signatures I found phobos was completely lacking the functions to convert between C and D strings of varying constness or with different encodings (char/wchar/dchar).. I ended up writing my own functions
- to!() didn't work in most cases where I tried to use it, I ended up writing my own conversion functions
- Various bugs I encountered which have already been reported (I forget which ones).
- Lack of an x86_64 compiler. I spent far too long messing around setting up a multilib system, and ended up making a chroot for dmd. This is far too much effort/messing around, and should I ever feel there's a use for my apps outside of localhost people will wonder why they don't support x86_64 natively (I believe this will change after D2 from various comments from Walter).
- Lack of containers in phobos, although dcollections may have solved this now, I haven't had a chance to look

There were some other reasons I've forgotten, but until at least some of these are fixed I'll stick to D1/Tango... I hope I can move back to D2 at some point in the future once it's stabilized a bit more.
May 22 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/22/2010 08:29 AM, Robert Clipsham wrote:
 On 20/05/10 07:52, Nick Sabalausky wrote:
 I'm interested in trying to gauge the current state of D version
 usage, so
 I've set up a poll:

 http://micropoll.com/t/KEFfsZBH5F

 I apologize for using MicroPoll (and all its
 mandatory-JavaScript-ness). I
 personally hate MicroPoll but everything else I've seen is even worse
 and I
 don't have time to make a custom one.
 I put my vote with D1 & 2, although truthfully I've moved back to D1
 for the most part. I found D2 almost impossible to use, for a few
 reasons:

 - Safe D was impossible to use due to phobos not supporting this
 (although this seems to be close to a fix)
 - Interfacing to C libraries is now overly complex thanks to const
 correctness. After updating all the function signatures I found phobos
 was completely lacking the functions to convert between C and D strings
 of varying constness or with different encodings (char/wchar/dchar)..
 I ended up writing my own functions
Could you please give more detail on that? There should be essentially no problem with using C-style strings with D regardless of constness.
 - to!() didn't work in most cases where I tried to use it, I ended up
 writing my own conversion functions
to is deliberately defined to be restrictive; parse is more forgiving. Anyway, I'd be glad to improve to if you gave me a few hints.
 - Various bugs I encountered which have already been reported (I forget
 which ones).
 - Lack of an x86_64 compiler. I spent far too long messing around
 setting up a multilib system, and ended up making a chroot for dmd. This
 is far too much effort/messing around, and should I ever feel there's a
 use for my apps outside of localhost people will wonder why they don't
 support x86_64 natively (I believe this will change after D2 from
 various comments from Walter).
 - Lack of containers in phobos, although dcollections may have solved
 this now, I haven't had chance to look

 There were some other reasons I've forgotten, but until at least some of
 these are fixed I'll stick to D1/Tango... I hope I can move back to D2
 at some point in the future once it's stabilized a bit more.
Yah, for most of these, things are definitely looking up. Thanks, and I'd appreciate any more detail you might have.

Andrei
May 22 2010
next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 22/05/10 17:42, Andrei Alexandrescu wrote:
 - Interfacing to C libraries is now overly complex thanks to const
 correctness. After updating all the function signatures I found phobos
 was completely lacking the functions to convert between C and D strings
 of varying constness or with different encodings (char/wchar/dchar).. I
 ended up writing my own functions
Could you please give more detail on that? There should be essentially no problem with using C-style strings with D regardless of constness.
extern(C) void someFunc(char*);

There is no function in phobos which will allow me to call this function using a D string; toStringz() gives:

    test.d(4): Error: function test.someFunc (char*) is not callable using argument types (const(char)*)

Unless I cast away const, which isn't pretty if you've got a lot of these functions, unless you write a wrapper for each one (my current hack). to!() doesn't support it at all, and I can't find another method in phobos for it.

extern(C) void someFunc(wchar*);

This is impossible with phobos, there's no function to convert a D string to wchar*, not even one where I could cast away constness. This includes dchar* too.
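A sketch of the kind of wrapper Robert mentions writing, for the char* case (the helper name is made up): it hands the C function a mutable, zero-terminated copy instead of casting away const. The wchar*/dchar* cases would follow the same shape after transcoding.

    // Mutable, zero-terminated copy of any D string-like input.
    char* toMutableStringz(const(char)[] s)
    {
        auto buf = new char[s.length + 1];
        buf[0 .. s.length] = s[];
        buf[s.length] = '\0';
        return buf.ptr;
    }

    extern(C) void someFunc(char*);

    void callIt(string s)
    {
        someFunc(toMutableStringz(s)); // no cast needed
    }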
 - to!() didn't work in most cases where I tried to use it, I ended up
 writing my own conversion functions
to is deliberately defined to be restrictive; parse is more forgiving. Anyway, I'd be glad to improve to if you gave me a few hints.
Any of the above conversions would be nice, although I appreciate that there's no way to tell if it's a C style string or a pointer to a single char. There were several other situations which I worked around but didn't note down, so I can't list them here.
May 22 2010
next sibling parent Pelle <pelle.mansson gmail.com> writes:
On 05/22/2010 08:26 PM, Robert Clipsham wrote:
 extern(C)void someFunc(char*);

 There is no function in phobos which will allow me to call this function
 using a D string
You could use (array.dup ~ '\0').ptr, right?
 extern(C)void someFunc(wchar*);

 This is impossible with phobos, there's no function to convert a D
 string to wchar*, not even one where I could cast away constness. This
 includes dchar* too.
to!(wchar[])(chararray) works.
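Putting Pelle's two suggestions together, as a sketch:

    import std.conv : to;

    void example()
    {
        string s = "hello";

        // mutable, zero-terminated copy for a char* argument:
        char* p = (s.dup ~ '\0').ptr;

        // transcode first for a wchar* argument:
        wchar[] w = to!(wchar[])(s);
        wchar* q = (w ~ '\0').ptr;
    }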
May 22 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Clipsham wrote:
 On 22/05/10 17:42, Andrei Alexandrescu wrote:
 - Interfacing to C libraries is now overly complex thanks to const
 correctness. After updating all the function signatures I found phobos
 was completely lacking the functions to convert between C and D strings
 of varying constness or with different encodings (char/wchar/dchar).. I
 ended up writing my own functions
Could you please give more detail on that? There should be essentially no problem with using C-style strings with D regardless of constness.
extern(C)void someFunc(char*); There is no function in phobos which will allow me to call this function using a D string, toStringz() gives: test.d(4): Error: function test.someFunc (char*) is not callable using argument types (const(char)*) Unless I cast away const, which isn't pretty if you've got a lot of these functions, unless you write a wrapper for each one (my current hack). to!() doesn't support it at all, and I can't find another method in phobos for it.
What's necessary is to decide if someFunc changes the string data or not. If it does not, then it should be prototyped as:

    extern (C) void someFunc(const char *);

If it does, then the char* is the correct declaration, and an immutable string should not be passed to it.
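Illustrating the distinction with a hypothetical pair of C functions:

    import std.string : toStringz;

    extern (C) void printIt(const char *); // only reads its argument
    extern (C) void fillIt(char *);        // writes through its argument

    void use(string s, char[] buf)
    {
        printIt(toStringz(s));    // immutable data is fine for a const parameter
        fillIt((buf ~ '\0').ptr); // mutable, terminated copy for the mutating one
    }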
May 22 2010
parent reply Mike Parker <aldacron gmail.com> writes:
Walter Bright wrote:
 Robert Clipsham wrote:
 On 22/05/10 17:42, Andrei Alexandrescu wrote:
 - Interfacing to C libraries is now overly complex thanks to const
 correctness. After updating all the function signatures I found phobos
 was completely lacking the functions to convert between C and D strings
 of varying constness or with different encodings (char/wchar/dchar).. I
 ended up writing my own functions
Could you please give more detail on that? There should be essentially no problem with using C-style strings with D regardless of constness.
extern(C)void someFunc(char*); There is no function in phobos which will allow me to call this function using a D string, toStringz() gives: test.d(4): Error: function test.someFunc (char*) is not callable using argument types (const(char)*) Unless I cast away const, which isn't pretty if you've got a lot of these functions, unless you write a wrapper for each one (my current hack). to!() doesn't support it at all, and I can't find another method in phobos for it.
What's necessary is to decide if someFunc changes the string data or not. If it does not, then it should be prototyped as: extern (C) void someFunc(const char *); If it does, then the char* is the correct declaration, and an immutable string should not be passed to it.
That's not the problem. The problem is this:

    const(char)* toStringz(const(char)[] s);

There's no equivalent for:

    char *toStringz(char[] s);

Hence the need to cast away const or use a wrapper for non-const char* args.
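What the missing overload might look like, sketched (this is not Phobos code):

    // Mutable counterpart to toStringz: returns a mutable,
    // zero-terminated copy of the input.
    char* toStringz(char[] s)
    {
        auto copy = new char[s.length + 1];
        copy[0 .. s.length] = s[];
        copy[s.length] = '\0';
        return copy.ptr;
    }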
May 22 2010
next sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
On 5/22/2010 23:16, Mike Parker wrote:
 That's not the problem. The problem is this:
 
 const(char)* toStringz(const(char)[] s);
 
 There's no equivalent for:
 
 char *toStringz(char[] s);
 
 Hence the need to cast away const or use a wrapper for non-const char*
 args.
There is no way to define this function with the correct semantics in D. 'toStringz' must append a null character to the string, therefore it cannot return a pointer to the original string data in the general case. If you pass the resulting string to a function that mutates it, then the changes will not be reflected in the original string.

If you pass the resulting string to a function that does /not/ mutate it, then that function should be defined to take a 'const char *'.

--
Rainer Deyke - rainerd eldwood.com
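Rainer's point, demonstrated: the terminated string is a copy, so writes through the pointer never reach the original.

    void main()
    {
        char[] s = "hello".dup;
        char* p = (s ~ '\0').ptr; // allocates a terminated copy

        p[0] = 'H';               // mutate through the C-style pointer
        assert(s[0] == 'h');      // the original is untouched
    }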
May 22 2010
next sibling parent reply Mike Parker <aldacron gmail.com> writes:
Rainer Deyke wrote:
 On 5/22/2010 23:16, Mike Parker wrote:
 That's not the problem. The problem is this:

 const(char)* toStringz(const(char)[] s);

 There's no equivalent for:

 char *toStringz(char[] s);

 Hence the need to cast away const or use a wrapper for non-const char*
 args.
There is no way to define this function with the correct semantics in D. 'toStringz' must append a null character to the string, therefore it cannot return a pointer to the original string data in the general case. If you pass the resulting string to a function that mutates it, then the changes will not be reflected in the original string. If you pass the resulting string to a function that does /not/ mutate it, then that function should be defined to take a 'const char *'.
I understand that. But, ignoring the fact that toStringz in D1 seems to have functioned perfectly fine for several years without a const return, it doesn't change the fact that a C function call that accepts a char* expects it to be null-terminated, regardless of what happens to it on the other side. And I would argue that it's unreasonable to expect the declarations of C functions to be declared const-correct based on their usage. To my knowledge, all of the C bindings for D to date either don't use const at all (because they were created for D1) or use it according to the declarations in the C headers. Which means there are numerous C functions out there with non-const params that do not modify them. Then there's the issue of compatibility between D1/D2. I've bound several C libraries for D that need to support both D1/D2, Phobos/Tango. Supporting const was one of the first headaches I encountered when porting the original D1 bindings to D2. Finding that toStringz returned a const string was a big surprise.
May 23 2010
parent reply Pelle <pelle.mansson gmail.com> writes:
On 05/23/2010 10:14 AM, Mike Parker wrote:
 And I would argue that it's unreasonable to expect the declarations of C
 functions to be declared const-correct based on their usage. To my
 knowledge, all of the C bindings for D to date either don't use const at
 all (because they were created for D1) or use it according to the
 declarations in the C headers. Which means there are numerous C
 functions out there with non-const params that do not modify them.
I do them according to the C headers, and the constness is almost always correct. Otherwise, it's a bug in the C headers!
 Then there's the issue of compatibility between D1/D2. I've bound
 several C libraries for D that need to support both D1/D2, Phobos/Tango.
 Supporting const was one of the first headaches I encountered when
 porting the original D1 bindings to D2. Finding that toStringz returned
 a const string was a big surprise.
It should probably be inout(char)* toStringz(inout(char)[]), or something like that.
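One way such a signature could be implemented if it always copied; a sketch, not Phobos code (the cast is defensible only because the copy is uniquely referenced):

    inout(char)* toStringzCopy(inout(char)[] s)
    {
        auto copy = new char[s.length + 1];
        copy[0 .. s.length] = s[];
        copy[s.length] = '\0';
        // the copy has no other references, so returning it with the
        // input's qualifier is safe
        return cast(inout(char)*) copy.ptr;
    }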
May 23 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/23/2010 04:47 AM, Pelle wrote:
 On 05/23/2010 10:14 AM, Mike Parker wrote:
 And I would argue that it's unreasonable to expect the declarations of C
 functions to be declared const-correct based on their usage. To my
 knowledge, all of the C bindings for D to date either don't use const at
 all (because they were created for D1) or use it according to the
 declarations in the C headers. Which means there are numerous C
 functions out there with non-const params that do not modify them.
I do them according to the C headers, and the constness is almost always correct. Otherwise, it's a bug in the C headers!
Yes. My experience with C headers is that they're always careful about inserting const for read-only pointer parameters.
 Then there's the issue of compatibility between D1/D2. I've bound
 several C libraries for D that need to support both D1/D2, Phobos/Tango.
 Supporting const was one of the first headaches I encountered when
 porting the original D1 bindings to D2. Finding that toStringz returned
 a const string was a big surprise.
It should probably be inout(char)* toStringz(inout(char)[]), or something like that.
It could, but what C functions output to a zero-terminated char*? I can only think of unsafe ones, such as strcat() and gets(). Both are inherently unsafe.

Andrei
May 23 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/23/2010 12:30 AM, Rainer Deyke wrote:
 On 5/22/2010 23:16, Mike Parker wrote:
 That's not the problem. The problem is this:

 const(char)* toStringz(const(char)[] s);

 There's no equivalent for:

 char *toStringz(char[] s);

 Hence the need to cast away const or use a wrapper for non-const char*
 args.
There is no way to define this function with the correct semantics in D. 'toStringz' must append a null character to the string, therefore it cannot return a pointer to the original string data in the general case. If you pass the resulting string to a function that mutates it, then the changes will not be reflected in the original string. If you pass the resulting string to a function that does /not/ mutate it, then that function should be defined to take a 'const char *'.
There is a way, you could simply allocate a copy plus the \0 on the GC heap. In fact that's what happens right now.

Andrei
May 23 2010
parent Rainer Deyke <rainerd eldwood.com> writes:
On 5/23/2010 07:33, Andrei Alexandrescu wrote:
 On 05/23/2010 12:30 AM, Rainer Deyke wrote:
 There is no way to define this function with the correct semantics in D.
   'toStringz' must append a null character to the string, therefore it
 cannot return a pointer to the original string data in the general case.
   If you pass the resulting string to a function that mutates it, then
 the changes will not be reflected in the original string.

 If you pass the resulting string to a function that does /not/ mutate
 it, then that function should be defined to take a 'const char *'.
There is a way, you could simply allocate a copy plus the \0 on the GC heap. In fact that's what happens right now.
No, the problem is getting any changes to the copy back to the original. It can be done, but not with a simple conversion function.

--
Rainer Deyke - rainerd eldwood.com
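One shape such a non-simple conversion could take, sketched (names hypothetical; it ignores the case where the C side shortens the string by writing an earlier '\0'):

    void withCString(char[] s, void delegate(char*) dg)
    {
        auto buf = s ~ '\0';      // mutable, terminated copy
        dg(buf.ptr);              // the C side may rewrite buf in place
        s[] = buf[0 .. s.length]; // copy any changes back to the original
    }

    void main()
    {
        char[] name = "hello".dup;
        withCString(name, (char* p) { p[0] = 'H'; });
        assert(name == "Hello");
    }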
May 23 2010
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/23/2010 12:16 AM, Mike Parker wrote:
 Walter Bright wrote:
 Robert Clipsham wrote:
 On 22/05/10 17:42, Andrei Alexandrescu wrote:
 - Interfacing to C libraries is now overly complex thanks to const
 correctness. After updating all the function signatures I found phobos
 was completely lacking the functions to convert between C and D
 strings
 of varying constness or with different encodings
 (char/wchar/dchar).. I
 ended up writing my own functions
Could you please give more detail on that? There should be essentially no problem with using C-style strings with D regardless of constness.
extern(C)void someFunc(char*); There is no function in phobos which will allow me to call this function using a D string, toStringz() gives: test.d(4): Error: function test.someFunc (char*) is not callable using argument types (const(char)*) Unless I cast away const, which isn't pretty if you've got a lot of these functions, unless you write a wrapper for each one (my current hack). to!() doesn't support it at all, and I can't find another method in phobos for it.
What's necessary is to decide if someFunc changes the string data or not. If it does not, then it should be prototyped as: extern (C) void someFunc(const char *); If it does, then the char* is the correct declaration, and an immutable string should not be passed to it.
That's not the problem. The problem is this: const(char)* toStringz(const(char)[] s); There's no equivalent for: char *toStringz(char[] s); Hence the need to cast away const or use a wrapper for non-const char* args.
Yah, and that's intentional. APIs that use zero-terminated strings for output are very rare and most often inherently unsafe. Andrei
May 23 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 to is deliberately defined to be restrictive; parse is more forgiving. 
 Anyway, I'd be glad to improve to if you gave me a few hints.
If you are interested, I have written:

http://d.puremagic.com/issues/show_bug.cgi?id=3961
http://d.puremagic.com/issues/show_bug.cgi?id=4165
http://d.puremagic.com/issues/show_bug.cgi?id=4168

For the 4168 I can write some kind of patch too...

Bye,
bearophile
May 22 2010
parent reply Adam Ruppe <destructionator gmail.com> writes:
On 5/22/10, bearophile <bearophileHUGS lycos.com> wrote:
 http://d.puremagic.com/issues/show_bug.cgi?id=4165
I don't think that's a bug. It should only worry about converting, not filtering out bad stuff. That's an orthogonal problem that the other function does well, and easily too.
May 22 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Adam Ruppe:

 I don't think that's a bug. It should only worry about converting, not
 filtering out bad stuff. That's an orthogonal problem that the other
 function does well, and easily too.
It's not a bug, right. But saying that there are other functions orthogonal to it that solve this problem is not enough. There is a balance to strike between a language/stdlib so flexible that it's sloppy and can lead to bugs, and one made of as many fussy, orthogonal functions as possible, which can lead to the opposite kinds of bugs.

Very often when I have to convert strings to numbers I have leading or trailing spaces, often a leading newline. Converting such a string to a number is not sloppiness, because my experience shows me it can hardly cause bugs in general. The way to!() is currently designed forces me to remove the spaces often. This has caused a bug in one of my small script-like D programs. So if to!() will not strip spaces I'll have to define a stripping+conversion function in my dlibs2, and I'd like dlibs2 to be as small as possible.

Bye,
bearophile
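The stripping+conversion helper bearophile describes would be small; a sketch:

    import std.conv : to;
    import std.string : strip;

    // Convert after trimming leading/trailing whitespace.
    T toStripped(T)(string s)
    {
        return to!T(strip(s));
    }

    unittest
    {
        assert(toStripped!int("  42\n") == 42);
    }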
May 22 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Robert Clipsham" <robert octarineparrot.com> wrote in message 
news:ht8m7t$2qua$1 digitalmars.com...
  - and should I ever feel there's a use for my apps outside of localhost 
 people will wonder why they don't support x86_64 natively (I believe this 
 will change after D2 from various comments from Walter).
Most apps don't need native x86_64. Only things that really push the limits of CPU/memory utilization need it, which, aside from bloatware (which admittedly is at epidemic levels lately), is really only a minority of apps. For the rest, if it already runs fine on 32-bit, then the same exec on a 64-bit machine is only going to run better anyway, and if it already ran fine before, then there's no problem.
May 22 2010
parent reply retard <re tard.com.invalid> writes:
Sat, 22 May 2010 13:59:34 -0400, Nick Sabalausky wrote:

 "Robert Clipsham" <robert octarineparrot.com> wrote in message
 news:ht8m7t$2qua$1 digitalmars.com...
  - and should I ever feel there's a use for my apps outside of
  localhost
 people will wonder why they don't support x86_64 natively (I believe
 this will change after D2 from various comments from Walter).
Most apps don't need native x86_64. Only things that really push the limits of CPU/memory utilization need it, which, aside from bloatware (which admittedly is at epidemic levels lately), is really only a minority of apps. For the rest, if it already runs fine on 32-bit, then the same exec on a 64-bit machine is only going to run better anyway, and if it already ran fine before, then there's no problem.
You're suffering Stockholm syndrome there. Not having a functional 64-bit compiler isn't a positive feature. On a 4 GB system you lose 600+ MB of memory when using a 32-bit operating system without PAE support. In addition, x86 programs might be tuned for i586 or i386, forcing them to utilize only 50% of the registers available. In the worst case they don't even use SSE at all! Some assembly experts here probably know how much slower x87 is when compared to SSE2+.

Guess how much a 64-bit system with 4 GB of RAM costs these days - a quick search gave me the number $379 at http://www.bestbuy.com/site/HP+-+Factory-Refurbished+Desktop+with+AMD+Athlon&%23153;+II+X2+Dual-Core+Processor/9880623.p?id=1218188306780&skuId=9880623

I already have 24 GB in my Core i7 system. I can't imagine how a 32-bit system would benefit modern users.
May 22 2010
next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
On 5/22/10, retard <re tard.com.invalid> wrote:
 On a 4 GB system you lose 600+ MB of memory when using a 32-bit operating
 system without PAE support.
You can run 32 bit programs on a 64 bit operating system. The point isn't that 64 bits is useless in general, it is just that most *applications* work just fine as 32 bit binaries.
May 22 2010
parent reply retard <re tard.com.invalid> writes:
Sat, 22 May 2010 15:28:54 -0400, Adam Ruppe wrote:

 On 5/22/10, retard <re tard.com.invalid> wrote:
 On a 4 GB system you lose 600+ MB of memory when using a 32-bit
 operating system without PAE support.
You can run 32 bit programs on a 64 bit operating system. The point isn't that 64 bits is useless in general, it is just that most *applications* work just fine as 32 bit binaries.
I can't believe the 64-bit processes are twice as large. Typically the binary size is only a fraction of the amount of data processed by the application. Moreover, most of the memory allocations contain array-like structures for storing bitmaps, I/O buffers etc. For example, if you're storing 8-bit pixels in a game, you're not forced to use an int64 data type on a 64-bit architecture.
May 22 2010
next sibling parent Adam Ruppe <destructionator gmail.com> writes:
On 5/22/10, retard <re tard.com.invalid> wrote:
 I can't believe the 64-bit processes are twice as large.
They probably aren't. I don't think we're talking about the same thing here. I'm not saying that 64-bit is bad, and I don't think Nick is either. We're just saying not having 64-bit isn't a big deal for most applications, since 32-bit apps aren't bad either. Yes, it would be nice to have a 64-bit compiler for when you need it, but odds are, your application doesn't need it.
May 22 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/22/2010 02:38 PM, retard wrote:
 Sat, 22 May 2010 15:28:54 -0400, Adam Ruppe wrote:

 On 5/22/10, retard<re tard.com.invalid>  wrote:
 On a 4 GB system you lose 600+ MB of memory when using a 32-bit
 operating system without PAE support.
You can run 32 bit programs on a 64 bit operating system. The point isn't that 64 bits is useless in general, it is just that most *applications* work just fine as 32 bit binaries.
I can't believe the 64-bit processes are twice as large. Typically the binary size is only a fraction of the amount of data processed by the application. Moreover, most of the memory allocations contain array like structures for storing bitmaps, I/O buffers etc. For example, if you're storing 8-bit pixels in a game, you're not forced to use int64 data type on a 64-bit architecture.
It all depends on what the largest payload is. One of my apps' largest structures was a hash, which was almost twice as large in the 64-bit version.

Andrei
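That follows from the layout: a hash node is mostly pointer-sized fields, so it nearly doubles when pointers grow from 4 to 8 bytes. A hypothetical node:

    struct Node
    {
        Node* next;  // 4 bytes on 32-bit, 8 on 64-bit
        void* key;
        void* value;
        size_t hash; // also pointer-sized
    }

    // every field scales with the pointer size:
    static assert(Node.sizeof == 4 * (void*).sizeof);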
May 22 2010
next sibling parent retard <re tard.com.invalid> writes:
Sat, 22 May 2010 16:23:35 -0500, Andrei Alexandrescu wrote:

 On 05/22/2010 02:38 PM, retard wrote:
 Sat, 22 May 2010 15:28:54 -0400, Adam Ruppe wrote:

 On 5/22/10, retard<re tard.com.invalid>  wrote:
 On a 4 GB system you lose 600+ MB of memory when using a 32-bit
 operating system without PAE support.
You can run 32 bit programs on a 64 bit operating system. The point isn't that 64 bits is useless in general, it is just that most *applications* work just fine as 32 bit binaries.
I can't believe the 64-bit processes are twice as large. Typically the binary size is only a fraction of the amount of data processed by the application. Moreover, most of the memory allocations contain array like structures for storing bitmaps, I/O buffers etc. For example, if you're storing 8-bit pixels in a game, you're not forced to use int64 data type on a 64-bit architecture.
It all depends on what the largest payload is. One of my apps' largest structures was a hash, which was almost twice as large in the 64-bit version.
Ah, good to know. I haven't really seen many comparisons of 32-bit code vs 64-bit code. Haven't researched the topic much, either. But it makes sense when the payload is small vs the size of the pointer. I think some VMs deal with the issue by compressing pointers ( http://wikis.sun.com/display/HotSpotInternals/CompressedOops ). In the JVM, the maximum size of a compressed-pointer heap is only 32 GB, though, so we need to invent something new for desktop systems in 2020.
May 22 2010
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 It all depends on what the largest payload is. One of my apps' largest 
 structures was a hash, which was almost twice as large in the 64-bit 
 version.
Some of that extra space is used by the pointers, which are twice as large. The latest Java VMs are able to compress pointers in some situations; this can contain some info:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.8725

The main LLVM designer has studied that in C-like languages too; such ideas can be usable in D as well:

http://llvm.org/pubs/2005-06-12-MSP-PointerComp.html

If you are interested in this, you can find more papers, with examples, benchmarks, etc.

Bye,
bearophile
May 22 2010
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Andrei Alexandrescu Wrote:
 
 It all depends on what the largest payload is. One of my apps' largest 
 structures was a hash, which was almost twice as large in the 64-bit 
 version.
It's always possible to trim down the bits used for a pointer inside a data structure if the savings really matters. Doing so can create some really interesting bugs though.
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Andrei Alexandrescu Wrote:
 It all depends on what the largest payload is. One of my apps' largest 
 structures was a hash, which was almost twice as large in the 64-bit 
 version.
It's always possible to trim down the bits used for a pointer inside a data structure if the savings really matters. Doing so can create some really interesting bugs though.
The 'reduced' pointer memory model is a bad fit for D because:

1. D has to work with the corresponding C compiler, which does not support such a memory model. This kills it right there.

2. This will kill off a lot of the benefits of having a large address space that have nothing to do with allocating a lot of memory. I mentioned these in another post.

3. Having to support 2 memory models doubles the work. Two libraries, two test suite runs, more documentation, inevitable user confusion and mismatching, everyone shipping a library has to ship two, etc.

If you must shrink the space required by pointers for a particular data structure, I suggest storing an offset instead of the pointer. Then, follow the indirection by adding said offset to a "base pointer".
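A minimal sketch of what that might look like as ordinary library code, assuming all nodes live in a single array that acts as the base region (the names are illustrative, not a real implementation):

// Sketch only: "pointers" are 32-bit offsets into one array,
// added to the array's base pointer on every access.
struct Offset(T)
{
    uint off;  // 4 bytes, even on a 64-bit target

    // Follow the indirection: base pointer + stored offset.
    T* deref(T* base) const { return base + off; }
}

struct Node
{
    Offset!Node next;  // half the size of a real Node* on 64-bit
    int payload;
}

void example()
{
    auto pool = new Node[](1024);    // the "base pointer" region
    pool[0].next = Offset!Node(1);   // link node 0 -> node 1
    Node* second = pool[0].next.deref(pool.ptr);
    second.payload = 42;
}

The stored links stay 32 bits regardless of the target; the cost is that the base pointer has to be threaded through (or kept global) for every access.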
May 22 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 1. D has to work with the corresponding C compiler, which does not support
 such a memory model. This kills it right there.
But the 'need' to do it can "resurrect" this feature from the dead. Sometimes you just need to do something, even if such a thing was not seen as "possible" in the past. The Oracle JavaVM is already using this optimization, but indeed it doesn't need to keep compatibility with the C compiler. This shows pointer compression in C and the like: http://llvm.org/pubs/2005-06-12-MSP-PointerComp.html Even if pointer compression can cause problems at the interface between C and D, there can be ways to decompress pointers when they are given to C libraries. So you can perform more efficient computations inside D code, and adapt (inflate) your pointers when they are needed for processing inside C code. There are things (like pointer compression, de-virtualization, dynamic decompilation, and so on) that future C-class languages can find useful to do that C compilers ten years ago didn't even think possible. Things are not set in stone; there's change too. Don't kill an idea just because it was kind of impossible (and probably kind of useless too) fifteen years ago. Bye, bearophile
May 23 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 1. D has to work with the corresponding C compiler, which does not support
 such a memory model. This kills it right there.
But the 'need' to do it can "resurrect" this feature from the dead. Sometimes you just need to do something, even if such a thing was not seen as "possible" in the past. The Oracle JavaVM is already using this optimization, but indeed it doesn't need to keep compatibility with the C compiler. This shows pointer compression in C and the like: http://llvm.org/pubs/2005-06-12-MSP-PointerComp.html Even if pointer compression can cause problems at the interface between C and D, there can be ways to decompress pointers when they are given to C libraries. So you can perform more efficient computations inside D code, and adapt (inflate) your pointers when they are needed for processing inside C code. There are things (like pointer compression, de-virtualization, dynamic decompilation, and so on) that future C-class languages can find useful to do that C compilers ten years ago didn't even think possible. Things are not set in stone; there's change too. Don't kill an idea just because it was kind of impossible (and probably kind of useless too) fifteen years ago.
The paper describes an automatic way to do what I'd suggested previously - replacing the pointers with offsets from a base pointer. This is a lot like how the 'far' memory model in 16-bit code worked. Doing it in an automated way requires whole program analysis, something not entirely practical in a language designed to support separate compilation. On the other hand, D has plenty of abstraction abilities to make this doable by hand for selected data structures.
May 23 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Doing it in an automated way 
 requires whole program analysis, something not entirely practical in a
 language designed to support separate compilation.
Compiling programs of a dynamic language like Lua was seen as hopelessly inefficient. But today programs running on LuaJIT are often faster than equivalent FP-heavy D programs compiled with DMD. So it's all in having a positive attitude toward technological problems: if the need to do something grows strong enough, people usually find a way to do it :-) Bye, bearophile
May 23 2010
next sibling parent reply retard <re tard.com.invalid> writes:
Sun, 23 May 2010 04:14:30 -0400, bearophile wrote:

 Walter Bright:
 Doing it in an automated way
 requires whole program analysis, something not entirely practical in a
 language designed to support separate compilation.
Compiling programs of a dynamic language like Lua was seen as hopelessly inefficient. But today programs running on LuaJIT are often faster than equivalent FP-heavy D programs compiled with DMD. So it's all in having a positive attitude toward technological problems: if the need to do something grows strong enough, people usually find a way to do it :-)
I don't think the D community is really interested in hearing something positive about dynamically typed non-native languages. Traditionally that's the best way to wreck your efficiency and it's tough to admit that those languages are now better. The traditional native code way is to use primitive compilers and brute force via inline asm.
May 25 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 I don't think the D community is really interested in hearing something 
 positive about dynamically typed non-native languages. Traditionally 
 that's the best way to wreck your efficiency and it's tough to admit that 
 those languages are now better. The traditional native code way is to use 
 primitive compilers and brute force via inline asm.
If this were true, C and C++ would be dead languages. C++, for example, is often used in combination with Python. The C++ part is for the bits that need to be fast. BTW, even the best compilers fall far short of what an expert can do with assembler.
May 25 2010
parent retard <re tard.com.invalid> writes:
Tue, 25 May 2010 14:22:47 -0700, Walter Bright wrote:

 retard wrote:
 I don't think the D community is really interested in hearing something
 positive about dynamically typed non-native languages. Traditionally
 that's the best way to wreck your efficiency and it's tough to admit
 that those languages are now better. The traditional native code way is
 to use primitive compilers and brute force via inline asm.
If this were true, C and C++ would be dead languages. C++, for example, is often used in combination with Python. The C++ part is for the bits that need to be fast. BTW, even the best compilers fall far short of what an expert can do with assembler.
It's impossible to say whether e.g. LuaJIT is faster than some C++ compiler. The code matters. Bad code written by a novice programmer often works faster when a higher-level language is used because there's more room for optimizations. However, it really depends on the quality of the optimizations done by the compiler. What I wanted to point out was that if a person needs to choose between D (DMD) and Lua (LuaJIT), it would probably make more sense to use LuaJIT if the user wants better-performing code. However, D (LDC) and D (some other vendor who uses modern backends like LLVM/GCC) probably beat DMD here. Almost all compilers probably beat it.
May 25 2010
prev sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Sun, May 23, 2010 at 1:14 AM, bearophile <bearophileHUGS lycos.com> wrote:
 Walter Bright:

 Compiling programs of a dynamic language like Lua was seen as hopelessly
inefficient. But today programs running on LuaJIT are often faster
than equivalent FP-heavy D programs compiled with DMD.
Do you have any citations of that? All I can find on LuaJIT.org is comparisons of LuaJIT vs other versions of Lua. --bb
May 25 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Bill Baxter:
 Do you have any citations of that?  All I can find on LuaJIT.org is
 comparisons of LuaJIT vs other versions of Lua.
On my site you can see a version of the SciMark2 benchmark (that contains several sub-benchmarks, naive scientific kernels, mostly) for D with numerous timings. LDC is able to compile it quite well. You can find a version of that code here: http://luajit.org/download/scimark.lua I have compiled the awesome LuaJIT (it's easy) on Linux, and took timings against LDC and DMD. I have taken similar timings for another benchmark (nbody, from the Shootout site). Bye, bearophile
May 25 2010
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, May 25, 2010 at 12:11 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Bill Baxter:
 Do you have any citations of that?  All I can find on LuaJIT.org is
 comparisons of LuaJIT vs other versions of Lua.
 On my site you can see a version of the SciMark2 benchmark (that contains several sub-benchmarks, naive scientific kernels, mostly) for D with numerous timings. LDC is able to compile it quite well.
 You can find a version of that code here:
 http://luajit.org/download/scimark.lua
 I have compiled the awesome LuaJIT (it's easy) on Linux, and took timings against LDC and DMD.
 I have taken similar timings for another benchmark (nbody, from the Shootout site).

So LuaJIT beats D on some or all of those benchmarks?  I can't quite
remember what your website URL is.
But I did find this:
http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=luajit&lang2=gpp

I was thinking LuaJIT would be too new and/or fringe for it to be on
the Alioth shootout, but it's there.
From that it looks like LuaJIT can't beat g++ for speed on any of the
benchmarks. You disagree with those results? --bb
May 25 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Bill Baxter:
 So LuaJIT beats D on some or all of those benchmarks?
It's faster than, or close to, D code compiled with DMD.
From that it looks like LuaJIT can't beat g++ for speed on any of the
benchmarks. You disagree with those results?
I don't disagree with those results; in my original post I said: >But today programs running on LuaJIT are often faster than equivalent FP-heavy D programs compiled with DMD.< This means comparing FP-heavy programs compiled with LuaJIT 2.0a4 and DMD 2.x. I have not given hard timings because the point of my post was qualitative and not quantitative :-) Bye, bearophile
May 25 2010
prev sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Tue, May 25, 2010 at 12:45 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, May 25, 2010 at 12:11 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Bill Baxter:
 Do you have any citations of that?  All I can find on LuaJIT.org is
 comparisons of LuaJIT vs other versions of Lua.
 On my site you can see a version of the SciMark2 benchmark (that contains several sub-benchmarks, naive scientific kernels, mostly) for D with numerous timings. LDC is able to compile it quite well.
 You can find a version of that code here:
 http://luajit.org/download/scimark.lua
 I have compiled the awesome LuaJIT (it's easy) on Linux, and took timings against LDC and DMD.
 I have taken similar timings for another benchmark (nbody, from the Shootout site).

 So LuaJIT beats D on some or all of those benchmarks?  I can't quite
 remember what your website URL is.
 But I did find this:
 http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=luajit&lang2=gpp

 I was thinking LuaJIT would be too new and/or fringe for it to be on
 the Alioth shootout, but it's there.

 From that it looks like LuaJIT can't beat g++ for speed on any of the
 benchmarks.  You disagree with those results?
Nevermind. I realize you didn't say that LuaJIT was faster than g++, just faster than DMD. But that last part made it sound like you thought LuaJIT was on track to eventually outperform all compilers. As in the need for fast JIT is strong enough that eventually people will figure out how to make it faster than everything else out there. --bb
May 25 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 The Oracle JavaVM is already using this optimization, but indeed it doesn't
 need to keep compatibility with the C compiler. This shows pointer
 compression in C and the like: 
 http://llvm.org/pubs/2005-06-12-MSP-PointerComp.html
Oh, I forgot to mention. Back in the 16-bit days, I invented something called a "handle pointer".

http://www.digitalmars.com/ctg/handle-pointers.html

It was a special pointer type that was dereferenced through a function call. The particular implementation of it was to enable the use of bank-switched EMS memory as if it were regularly addressable memory.

In D this would be better off making the special pointer types a user defined struct type. Compiler support isn't necessary.
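For what it's worth, a bare-bones sketch of such a struct in D2 (mapHandle and every other name here is an invented stand-in for whatever banking or decompression scheme would really sit behind the handle):

// A "handle" dereferenced through a function call, done as a plain
// library struct with no compiler support.
T* mapHandle(T)(uint handle)
{
    // Invented for the example: resolve a handle to a real address.
    __gshared T[1024] storage;
    return &storage[handle];
}

struct Handle(T)
{
    uint id;

    // Every access goes through the mapping function.
    ref T deref() { return *mapHandle!T(id); }
    alias deref this;  // lets a Handle!T be read where a T is expected
}

void example()
{
    auto h = Handle!int(7);
    h.deref() = 123;  // write through the mapping function
    int x = h + 1;    // reads convert via alias this, so x is 124
}

With the alias this forwarding, user code mostly treats the handle as if it were the object itself, while every access actually goes through the mapping function.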
May 23 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 In D this would be better off making the special pointer types a user defined 
 struct type. Compiler support isn't necessary.
Nice, thank you :-) I will try to implement this, I have already written something similar in D2. Bye, bearophile
May 23 2010
prev sibling parent Pelle <pelle.mansson gmail.com> writes:
On 05/23/2010 09:39 AM, Walter Bright wrote:
 Oh, I forgot to mention. Back in the 16 bit days, I invented something
 called a "handle pointer".

 http://www.digitalmars.com/ctg/handle-pointers.html
"You must be sure your program frees memory when it exits; otherwise, it will be unavailable to other programs until the machine is re-booted." Ha ha, oh wow. :)
May 23 2010
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:ht9atu$rop$1 digitalmars.com...
 Sat, 22 May 2010 13:59:34 -0400, Nick Sabalausky wrote:
 Most apps don't need native x86_64. Only things that really push the
 limits of CPU/memory utilization need it, which, aside from bloatware
 (which admittedly is at epidemic levels lately), is really only a
 minority of apps. For the rest, if it already runs fine on 32-bit, then
 the same exec on a 64-bit machine is only going to run better anyway,
 and if is already ran fine before, then there's no problem.
You're suffering Stockholm syndrome there. Not having a functional 64-bit compiler isn't a positive feature.
I never said it was. All I said was that most apps don't need native 64-bit versions. Don't go pulling out strawmen.
 On a 4 GB system you lose 600+ MB of memory when using a 32-bit operating
 system without PAE support. In addition, x86 programs might be tuned for
 i586 or i386, forcing them to utilize only 50% of the registers
 available. In the worst case they don't even use SSE at all! Some
 assembly experts here probably know how much slower x87 is when compared
 to SSE2+.
Take a 32-bit executable optimized for i386 or i586, that runs acceptably well on a 32-bit system (say, a P4, or even a P4-era Celeron). Take that same binary, put it on a 64-bit system (say, your Core i7). It will run *at least* as fast, most likely faster. Could it be made even faster than that with a 64-bit-native recompile? Sure. But if the 32-bit binary already ran acceptably well on the 32-bit system, and even faster on the 64-bit system, then who gives a shit?
 Guess how much a 64-bit system with 4 GB of RAM costs these days - a
 quick search gave me the number $379 at
Guess how much more that costs me than using my 32-bit system that already does everything I need it to do? $379. Keep in mind, I live in a normal place, not some fantasy land like California where a million dollars is pocket change. If I had hundreds of dollars to toss around, I'd get my bad tooth pulled. At least then I'd be getting a non-trivial benefit.
May 22 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 If I had hundreds of dollars to toss around, I'd get my bad 
 tooth pulled.
Your original teeth are always better than the replacements, no matter how bad they are, unless they are causing you great pain. Don't let some greedy dentist convince you otherwise. Pulling a tooth can also destabilize its neighbors. 30 years ago, the dentist told me I needed my wisdom teeth pulled. I refused, and I still have them, and they're fine. Another thing - dentistry advances rapidly, along with their ability to save teeth. I don't know your situation, but be reluctant and skeptical about pulling teeth. Get a second opinion.
May 22 2010
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:ht9ka3$1dqd$1 digitalmars.com...
 Nick Sabalausky wrote:
 If I had hundreds of dollars to toss around, I'd get my bad tooth pulled.
Your original teeth are always better than the replacements, no matter how bad they are, unless they are causing you great pain. Don't let some greedy dentist convince you otherwise. Pulling a tooth can also destabilize its neighbors. 30 years ago, the dentist told me I needed my wisdom teeth pulled. I refused, and I still have them, and they're fine. Another thing - dentistry advances rapidly, along with their ability to save teeth. I don't know your situation, but be reluctant and skeptical about pulling teeth. Get a second opinion.
Good to know. In my case though, it's a wisdom tooth and, I would say that it's chipped, but it would be more accurate to say that half of it is gone, and what remains has two sharp edges (kinda like a two-pronged fork pointing into the gums). It's not really causing much pain, but I do consciously try to chew on the other side (unnatural for me) because if I were to bite down on something the wrong way, then it would hurt like hell. Plus two dentists have already said it should go. The first one wanted me to get all the wisdoms out. But the second one just said the one needed it and that the other three didn't matter either way. I know I didn't actually need to say any of that, but, well, HIPAA be damned ;)
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 Good to know. In my case though, it's a wisdom tooth and, I would say that 
 it's chipped, but it would be more accurate to say that half of it is gone, 
 and what remains has two sharp edges (Kinda like a two-pronged fork pointing 
 into the gums). It's not really causing much pain, but I do consciously try 
 to chew on the other side (unnatural for me) because if I were to bite down 
 on something the wrong way, then it would hurt like hell. Plus two dentists 
 have already said it should go.
My dentist told me they can now grind off the rotten top of a tooth and add a new one attached to the original root. He told me that because one of my molars has a growing crack in it. It's not a crisis now, but it will be. Wisdom teeth can be hard to work on because they're so far back. Dentists have a tendency to just pull them out rather than try to save them. I'd ask your dentist if he can just grind off the sharp edges, add a bit of epoxy (they use epoxy now to fill cavities), and keep the root in place. Or maybe do a crown.
 The first one wanted me to get all the
 wisdoms out. But the second one just said the one needed it and that the
 other three didn't matter either way.
Never go back to that first one. I'd question the judgment of the 2nd in saying it "didn't matter" if they were pulled. I switched dentists when that one greedy jerk wanted to pull my wisdom teeth, and found one that shared my views on preserving the natural teeth as much as possible.
May 22 2010
next sibling parent reply retard <re tard.com.invalid> writes:
Sat, 22 May 2010 16:56:39 -0700, Walter Bright wrote:

 Nick Sabalausky wrote:
 Good to know. In my case though, it's a wisdom tooth and, I would say
 that it's chipped, but it would be more accurate to say that half of it
 is gone, and what remains has two sharp edges (Kinda like a two-pronged
 fork pointing into the gums). It's not really causing much pain, but I
 do consciously try to chew on the other side (unnatural for me) because
 if I were to bite down on something the wrong way, then it would hurt
 like hell. Plus two dentists have already said it should go.
My dentist told me they can now grind off the rotten top of a tooth and add a new one attached to the original root. He told me that because one of my molars has a growing crack in it. It's not a crisis now, but it will be.

Wisdom teeth can be hard to work on because they're so far back. Dentists have a tendency to just pull them out rather than try to save them. I'd ask your dentist if he can just grind off the sharp edges, add a bit of epoxy (they use epoxy now to fill cavities), and keep the root in place. Or maybe do a crown.

 The first one wanted me to get all the wisdoms out. But the second
 one just said the one needed it and that the other three didn't
 matter either way.

Never go back to that first one. I'd question the judgment of the 2nd in saying it "didn't matter" if they were pulled. I switched dentists when that one greedy jerk wanted to pull my wisdom teeth, and found one that shared my views on preserving the natural teeth as much as possible.
In my case the wisdom teeth didn't really fit in my mouth and harmed the occlusion. The doctors first tried to fix it with braces, but the growing wisdom tooth pushed it so hard that the first iteration of braces broke. After they removed the wisdom teeth, the treatment again improved the occlusion to its previous pre-wisdom-teeth state.
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 In my case the wisdom teeth didn't really fit in my mouth and harmed the 
 occlusion. The doctors first tried to fix it with braces, but the growing 
 wisdom tooth pushed it so hard that the first iteration of braces broke. 
 After they removed the wisdom teeth, the treatment again improved the 
 occlusion to its previous pre-wisdom-teeth state.
There can be good reasons to remove wisdom teeth. In my case, the dentist did not have a good reason, and in Nick's case the first clearly did not, either. I've also seen dentists that do astonishingly good work. Just be careful. They're your teeth, not the dentist's, and you're the one who has to live with the results. There's no going back from having one pulled. For me, this all went back to when I was a kid: my dad told me a story about a colleague of his with bad teeth. He had a lot of trouble with them, and his dentist eventually convinced him to have them pulled and replaced with dentures. My father said he sure was sorry, because no matter the trouble he had with his teeth, they were a helluva lot better than dentures.
May 22 2010
parent BCS <none anon.com> writes:
Hello Walter,

 They're your teeth, not the dentist's,
But only as long as they are in your mouth. Once they get pulled they are a bio-hazard and you can't have them back. Or so say some dentists. :Þ -- ... <IXOYE><
May 23 2010
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:ht9qvs$1po0$1 digitalmars.com...
 Nick Sabalausky wrote:
 Good to know. In my case though, it's a wisdom tooth and, I would say 
 that it's chipped, but it would be more accurate to say that half of it 
 is gone, and what remains has two sharp edges (Kinda like a two-pronged 
 fork pointing into the gums). It's not really causing much pain, but I do 
 consciously try to chew on the other side (unnatural for me) because if I 
 were to bite down on something the wrong way, then it would hurt like 
 hell. Plus two dentists have already said it should go.
My dentist told me they can now grind off the rotten top of a tooth and add a new one attached to the original root. He told me that because one of my molars has a growing crack in it. It's not a crisis now, but it will be. I'd ask your dentist if he can just grind off the sharp edges, add a bit of epoxy (they use epoxy now to fill cavities), and keep the root in place.
Good stuff to know.
 Wisdom teeth can be hard to work on because they're so far back. Dentists 
 have a tendency to just pull them out rather than try to save them.
Yea, not surprised.
 Or maybe do a crown.
The second one did say something like "It wouldn't be worth it to do a crown". Seemed to be implying that would be overkill. But I may check into that and the other stuff though. I checked into the dental surgeons he recommended (apparently, ordinary dentists don't do teeth pulling, or at least not wisdom teeth, or at least not busted up wisdom teeth...I dunno), and it was around $700 at either place. That's including being put under, but I'm an absolute baby about these sorts of things: if I'm going to have a tooth pulled, I'm sure as hell not gonna be awake for it ;)
 The first one wanted me to get all the
 wisdoms out. But the second one just said the one needed it and that the
 other three didn't matter either way.
Never go back to that first one.
Yea, never planned to. For various reasons actually. That one was actually the instructor of a student dentist that was working on me (because they're cheap and I'm poor ;) ). What she was telling me did seem slightly suspicious and I was convinced right then and there I needed a second opinion. Another thing though, is that this was part of a hospital system we have in Cleveland called University Hospitals, and I've had MAJOR problems with other offices in that system since then. I've always considered the other major system, Cleveland Clinic, to have absolute piss-poor management, but University Hospitals makes Cleveland Clinic look downright competent by comparison.
 I'd question the judgment of the 2nd in saying it "didn't matter" if they 
 were pulled.
Well, his exact words were more like "It's up to you if you want them pulled." Same thing, I suppose.
 I switched dentists when that one greedy jerk wanted to pull my wisdom 
 teeth, and found one that shared my views on preserving the natural teeth 
 as much as possible.
Yea, it can be difficult to find good people.
May 22 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 The second one did say something like "It wouldn't be worth it to do a 
 crown". Seemed to be implying that would be overkill. But I may check into 
 that and the other stuff though. I checked into the dental surgeons he 
 recommended (apparently, ordinary dentists don't do teeth pulling, or at 
 least not wisdom teeth, or at least not busted up wisdom teeth...I dunno), 
 and it was around $700 at either place. That's including being put under, 
 but I'm an absolute baby about these sorts of things: if I'm going to have a 
 tooth pulled, I'm sure as hell not gonna be awake for it ;)
Shopping around for a good price is a good idea, asking for a discount also can give good results. That said, the health of one's mouth is a canary for the rest of the body. Good teeth also are a major quality of life issue. It's not a good place to cut corners. I once knew a woman who came here from Russia. She had bad teeth, and a cringe-worthy smile. She had no money, but saved every dime until she could afford the dentist. The results were amazing. It completely transformed her face in a good way, and I think it turned around her life for the better.
May 22 2010
prev sibling parent reply retard <re tard.com.invalid> writes:
Sat, 22 May 2010 16:25:55 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:ht9atu$rop$1 digitalmars.com...
 Sat, 22 May 2010 13:59:34 -0400, Nick Sabalausky wrote:
 Most apps don't need native x86_64. Only things that really push the
 limits of CPU/memory utilization need it, which, aside from bloatware
 (which admittedly is at epidemic levels lately), is really only a
 minority of apps. For the rest, if it already runs fine on 32-bit,
 then the same exec on a 64-bit machine is only going to run better
 anyway, and if is already ran fine before, then there's no problem.
You're suffering Stockholm syndrome there. Not having a functional 64-bit compiler isn't a positive feature.
I never said it was. All I said was that most apps don't need native 64-bit versions. Don't go pulling out strawmen.
Sorry for pulling that out, but I thought the claim "most apps" was a bit overoptimistic. If D is The next gen language, it probably also should solve the next generation of problems. I don't see much point in rewriting notepad, mspaint or solitaire in D. If you only need to deal with a small amount of data, why use native low-level languages? The fact is that resource usage will grow, and artificial limitations (32-bit code) just make the language irrelevant a lot faster.

Another problem with x86 code is that you need to install all kinds of 32-bit libraries on an x86-64 (Linux) system. You also don't have the full 2.5 - 3.7 GB of RAM available for processes; the limit is something like 2 or 3 GB depending on the OS settings [1] (in some cases you need to assume the user has only allowed 2 GB for user mode processes). So in reality you could probably have even less than 2 GB available. Is that a problem? Yes, it is a serious problem in professional audio/video/photo applications, soon games (huge game worlds, complex SVM/ANN AI), and all kinds of servers.
 Take a 32-bit executable optimized for i386 or i586, that runs
 acceptably well on a 32-bit system (say, a P4, or even a P4-era
 Celeron). Take that same binary, put it on a 64-bit system (say, your
 Core i7). It will run *at least* at fast, most likely faster.
Yea, it will run faster, but who said the original application ran fast enough? CPU-demanding applications never run fast enough; applications tend to require more and more resources. It seems the x87 instructions (i386/586) have 5-6x larger latency than SSE2+, and SSE2+ has 2-4x greater throughput. Combined, that could mean 20x slower loops.
 Could it be made even faster than that with a 64-bit-native recompile?
 Sure. But if the 32-bit binary already ran acceptably well on the 32-bit
 system, and even faster on the 64-bit system, then who gives a shit?
 
 Guess how much a 64-bit system with 4 GB of RAM costs these days - a
 quick search gave me the number $379 at
Guess how much more that costs me than using my 32-bit system that already does everything I need it to do? $379.
Sure. And I have to admit I don't really know what your target audience is. It might even be 8086 systems, since IIRC dmd/dmc support old 16-bit DOS environments. But most commercial applications aren't geared towards Your 32-bit system. There's a good reason - people do upgrade their systems at least once in 5 years (x86-64 appeared 7 years ago..). Your system *will* physically break at some point and you have to replace it, probably with a faster one, because they won't be selling compatible parts anymore. Computers have a limited life time. Ordinary people don't lubricate their fans or replace bad capacitors themselves. You can find used parts, but those are more expensive than new ones. For example a used 128MB/SDRAM-100 module typically costs as much as a 1GB/DDR2-800 here. Budget GPUs for the PCI bus cost 4x as much as similar PCI Express cards. A 750GB PATA disk costs as much as a 1500GB SATA-2 disk. And let's be honest, $379 isn't that much - if you only upgrade the cpu+mobo+ram+gpu, it's closer to $100-150. If you can't afford that much once in 5 years, you should stop developing software, seriously.

If Your application doesn't require new hardware, the 3rd party software forces you to upgrade. For example, recently I noticed that the Ati/Nvidia GPU control panel requires a .NET version that is not available for Windows 2000 (and that's not the only program not working on Windows 2000 anymore..). So I must buy a new operating system.. but people can't legally sell their used OEM Windows without also selling me their 64-bit machines =) And I can't buy a new Windows XP/Vista license, only Windows 7 is available in stores. So basically I'm forced to also upgrade the hardware.

[1] http://www.brianmadden.com/blogs/brianmadden/archive/2004/02/19/the-4gb-windows-memory-limit-what-does-it-really-mean.aspx
May 22 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Sorry for pulling out that, but I thought the claim "most apps" was a bit 
 overoptimistic. If D is The next gen language, it probably also should 
 solve the next generation of problems.
FWIW, I fully agree with the notion that D needs to fully support 64 bit compilation.
May 22 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 retard wrote:
 Sorry for pulling out that, but I thought the claim "most apps" was a bit
 overoptimistic. If D is The next gen language, it probably also should
 solve the next generation of problems.
FWIW, I fully agree with the notion that D needs to fully support 64 bit compilation.
Is that still the next priority after finishing D2? Now that most of the more annoying bugs in DMD and Phobos have been fixed, lack of 64-bit support has become by far my number 1 complaint about D. I do bioinformatics research in D and overall I love it because it's about the only language that allows me to get prototypes up and running quickly and yet still have them execute efficiently. On the other hand, we have compute nodes at my university with 100+ _gigabytes_ of RAM that I can't use. Even the lack of libraries in D doesn't bother me much because I can roll my own and they end up having better APIs than would be possible in an older, cruftier, less featureful language. The only time I've ever seriously regretted using D as my language of choice is when I'm working with huge datasets and want access to all this memory and had to resort to kludges to cram my programs into the artificial limits of 32-bit address space.
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Is that still the next priority after finishing D2?
Yes. I think no support for 64 bits is a serious deficit.
May 22 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 dsimcha wrote:
 Is that still the next priority after finishing D2?
Yes. I think no support for 64 bits is a serious deficit.
I should amend that by saying that post D2 emphasis will be on addressing problems with the toolchain, of which 64 bit support is a big one. Language changes will be a very low priority. Other toolchain problems are things like shared libraries, installation, bugzilla bugs, etc.
May 22 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I should amend that by saying that post D2 emphasis will be on addressing 
 problems with the toolchain, of which 64 bit support is a big one.
 
 Language changes will be a very low priority.
 
 Other toolchain problems are things like shared libraries, installation, 
 bugzilla bugs, etc.
Things are not as simple as you say: among the Bugzilla bugs there are more than just pure implementation bugs; there are design bugs too, and those are essentially "language changes". I can list many of them for you. Bye, bearophile
May 22 2010
parent Bane <branimir.milosavljevic gmail.com> writes:
bearophile Wrote:

 Walter Bright:
 
 I should amend that by saying that post D2 emphasis will be on addressing 
 problems with the toolchain, of which 64 bit support is a big one.
 
 Language changes will be a very low priority.
 
 Other toolchain problems are things like shared libraries, installation, 
 bugzilla bugs, etc.
Things are not as simple as you say: among the Bugzilla bugs there is more than just pure implementation bugs, there are design bugs too, and they are essentially "language changes". I can list you many of them. Bye, bearophile
I just hope D won't try to become a Jack of All Trades, at least not too early :D
May 23 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those? -- Bruno Medeiros - Software Engineer
May 24 2010
next sibling parent reply dsimcha <dsimcha gmail.com> writes:
== Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
May 24 2010
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 On Windows there's been some talk of making an installer.  Personally, I think
 this should be a very low priority.  Unpacking a zip file may not be the most
 friendly installation method for someone who's completely computer illiterate,
 but we're talking about programmers here.  Even novice ones should be able to
 figure out how to unpack a zip file into a reasonable directory.
Some cases:

- A person who is an expert on Linux but ignorant of Windows, and has to install D on Windows. This person can solve the install problem, but can enjoy some help.

- A university student in the first year who wants to try D. Such a person knows basic programming, data structures, and some OOP, but doesn't know much else about computers, OSes, GUIs, etc. A very easy installer can help. Expert C++ programmers can be more interested in keeping on using C++, while young people that don't know C++ yet can be interested in learning D. Such people must be taken into account by D designers because they are the future.

- A person who at work has just 10 free minutes, and wants to install and take a quick look at the D language. Even reading the pages that explain how to install D can take time better spent in other ways.

- A person who is kind of interested in some language different from Java, but is not sure what to try: D, Scala, or some other language. So for such a person it's good to lower the entry costs of D usage as much as possible. If D catches the interest of such a person, they can later spend even months learning D.

So I'd like the digitalmars site (or another site) to keep the zip version, but I think a very easy and fast installer can be useful too (for the Windows version of ShedSkin we use a similar simple installer based on WinRar). Even an installer that needs just a few clicks can be a lot for the casual person who is not sure about trying D; for such persons the codepad and ideone sites can offer a way to compile and run D snippets online.

Even faster than an installer, for people who just want to try the language quickly, a single-file zero-install download-and-go thing can be useful (this doesn't replace the installer and the zip; it can be a third download option). This thing, when clicked, can show a basic editor window based on Scintilla.

Bye,
bearophile
May 24 2010
parent Adam Ruppe <destructionator gmail.com> writes:
On 5/24/10, bearophile <bearophileHUGS lycos.com> wrote:
 Even faster than an installer, for people that just want to try the language
 quickly it can be useful a single-file zero-install download-and-go thing
That's exactly what the zip is. Unzip it and go. To uninstall, just delete the folder. On Linux, you could argue that you have to chmod +x it (not strictly true, but granted). Don't even have that on Windows.
May 24 2010
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
dsimcha, el 24 de mayo a las 13:05 me escribiste:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source.
BTW, distributing a huge .zip with the binaries for all platforms is not ideal either. In Linux you have to make the binaries executable. The only straightforward option for Linux is the .deb, but it's only straightforward for 32-bit Ubuntu; anything else needs some (non-trivial) work. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- I would love to fix this world but I'm so lazy... so lazy...
May 24 2010
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Leandro Lucarella (llucax gmail.com)'s article
 dsimcha, el 24 de mayo a las 13:05 me escribiste:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source.
BTW, distributing a huge .zip with the binaries for all platforms is not ideal either. In Linux you have to make the binaries executables. The only straighforward option for Linux is the .deb, but it's only straightforward for Ubuntu 32-bits, anything else needs some (non-trivial) work.
If packaging nightmares like this don't explain why Linux hasn't succeeded on the desktop, then nothing will.
May 24 2010
parent reply retard <re tard.com.invalid> writes:
Mon, 24 May 2010 17:45:01 +0000, dsimcha wrote:

 == Quote from Leandro Lucarella (llucax gmail.com)'s article
 dsimcha, el 24 de mayo a las 13:05 me escribiste:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s
 article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries,
 installation, bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source.
BTW, distributing a huge .zip with the binaries for all platforms is not ideal either. In Linux you have to make the binaries executable. The only straightforward option for Linux is the .deb, but it's only straightforward for 32-bit Ubuntu; anything else needs some (non-trivial) work.
If packaging nightmares like this don't explain why Linux hasn't succeeded on the desktop, then nothing will.
The files inside the .zip won't run because one particular Mr. Bright doesn't set the +x flag on. It's not a fault of Linux if he is using a retarded Windows version of the zip packager. It's easy to fix, he just doesn't care. The zip works just fine even on a 64-bit system if the 32-bit libraries have been installed.

The Microsoft installer stuff doesn't work well either. Try running 64-bit installers on a 32-bit Windows system, or the latest .NET-expecting .msi files on Windows 95/98/ME or Windows NT4/2000.. now how does it handle package dependencies - the answer is it doesn't.

A 32-bit .deb works in most (if not all) 32-bit Debian derivatives unless the package is expecting some Ubuntu related configuration. Your solution seems to be: "because it's too complex to build packages for every distro, don't provide anything". Yay, nothing works.
May 25 2010
parent reply Justin Johansson <no spam.com> writes:
retard wrote:
 The files inside the .zip won't run because one particular Mr. Bright 
 doesn't set the +x flag on. It's not a fault of Linux if he is using 
 retarded Windows version of the zip packager. It's easy to fix, he just 
 doesn't care. The zip works just fine even on a 64-bit system if the 32-
 bit libraries have been installed.
Hey retard, while I enjoy reading a lot of the controversy that you like to create on this NG, sorry, on this occasion I think you are being somewhat unfair towards one particular person here.

My understanding is that .zip files are traditionally a DOS (originally PKZIP) and then Windows thing, which only later became available on Unix.

http://en.wikipedia.org/wiki/ZIP_%28file_format%29

Being so, .zip files do not inherently/traditionally support recording Unix file permissions such as +x within the archive. If such facilities exist today in Unix .zip utilities (and I am unaware of the same) these would have to be extensions over and above what .zip files are commonly understood to support given the DOS/PKZIP history of this file format.

Recording of Unix file permissions in archives is traditionally achieved with .tar files (and compressed variants) as I am sure you are well aware.

When downloading archives from the net, I look for .zip files if wanting to install on Windows and .tar or .tar.gz if wanting to install on Unixes. I imagine that most Unix-aware folks would do the same.

In this instance I think you should be asking that archives be available in both .tar and .zip variants for the respective platforms and not accusing a certain somebody of being delinquent in not setting a +x flag on a file in a .zip file.

Cheers
Justin Johansson
May 25 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Justin Johansson, el 25 de mayo a las 22:42 me escribiste:
 retard wrote:
The files inside the .zip won't run because one particular Mr.
Bright doesn't set the +x flag on. It's not a fault of Linux if he
is using retarded Windows version of the zip packager. It's easy
to fix, he just doesn't care. The zip works just fine even on a
64-bit system if the 32-
bit libraries have been installed.
Hey retard, while I enjoy reading a lot of the controversy that you like to create on this NG, sorry, on this occasion I think you are being somewhat unfair towards one particular person here. My understanding is that .zip files are traditionally a DOS (originally PKZIP) then come Windows thing then come Unix available. http://en.wikipedia.org/wiki/ZIP_%28file_format%29 Being so, .zip files do not inherently/traditionally support recording Unix file permissions such as +x within the archive. If such facilities exist today in Unix .zip utilities (and I am unaware of the same) these would have to be extensions over and above what .zip files are commonly understood to support given the DOS/PKZIP history of this file format.
Yes, it does:

$ touch bin
$ chmod a+x bin
$ ls -l bin
-rwxr-xr-x 1 luca luca 0 may 25 12:27 bin
$ zip bin.zip bin
  adding: bin (stored 0%)
$ rm bin
$ ls -l bin
ls: cannot access bin: No such file or directory
$ unzip bin.zip
Archive:  bin.zip
 extracting: bin
$ ls -l bin
-rwxr-xr-x 1 luca luca 0 may 25 12:27 bin
 Recording of Unix file permissions in archives is traditionally
 achieved with .tar files (and compressed variants) as I am sure you
 are well aware.
 
 When downloading archive from the net, I look for .zip files if
 wanting to install on Windows and .tar or .tar.gz if wanting to
 install on Unixes.  I imagine that most Unix-aware folks would do
 the same.
That makes no sense. Even if the history is interesting, nowadays both zip and tar work just fine on both Unix and Windows, so retard is right: the zip being broken is entirely Walter's fault. And I think he knows it; that's why he said he wanted to give some love to the toolchain and distribution issues when D2 is finished. I don't think either attacking Walter gratuitously or defending him blindly is good for D. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- I have committed sins, I have done wrong, I have been a victim of envy, selfishness, ambition, lies and frivolity, but I have always been an Argentine father who wants his son to succeed in life. -- Ricardo Vaporeso
May 25 2010
parent reply retard <re tard.com.invalid> writes:
Tue, 25 May 2010 13:38:00 -0300, Leandro Lucarella wrote:

 Justin Johansson, el 25 de mayo a las 22:42 me escribiste:
 retard wrote:
The files inside the .zip won't run because one particular Mr. Bright
doesn't set the +x flag on. It's not a fault of Linux if he is using
retarded Windows version of the zip packager. It's easy to fix, he
just doesn't care. The zip works just fine even on a 64-bit system if
the 32-
bit libraries have been installed.
Hey retard, while I enjoy reading a lot of the controversy that you like to create on this NG, sorry, on this occasion I think you are being somewhat unfair towards one particular person here. My understanding is that .zip files are traditionally a DOS (originally PKZIP) then come Windows thing then come Unix available. http://en.wikipedia.org/wiki/ZIP_%28file_format%29 Being so, .zip files do not inherently/traditionally support recording Unix file permissions such as +x within the archive. If such facilities exist today in Unix .zip utilities (and I am unaware of the same) these would have to be extensions over and above what .zip files are commonly understood to support given the DOS/PKZIP history of this file format.
Yes, it does: $ touch bin $ chmod a+x bin $ ls -l bin -rwxr-xr-x 1 luca luca 0 may 25 12:27 bin $ zip bin.zip bin adding: bin (stored 0%) $ rm bin $ ls -l bin ls: cannot access bin: No such file or directory $ unzip bin.zip Archive: bin.zip extracting: bin $ ls -l bin -rwxr-xr-x 1 luca luca 0 may 25 12:27 bin
 Recording of Unix file permissions in archives is traditionally
 achieved with .tar files (and compressed variants) as I am sure you are
 well aware.
 
 When downloading archive from the net, I look for .zip files if wanting
 to install on Windows and .tar or .tar.gz if wanting to install on
 Unixes.  I imagine that most Unix-aware folks would do the same.
That makes no sense. Even when history is interesting, now both zip and tar works just fine in both Unix and Windows, so retard is right, the zip being broken is entirely Walter's fault. And I think he knows it, that's why he said he wanted to give some love to the toolchain and distribution issues when D2 is finished. I don't think either attacking Walter gratuitously or defending him blindly is a good for D.
I wasn't attacking anyone, just pointing out the cause. Yes, it's because he uses a Windows version of zip, so it's his decision to make it harder for *nix users. Because of the non-free license he is the only person who can fix this -- I can't officially redistribute a fixed .zip package or any other repackaged dmd. And Justin is also right: I wouldn't mind having a .tar.gz package with the executable flags correctly set (and without win32 executables). Just repacking the distribution on a *nix computer would be enough to fix it, and would probably be the easiest solution if Windows zip archivers don't support setting the flag.
May 25 2010
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2010-05-25 20:00:44 -0400, retard <re tard.com.invalid> said:

 Because of the non-free license he is the only person who
 can fix this -- I can't officially redistribute a fixed .zip package or
 any other repackaged dmd.
Well, there is a way: create something that automatically downloads, extracts, and sets the executable bits on the proper files. This is exactly what D for Xcode does. If anyone is interested, I've put the scripts it uses for that here: <http://michelf.com/docs/dmd-install/> I expect they'll work fine on Linux, but you may want to change the DX_INSTALL_DIR variable in the dmd1download.sh and dmd2download.sh files (D for Xcode installs in /Library/Compilers). Feel free to adapt and redistribute the scripts as you like; they're available under GPL 2 or later, same as the rest of D for Xcode. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
May 26 2010
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 On Windows there's been some talk of making an installer.  Personally, I think
 this should be a very low priority.  Unpacking a zip file may not be the most
 friendly installation method for someone who's completely computer illiterate,
 but we're talking about programmers here.  Even novice ones should be able to
 figure out how to unpack a zip file into a reasonable directory.
There is an installer for Windows. See http://www.digitalmars.com/d/download.html
May 24 2010
prev sibling next sibling parent sybrandy <sybrandy gmail.com> writes:
On 05/24/2010 09:05 AM, dsimcha wrote:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
Actually, if you use InstallJammer, creating an installer could/should be very easy. I used it at work for a project and it will allow you to create an installer for a number of different platforms and a plain-old .zip/.tar.gz. Though, IMHO, I really like the way it's currently distributed. I've grown to appreciate the simple "unzip to install"/"delete directory to uninstall" way of doing things a lot. I just really hate the dependency that damn near everything has on the Windows registry. Casey
May 24 2010
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 24/05/2010 14:05, dsimcha wrote:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
Ah, ok, I've only used DMD on Windows so far, thus I wasn't aware of those problems. (On Windows it's fine.) And I think the zip file installation is fine versus using an installer; in fact I even prefer it. -- Bruno Medeiros - Software Engineer
May 26 2010
parent reply Don <nospam nospam.com> writes:
Bruno Medeiros wrote:
 On 24/05/2010 14:05, dsimcha wrote:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, 
 installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
Ah, ok, I've only used DMD on Windows so far, thus I wasn't aware of those problems. (On Windows it's fine.) And I think the zip file installation is fine versus using an installer; in fact I even prefer it.
Ditto. Windows installers always make me nervous -- you're never quite sure what they're going to do, and what problems they're about to cause.
May 26 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/26/2010 05:07 AM, Don wrote:
 Bruno Medeiros wrote:
 On 24/05/2010 14:05, dsimcha wrote:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries,
 installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
Ah, ok, I've only used DMD on Windows so far, thus I wasn't aware of those problems. (On Windows it's fine.) And I think the zip file installation is fine versus using an installer; in fact I even prefer it.
Ditto. Windows installers always make me nervous -- you're never quite sure what they're going to do, and what problems they're about to cause.
Hmmm, that's quite a change of attitude since my Windows days. I remember I wouldn't look twice at an application that didn't come with an installer. Andrei
May 26 2010
next sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 26/05/2010 14:23, Andrei Alexandrescu wrote:
 On 05/26/2010 05:07 AM, Don wrote:
 Bruno Medeiros wrote:
 On 24/05/2010 14:05, dsimcha wrote:
 == Quote from Bruno Medeiros (brunodomedeiros+spam com.gmail)'s article
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries,
 installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
On Linux, DMD can be a PITA to install if you're using an ancient distribution due to glibc being a different version than what DMD expects. I use such a machine and the only way to get DMD to work is to compile from source. On Windows there's been some talk of making an installer. Personally, I think this should be a very low priority. Unpacking a zip file may not be the most friendly installation method for someone who's completely computer illiterate, but we're talking about programmers here. Even novice ones should be able to figure out how to unpack a zip file into a reasonable directory.
Ah, ok, I've only used DMD on Windows so far, thus I wasn't aware of those problems. (On Windows it's fine.) And I think the zip file installation is fine versus using an installer; in fact I even prefer it.
Ditto. Windows installers always make me nervous -- you're never quite sure what they're going to do, and what problems they're about to cause.
Hmmm, that's quite a change of attitude since my Windows days. I remember I wouldn't look twice at an application that didn't come with an installer. Andrei
I may not agree entirely with Don, because my preference for zip files was referring to the DMD case only; it's not a general preference, it depends on the application. I would say an installer makes sense when the application needs to do OS tasks other than just extracting its files into a folder, such as creating Program menu shortcuts, setting up file associations, or configuring environment variables or OS services. Also if the application stores data or configuration in user home folders, or in the registry. Any of these reasons most likely merits an installer (and uninstaller). -- Bruno Medeiros - Software Engineer
May 26 2010
prev sibling parent sybrandy <sybrandy gmail.com> writes:
 And I think the zip file installation is fine versus using an
 installer, in fact I even prefer it.
Ditto. Windows installers always make me nervous -- you're never quite sure what they're going to do, and what problems they're about to cause.
Hmmm, that's quite a change of attitude since my Windows days. I remember I wouldn't look twice at an application that didn't come with an installer. Andrei
I too prefer the zip file approach. I guess what turned me to it was simplicity of installation (especially if you have limited rights), portability across machines, ease of removal, and the fact that it doesn't bloat your registry, which, AFAIK, still slows your machine down since it's read every time you launch a program. Though, if you package the compiler as a portable app (PortableApps.com), that would be cool too. You get the benefits of an installer and the installation is still unobtrusive. Casey
May 26 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/24/2010 06:21 AM, Bruno Medeiros wrote:
 On 23/05/2010 01:45, Walter Bright wrote:
 Walter Bright wrote:

 Other toolchain problems are things like shared libraries, installation,
 bugzilla bugs, etc.
Installation? What kind of problems are those?
E.g. I can't install the .deb file on my 64-bit Linux. Andrei
May 24 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
May 24 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
May 24 2010
next sibling parent reply eles <eles eles.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s
article
 On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
just type:

  sudo dpkg -i --force-architecture dmd_X.XXX-0_i386.deb

where dmd_X.XXX-0_i386.deb is the name of the .deb file
May 24 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/24/2010 07:16 PM, eles wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s
 article
 On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
just type sudo dpkg -i --force-architecture dmd_X.XXX-0_i386.deb where dmd_X.XXX-0_i386.deb is the name of the .deb file
Thanks. Is there a way to make that directive automatic inside the .deb file? Andrei
May 25 2010
parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on May 25 at 08:27, you wrote to me:
 On 05/24/2010 07:16 PM, eles wrote:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s
article
On 05/24/2010 05:20 PM, Walter Bright wrote:
Andrei Alexandrescu wrote:
E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
just type sudo dpkg -i --force-architecture dmd_X.XXX-0_i386.deb where dmd_X.XXX-0_i386.deb is the name of the .deb file
Thanks. Is there a way to make that directive automatic inside the .deb file?
No, that's a broken deb file. The "right thing to do" is make 2 packages, one for i386 and one for amd64. The amd64 packages should depend on the necessary 32-bit libraries like ia32-libs. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Every day 21 new born babies will be given to the wrong parents
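To illustrate, the control stanza of such an amd64 package might look something like this (a sketch only: the version and the exact dependency list are hypothetical, the right 32-bit compatibility packages vary by distribution and release, and required fields like Maintainer are omitted):

  Package: dmd
  Version: 2.046-0
  Architecture: amd64
  Depends: ia32-libs, gcc
  Description: Digital Mars D compiler
   32-bit compiler packaged for amd64 systems.

With the dependencies declared this way, apt/gdebi can pull in the 32-bit runtime automatically instead of leaving the user to discover the missing libraries by hand.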
May 25 2010
prev sibling next sibling parent eles <eles eles.com> writes:
Installation instructions for Linux (incl. 32-bit or 64-bit) are here:

http://www.digitalmars.com/d/2.0/dmd-linux.html

However, I would like to have it ported to 64-bit.

== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s
article
 On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
May 24 2010
prev sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Andrei Alexandrescu wrote:

 On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
DDebber will build packages for i386 and AMD64. The main difference is that the AMD64 package will depend on the required ia32 libraries, which will not be pulled in with -force-architecture. Just say'n. OK, it still isn't that simple, because if you don't have the required packages then dmd will be left unconfigured, since dpkg will not install them.
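(For what it's worth, the usual way out of that half-installed state is to let apt fetch the missing dependencies afterwards. A sketch, reusing the same placeholder file name as earlier in the thread:

  sudo dpkg -i --force-architecture dmd_X.XXX-0_i386.deb
  # dpkg leaves the package unconfigured when dependencies are missing;
  # --fix-broken (-f) makes apt fetch them and finish the configuration.
  sudo apt-get -f install

Not pretty, but it gets dmd configured on an amd64 box.)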
May 24 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/24/2010 08:03 PM, Jesse Phillips wrote:
 Andrei Alexandrescu wrote:

 On 05/24/2010 05:20 PM, Walter Bright wrote:
 Andrei Alexandrescu wrote:
 E.g. I can't install the .deb file on my 64-bit Linux.
I think the current .deb files can be.
Just tried again, same error message: Error: Wrong architecture 'i386' Let me know how I can help. Andrei
DDebber will build packages for i386 and AMD64. The main difference is that the AMD64 package will depend on the required ia32 libraries, which will not be pulled in with -force-architecture. Just say'n. OK, it still isn't that simple, because if you don't have the required packages then dmd will be left unconfigured, since dpkg will not install them.
I think at the end of the day we need a link that people can click on and that's that. How can we make that work? Do we need a 64-bit .deb, or is it possible to automatically instruct the package manager (in the case of Ubuntu gdebi) to install it with dependencies and all? Andrei
May 25 2010
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Andrei Alexandrescu wrote:

 I think at the end of the day we need a link that people can click on 
 and that's that. How can we make that work? Do we need a 64-bit .deb, or 
 is it possible to automatically instruct the package manager (in the 
 case of Ubuntu gdebi) to install it with dependencies and all?

 Andrei
Ubuntu (and family) is probably the only distro that you can expect gdebi to be installed on. And the only way to have it install the proper packages is to install a package with the required dependencies, e.g. an AMD64 package. What would really make many Linux users happy would be to provide a repository. Even Google doesn't provide a one-click install for their programs (I bring them up because they try very hard to be user friendly).
May 25 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/25/2010 09:22 AM, Jesse Phillips wrote:
 Andrei Alexandrescu wrote:

 I think at the end of the day we need a link that people can click on
 and that's that. How can we make that work? Do we need a 64-bit .deb, or
 is it possible to automatically instruct the package manager (in the
 case of Ubuntu gdebi) to install it with dependencies and all?

 Andrei
Ubuntu (and family) is probably the only distro that you can expect gdebi to be installed on. And the only way to have it install the proper packages is to install a package with the required dependencies e.g. an AMD64 package.
OK, thank you.
 What would really make many Linux users happy would be to provide a repository.
 Even Google doesn't provide a one-click install for their programs (I
 bring them up because they try very hard to be user friendly).
Good point. Who here knows what steps need be taken to create a repository? Andrei
May 25 2010
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Andrei Alexandrescu wrote:

 Good point. Who here knows what steps need be taken to create a repository?

 Andrei
I haven't tried it myself, but someone has for the Tango side. It doesn't look too difficult: http://www.debian-administration.org/articles/286 If you would like, I could try to come up with a configuration file this week/weekend.
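For the impatient, the article's recipe for a trivial "flat" repository boils down to roughly this (the directory and URL are hypothetical):

  # On the server: generate a package index over a directory of .debs.
  cd /var/www/d-apt
  dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

  # On each user's machine: add this line to /etc/apt/sources.list,
  #   deb http://example.com/d-apt ./
  # then install as usual:
  sudo apt-get update && sudo apt-get install dmd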
May 25 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/25/2010 10:38 PM, Jesse Phillips wrote:
 Andrei Alexandrescu wrote:

 Good point. Who here knows what steps need be taken to create a repository?

 Andrei
I haven't tried it myself, but someone has for the Tango side. It doesn't look too difficult: http://www.debian-administration.org/articles/286 If you would like, I could try to come up with a configuration file this week/weekend.
That would be awesome, thanks! Walter, it would also be great if you could contact the person who did the .deb file to also kindly ask for a 64-bit .deb. Thanks all, Andrei
May 26 2010
parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on May 26 at 08:19, you wrote to me:
 On 05/25/2010 10:38 PM, Jesse Phillips wrote:
Andrei Alexandrescu wrote:

Good point. Who here knows what steps need be taken to create a repository?

Andrei
I haven't tried it myself, but someone has for the Tango side. It doesn't look too difficult: http://www.debian-administration.org/articles/286 If you would like, I could try to come up with a configuration file this week/weekend.
That would be awesome, thanks! Walter, it would also be great if you could contact the person who did the .deb file to also kindly ask for a 64-bit .deb.
As Jesse Phillips said: DDebber will build packages for i386 and AMD64. The main difference is that the AMD64 package will depend on the required ia32 libraries, which will not be pulled in with -force-architecture. http://dsource.org/projects/ddebber "The goal is to give this program to Walter so he is able to build .deb packages and host them on digitalmars.com" Maybe he can take a look at that. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Ever tried? Ever failed? - Try again! Fail better!
May 26 2010
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Jesse Phillips, on May 25 at 14:22, you wrote to me:
 Andrei Alexandrescu wrote:
 
 I think at the end of the day we need a link that people can click on 
 and that's that. How can we make that work? Do we need a 64-bit .deb, or 
 is it possible to automatically instruct the package manager (in the 
 case of Ubuntu gdebi) to install it with dependencies and all?

 Andrei
Ubuntu (and family) is probably the only distro that you can expect gdebi to be installed on. And the only way to have it install the proper packages is to install a package with the required dependencies, e.g. an AMD64 package. What would really make many Linux users happy would be to provide a repository. Even Google doesn't provide a one-click install for their programs (I bring them up because they try very hard to be user friendly).
In Ubuntu it is extremely easy: just create a PPA[1]. For Debian it is not that easy, but it is not that hard either, and I think providing a (well done) .deb is acceptable. In Debian (or even Ubuntu) it could be possible to pull the package "upstream" (to the non-free repositories in Debian and to the multiverse repositories in Ubuntu, I think). *That* would be the ideal for a Debian/Ubuntu user. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- I am Peperino, Latin martyr: come to the barbecue, but bring the wine. -- Peperino Pómoro
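(For reference, once a source package exists, publishing to a PPA is roughly a two-liner; the PPA name and .changes file below are made up:

  debuild -S    # build a signed source package from the unpacked tree
  dput ppa:someuser/d-compilers dmd_2.046-0ubuntu1_source.changes

Launchpad then builds the binaries for each architecture itself, which sidesteps the whole i386-vs-amd64 .deb question.)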
May 25 2010
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:ht9n8n$rop$4 digitalmars.com...
 Sat, 22 May 2010 16:25:55 -0400, Nick Sabalausky wrote:

 "retard" <re tard.com.invalid> wrote in message
 news:ht9atu$rop$1 digitalmars.com...
 Sat, 22 May 2010 13:59:34 -0400, Nick Sabalausky wrote:
 Most apps don't need native x86_64. Only things that really push the
 limits of CPU/memory utilization need it, which, aside from bloatware
 (which admittedly is at epidemic levels lately), is really only a
 minority of apps. For the rest, if it already runs fine on 32-bit,
 then the same exec on a 64-bit machine is only going to run better
 anyway, and if it already ran fine before, then there's no problem.
You're suffering Stockholm syndrome there. Not having a functional 64-bit compiler isn't a positive feature.
I never said it was. All I said was that most apps don't need native 64-bit versions. Don't go pulling out strawmen.
Sorry for pulling that out, but I thought the claim "most apps" was a bit overoptimistic. If D is The next-gen language, it probably also should solve the next generation of problems.
I never said D or DMD shouldn't support x64, hence, yea, I agree that it should support 64-bit to, as you say "solve the next generation of problems". I just think that, for most apps, it's silly for someone to feel that they must have a 64-bit binary.
 I don't see much point in
 rewriting notepad, mspaint or solitaire in D.
For someone who seems to believe so strongly in "out with the old, in with the new", it seems rather odd that you expect apps along the lines of notepad, mspaint, etc. to all stick around forever and with the same old language.
 If you only need to deal
 with a small amount of data, why use native low-level languages?
1. Why not? There's nothing about smaller apps that necessitates anything like a VM. 2. To avoid pointless bloat. Just because some people have a gajillion cores and petabytes of ram doesn't mean there's any reason to settle for a notepad that eats up a third of that. 3. I know this isn't in line with the current trendiness-compliant viewpoints, but I'd much rather find a good language and be able to stick with it for whatever I need than constantly bounce around a hundred different languages for every little thing.
 The fact
 is that resource usage will grow and artificial limitations (32-bit code)
 just makes the language irrelevant a lot faster.
Why do you still imply that I've advocated keeping D/DMD 32-bit-only?
 Another problem with x86 code is that you need to install all kinds of 32-bit libraries on an x86-64 (Linux) system.
If your Core i7 system with 24 GB RAM has any trouble keeping around the 32-bit libs in addition to 64-bit ones, then you got royally screwed over. But if you're talking about the bother of installing the 32-bit libs, well, that's Linux for you (Not that I'm a fan of any particular OS).
 You also don't have the full 2.5 - 3.7 GB (depending on the OS) of RAM available for processes; the limit is something like 2 or 3 GB depending on the OS settings [1] (in some cases you need to assume the user has only allowed 2 GB for user-mode processes). So in reality you could probably have even less than 2 GB available.
I only have 1 GB installed, so I couldn't care less. (Although I could have sworn I had 2 GB. Weird...Maybe I moved the other stick over to my Linux box when I built it...)
 Is that a problem? Yes, it is a serious problem in
 professional audio/video/photo applications, soon games (huge game
 worlds, complex SVM/ANN AI), and all kinds of servers.
Video: By what I admit is a rather big coincidence, I have some video processing going on in the background right as I type this. Seriously. I do video processing on this machine and I get by fine. Obviously, I would want to get a 64-bit multi-core with gobs of RAM if I were doing it professionally, but I think you're seriously underestimating the capabilities of P4-era hardware.

Games: I'll put it this way: I bought a Wii, and not a 360 or a PS3, specifically because I don't give a rat's ass about game graphics that are anything more than, frankly, Gamecube or XBox 1 level. One of my favorite games is Megaman 9. So you can't tell me that all that fancy hardware is, or will ever be, necessary for games. It is necessary for *some* types of games, but these days that's only because developers like Epic are in bed with the hardware manufacturers and refuse to care one bit about anyone who isn't a High-Def graphics whore. Also, I hate playing games at a desk (or on a laptop, for that matter), so I almost always play on a console system, and therefore don't need my PC to play games anyway. A lot of gamers feel the same way (and there's plenty of other issues with games-on-a-PC too, like rootkit DRM and lack of a second-hand market). And for that matter, most PC users never go anywhere near the sorts of games that require fancy hardware anyway. So again, not a particularly compelling argument.

Servers: Sure, for a lot of servers. But that's only a subset of software in general. Besides, half of the sites out there run on notably slow platforms like PHP and Python, so really, there's a lot of people who clearly don't care about speed even for a server.
 Take a 32-bit executable optimized for i386 or i586, that runs
 acceptably well on a 32-bit system (say, a P4, or even a P4-era
 Celeron). Take that same binary, put it on a 64-bit system (say, your
 Core i7). It will run *at least* at fast, most likely faster.
Yea, it will run faster, but who said the original application ran fast enough?
I did when I said "**that runs acceptably well** on a 32-bit system". And contrary to popular belief, "software that runs acceptably well on a 32-bit system" is far from unreasonable. Most of the software I use runs acceptably well on my 32-bit system (a Celeron, even). And as for the software I use that doesn't run particularly well on this system (like FireFox 2), well, I can still get by fine with it, and there are always alternatives out there that do run much faster (meaning it's not an issue with my hardware being too slow), but that I just don't use because they have other drawbacks (like Chrome) and speed isn't always my top priority in choosing an app (or in choosing hardware, for that matter).
 CPU demanding applications never run fast enough. The
 applications tend to require more and more resources. It seems the x87
 instructions (i386/586) have 5-6x larger latency than SSE2+ and SSE2+ has
 2-4x greater throughput. Combined, that could mean 20x slower loops.
I used to have a double-speed CD burner. Got it way back when they were $200-$300 and triple-/quad-speeds didn't exist. Used to set up a burn, get something else useful done for half an hour, and then it'd be done. Later on, I got one of those super-fast burners, something like 32x, maybe 50x or so, I don't remember. God was it fast by comparison. Practically instant, it seemed. Soon after, I realized that it hardly ever made any real difference at all in overall time savings. Sometimes a 20x speed-up matters, sure. But not as often as people think. When it improves responsiveness, that's when it usually matters. When it improves something that's already inherently slow, it doesn't always matter quite as much as people think it does.
 Guess how much more that costs me than using my 32-bit system that
 already does everything I need it to do? $379.
Sure. And I have to admit I don't really know what your target audience is. It might even be 8086 systems, since IIRC dmd/dmc support old 16-bit DOS environments.
I don't know about DMC, but DMD, and even the D spec itself, doesn't do 16-bit. (A number of embedded developers (low-level, naturally) have voiced disappointment with that.)
 But most commercial applications aren't geared towards Your 32-bit
 system. There's a good reason - people do upgrade their systems at least
 once in 5 years (x86-64 appeared 7 years ago..). Your system *will*
 physically break at some point and you have to replace it, probably with
 a faster one, because they won't be selling compatible parts anymore.
 Computers have a limited life time.
If your computers break down in just five years, you're buying crap.
 (x86-64 appeared 7 years ago..)
Which hardly counts, since at the time it cost an arm and a leg and nothing used it. Basically, your average Joe was definitely *not* buying it.
 Ordinary people don't lubricate their
 fans or replace bad capacitors themselves.
Neither do I.
 You can find used parts, but those are more expensive than new ones. For
 example a used 128MB/SDRAM-100 module typically costs as much as a 1GB/
 DDR2-800 here. Budget GPUs for the PCI bus cost 4x as much as similar PCI
 Express cards. A 750GB PATA disk costs as much as a 1500GB SATA-2 disk.
 And let's be honest, $379 isn't that much - if you only upgrade the
 cpu+mobo+ram+gpu, it's closer to $100-150. If you can't afford that much
 once in 5 years, you should stop developing software, seriously.
First of all, I have gotten upgrades to this machine. Replaced the CD burner with a dual-layer DVD burner. Constantly add bigger hard drives (I admit, I can never have enough HDD space). Got a Bluetooth dongle to mess around with the Wii remote. Got a video capture card. Stuck in one of those memory card readers that fits in a 3.5" floppy bay (also have a 3.5" floppy drive, of course ;) ). Replaced my GeForce 2MX with a cheap Radeon 9200SE (IIRC) so I could mess around with pixel shading (which turned out to be a waste, since I never did get around to it). But you're still sidestepping the bigger picture here: If my computer is working fine for me, does what I need it to do, why in the world should I be expected to replace it at all? (Just to please some damn gear-heads, or lazy developers?) Especially if I have other things to spend money on that actually *do* matter.
 If Your application doesn't require new hardware, the 3rd party software
 forces you to upgrade. For example, recently I noticed that the Ati/
 Nvidia GPU control panel requires a .NET version that is not available
 for Windows 2000 (and that's not the only program not working on Windows
 2000 anymore..).
So what? Most 3rd party software is crap anyway. And as far as OEM software goes (and I'm definitely including both ATI and NVIDIA, as well as Toshiba, HP, and others, all from personal experience), well, I *would* say OEM software is always crap, but "crap" is far too kind a word for it. There's crap software, and then there's OEM software. And I've long been saying that hardware companies don't know a damn thing about software. Besides, ever since XP came out, Win2k was primarily just used by businesses. And MS-businesses always flock unquestioningly to the latest versions within about a year of release, and since Win2k, there's been XP, 2k3, Vista, 7, and there may have been another server one I'm forgetting. *I* probably wouldn't have yanked Win2k support, but it's not a great example of software forcing a hardware upgrade.
 So I must buy a new operating system.. but people can't
 legally sell their used OEM Windows without also selling me their 64-bit
 machines =) And I can't buy a new Windows XP/Vista license; only Windows
 7 is available in stores. So basically I'm forced to also upgrade the
 hardware.
I don't know first hand, but I keep hearing [non-MS] people claiming that Win7 runs faster than XP even on XP-era hardware.
May 22 2010
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/22/2010 02:22 PM, retard wrote:
 Sat, 22 May 2010 13:59:34 -0400, Nick Sabalausky wrote:

 "Robert Clipsham"<robert octarineparrot.com>  wrote in message
 news:ht8m7t$2qua$1 digitalmars.com...
   - and should I ever feel there's a use for my apps outside of
   localhost
 people will wonder why they don't support x86_64 natively (I believe
 this will change after D2 from various comments from Walter).
Most apps don't need native x86_64. Only things that really push the limits of CPU/memory utilization need it, which, aside from bloatware (which admittedly is at epidemic levels lately), is really only a minority of apps. For the rest, if it already runs fine on 32-bit, then the same exec on a 64-bit machine is only going to run better anyway, and if it already ran fine before, then there's no problem.
You're suffering Stockholm syndrome there. Not having a functional 64-bit compiler isn't a positive feature. On a 4 GB system you lose 600+ MB of memory when using a 32-bit operating system without PAE support. In addition, x86 programs might be tuned for i586 or i386, forcing them to utilize only 50% of the registers available. In the worst case they don't even use SSE at all! Some assembly experts here probably know how much slower x87 is when compared to SSE2+. Guess how much a 64-bit system with 4 GB of RAM costs these days - a quick search gave me the number $379 at http://www.bestbuy.com/site/HP+-+Factory-Refurbished+Desktop+with+AMD+Athlon&%23153;+II+X2+Dual-Core+Processor/9880623.p?id=1218188306780&skuId=9880623 I already have 24 GB in my Core i7 system. I can't imagine how a 32-bit system would benefit modern users.
You both have a point. Clearly not a lot of individual applications really need more than 4GB (though unfortunately, many are pushing up for the wrong reasons), but then a whole category of them would greatly benefit from expanded RAM availability. Andrei
May 22 2010
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 You both have a point. Clearly not a lot of individual applications 
 really need more than 4GB (though unfortunately, many are pushing up for 
 the wrong reasons), but then a whole category of them would greatly 
 benefit from expanded RAM availability.
I would phrase it as the greatly expanded address space. This offers a ton of benefits even if your app uses very little actual memory. For example, the stack size problem for threads pretty much goes away. Garbage collection can get much better. You can have much better hardware detection of wild pointers.
May 22 2010
prev sibling parent Jonathan M Davis <jmdavisProg gmail.com> writes:
Andrei Alexandrescu wrote:
 
 You both have a point. Clearly not a lot of individual applications
 really need more than 4GB (though unfortunately, many are pushing up for
 the wrong reasons), but then a whole category of them would greatly
 benefit from expanded RAM availability.
 
 Andrei
I've written at least one application (for my thesis) which ended up using all of my 4GB RAM and 6GB swap. Of course, that was at least partially because I was writing it in Haskell and hadn't taken its laziness into proper account. It was reading in the hundreds of files before it actually calculated anything since it didn't write anything to disk until it was done processing (which it naturally never did since it ran out of memory). Fixing it to write to disk after processing each file (thereby forcing it to actually process each file before reading in the next one) made it only take 3+ GB of RAM. But I was doing a lot of string processing, and it wasn't at all a typical app. Haskell was a poor match for the problem as it turns out, but given D's current lack of 64-bit support, it would have been too - though for very different reasons. Still, you work with what you've got. We'll get 64-bit support eventually. At least I can say that I wrote a program that used up all of my memory and swap doing something useful (or trying anyway). I don't think that many people can say that - especially when it was around 10GB total. That project definitely led me to upgrade my RAM. But anywho, D is great. And for the most part, 64-bit isn't necessary. But it will be nice when we do get it. - Jonathan M Davis
May 24 2010