
digitalmars.D.learn - Linux: How to statically link against system libs?

reply "Nick Sabalausky" <a a.a> writes:
On Linux, how do I get DMD to statically link against the necessary system 
libs like libc?

Someone suggested trying -L-static, but that just gives me this:

/usr/bin/ld: cannot find -lgcc_s
collect2: ld returned 1 exit status
--- errorlevel 1
Apr 26 2011
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 On Linux, how do I get DMD to statically link against the necessary system
 libs like libc?
 
 Someone suggested trying -L-static, but that just gives me this:
 
 /usr/bin/ld: cannot find -lgcc_s
 collect2: ld returned 1 exit status
 --- errorlevel 1
http://d.puremagic.com/issues/show_bug.cgi?id=4376

You're stuck linking manually with gcc, which means copying all of the appropriate linker flags from dmd.conf. It also means that you don't get symbols in your stack traces for some reason (I've never been able to figure out why). It does work, though. Still, I'd love it if this bug could be fixed; I hate dynamically linking anything if I can help it.

- Jonathan M Davis
Apr 26 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 26 Apr 2011 14:49:56 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Linux, how do I get DMD to statically link against the necessary  
 system
 libs like libc?

 Someone suggested trying -L-static, but that just gives me this:

 /usr/bin/ld: cannot find -lgcc_s
 collect2: ld returned 1 exit status
 --- errorlevel 1
http://d.puremagic.com/issues/show_bug.cgi?id=4376 You're stuck, linking manually with gcc, which means copying all of the appropriate linker flags from dmd.conf. It also means that you don't get symbols in your stack traces for some reason (I've never been able to figure out why). It does work though. Still, I'd love it if this bug could be fixed. I hate dynamically linking anything if I can help it.
It's a bad idea to statically link libc. This is from the glibc maintainer:

http://www.akkadia.org/drepper/no_static_linking.html

I don't think static linking is officially supported any more.

-Steve
Apr 26 2011
next sibling parent reply Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 26/04/2011 20:10, Steven Schveighoffer wrote:
...

 I don't think static linking is officially supported any more.
It is, but only for OS binaries. Some systems, in fact, like AIX and Windows, do not support it at all. Statically linking libc is the equivalent of linking to kernel32.dll statically, which is effectively what you are asking for; remember that libc calls into the kernel much as kernel32.dll does, so it's more than just a C library, despite its name.
Apr 26 2011
parent reply Alexander <aldem+dmars nk7.net> writes:
On 26.04.2011 21:44, Spacen Jasset wrote:

 On 26/04/2011 20:10, Steven Schveighoffer wrote:
 ...
 I don't think static linking is officially supported any more.
It is, but only for OS binaries. Some systems, in fact, like AIX and Windows, do not support it at all.
Windows does - there are static versions of the C runtime and some other libraries. Linux libc is not really very special; it only provides a nice interface to syscalls.

Actually, static linking is useful sometimes, and it is not always possible to link dynamically. Example: Fedora 14 and CentOS 5.5 shared libs are incompatible, though static binaries run on both - flawlessly. The main problem is that some dependencies are not satisfied through a compatibility layer, as not all libs provide one (like libxml2 or some others), so building on CentOS wouldn't really help (unless everything but libc is linked statically anyway). Linux is now in "DLL hell" more than Windows ever was...

And, by the way, D can be used for OS binaries as well ;) Though, I would look in the direction of uClibc - smaller footprint and almost the same functionality, and it can safely be linked statically.

/Alexander
Apr 28 2011
parent reply Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 29/04/2011 01:46, Alexander wrote:
 On 26.04.2011 21:44, Spacen Jasset wrote:

 On 26/04/2011 20:10, Steven Schveighoffer wrote:
 ...
 I don't think static linking is officially supported any more.
It is, but only for OS binaries. Some systems, in fact, like AIX and Windows, do not support it at all.
Windows does - there are static versions of the C runtime and some others. Linux libc is not really very special; it only provides a nice interface to syscalls.
That's because on Windows the syscalls and the C library are in different libraries. You still *cannot* link statically to kernel32.dll. That's the difference. Linux glibc contains the C library functions *and* the syscalls, which is the bit that causes the problems. msvcrt.dll and msvcrt.lib don't have any syscalls in them; they call through kernel32.dll dynamically.

The answer on Linux is therefore the same as on Windows: do not statically link anything that calls the kernel, which in this case is glibc.

Arguably, the ability to link statically to libc should be removed from the compiler, as it seems to confuse everyone as to what is actually happening and what the outcome will be.
    Actually, static linking is useful sometimes, and it is not always possible
to link dynamically.

    Example: Fedora 14 and CentOS 5.5 shared libs are incompatible, though
static binaries run on both - flawlessly. The main problem is that some
dependencies are not satisfied through compatibility layer, as not all libs
provide that (like libxml2 or
 some other), thus, building on CentOS wouldn't really help (unless everything
but libc will be linked statically anyway). Linux is now in "DLL hell" more
than any Windows ever...

    And, by the way, D can be used for OS binaries as well ;) Though, I would
look into direction of uClibc - smaller footprint and almost the same
functionality, and it can be safely linked statically.

 /Alexander
I don't know about any of that. All I can say is that our software was built on CentOS 3 and it runs on the platforms supported by the company I was working for at the time, which are Red Hat 3, 4, 5+ and SuSE 9.something+. That's 32-bit and 64-bit, by the way. It also runs on Ubuntu (since about version 6-ish, up to 10, and I dare say beyond) and Fedora, though I reckon it hasn't been tried recently on Fedora 14, as that's not a supported platform. This all happens from one binary compiled on CentOS 3.

There was a bug that I had to fix, a crash on something like Red Hat 4, because at the time libc was being statically linked. I can't remember the syscall that caused the problem now; I have a feeling it was BSD-sockets related.

libc is designed to be forward compatible only, if you dynamically link it. The symbols within are versioned and the correct ones are bound at runtime.

I pipe up about all this because I've been through it all and did not understand at the time what was wrong with static linking, but once you see the difference between POSIX-type platforms and Windows, and what libc *actually is*, it all makes sense.
Apr 29 2011
next sibling parent reply Alexander <aldem+dmars nk7.net> writes:
On 29.04.2011 11:41, Spacen Jasset wrote:

 You still *cannot* link statically to kernel32.dll. That's the difference.
Linux glibc contains the C library functions *and* the syscalls, which is the
bit that causes the problems.
But at least I know that, no matter where I am, as long as I am using kernel32 only (no more dependencies), it will work on *any* Windows system (obviously, keeping backward compatibility in mind - something compiled on WinXP will work on all later versions) - which is not the case with Linux/glibc, unfortunately.
 msvcrt.dll and msvcrt.lib don't have any syscalls in them. they call though
kernel32.dll dynamically.
Actually, they do. Calling kernel32 is like making a syscall; that's the basis of the Win32 API, which is the equivalent of a Linux syscall.
 The answer therefore on linux as it is on windows: do not to statically link
anything that calls the kernel, which in this case is glibc
As alternatives exist (like the mentioned uClibc) which can be linked statically, this is more policy than technical limitation. glibc can still be linked statically, and it will work - but it becomes *very* dependent on other stuff which is (sometimes) dynamic only; that's why the developers do not want to support static versions.
 I don't know about any of that. All I say is software was built on Centos 3
and it runs on the then company I was working for supported platforms.
That's the keyword - "supported platform". In the Windows world, any version is "supported"; in the Linux world, OTOH, there are dozens of platforms sharing the same kernel but hardly compatible with each other.
 libc is designed to be forward compatible only, if you dynamically link it.
The symbols within are versioned and the correct ones bound at runtime.
Probably you mean "backward compatible"? Forward compatibility means that all previous versions will accept applications compiled with newer versions, which is obviously not the case with glibc.

So I'd say that unless and until *any* binary compiled on *any* Linux distribution (same or newer kernel version) is accepted by all other Linux systems (I bet it will never happen, though), it is a bit too early to rule out static linking.

/Alexander
Apr 29 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Alexander wrote:

 On 29.04.2011 11:41, Spacen Jasset wrote:
 
 You still *cannot* link statically to kernel32.dll. That's the
 difference. Linux glibc contains the C library functions *and* the
 syscalls, which is the bit that causes the problems.
But at least I know, that no matter where I am, as long as I am using kernel32 only (no more dependencies), it will work on *any* Windows system (obviously, keeping backward compatibility in mind - something compiled on WinXP will work on all later versions) - which is not he case of Linux/glibc, unfortunately.
 msvcrt.dll and msvcrt.lib don't have any syscalls in them. they call
 though kernel32.dll dynamically.
Actually, they do. Calling kernel32 is like making a syscall, that's the base of Win32 API, which is equivalent of Linux syscall.
A syscall is generally understood to be a call into the kernel for doing something that can't be done with user-level privileges. So a call is either a syscall or it isn't, and none of kernel32's functions are. There are even functions in kernel32 which do not make a syscall at all.
May 03 2011
parent Alexander <aldem+dmars nk7.net> writes:
On 03.05.2011 16:29, Lutger Blijdestijn wrote:

 A syscall is generally understood to be a call into the kernel for doing
something that can't be done with user level privileges.
Not really. Syscalls are the interface from user space to the OS kernel, and obviously they can be made with user-level privileges; otherwise nothing would be possible :) From the Linux syscalls manpage: "The system call is the fundamental interface between an application and the Linux kernel."
 So a call is either a syscall or it isn't, and none of kernel32 are.
kernel32 provides the Win32 API, which is exactly (from the application's point of view) what syscalls in Linux are. In turn, kernel32 interfaces to ntdll, which is a near-direct interface to the kernel.
 There are even functions in kernel32 which do not make a syscall.
Sure, not all Win32 API functions require syscalls. But again: "In lots of cases, KERNEL32 APIs are just wrappers to NTDLL APIs."

Unlike kernel32, though, libc provides a direct interface to syscalls - like open(), socket(), etc. Yes, those are not really libc functions; those are syscalls (wrapped a bit to follow the C calling convention). Probably nowadays libc has wrappers around syscalls, checking arguments and so on, but those are not necessary - the difference is only in the calling convention.

/Alexander
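P.S. To make the wrapper/syscall point concrete in D terms, here is a minimal sketch. My assumptions: 64-bit Linux (where write is syscall number 1), glibc's variadic syscall() helper, and druntime's core.sys.posix.unistd declaration of the write() wrapper:

import core.stdc.config : c_long;
import core.sys.posix.unistd : write;       // the libc wrapper

// libc's raw, variadic syscall(2) entry point, declared by hand here.
extern (C) c_long syscall(c_long number, ...);

void main()
{
    enum SYS_write = 1;                     // assumption: x86-64 Linux syscall number

    string a = "via the libc wrapper\n";
    write(1, a.ptr, a.length);              // thin wrapper: calling convention + errno

    string b = "via a raw syscall\n";
    syscall(SYS_write, 1, b.ptr, b.length); // same kernel entry point, no wrapper
}

Both lines end up at the same place in the kernel; the wrapper just adds the C calling convention and errno handling on top.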
May 03 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
news:ipe1ar$1e1f$1 digitalmars.com...
 I don't know about any of that. All I say is software was built on Centos 
 3 and it runs on the then company I was working for supported platforms.

 Which is redhat 3,4,5 + and Suse 9.something + That is 32bit and 64 bit by 
 the way too.

 It also runs on ubuntu (since about version 6ish +, upto 10, and I dare 
 say beyond) and fedora, but rekon it hasn't been tried recently on Fedora 
 14 as it's not a supported platform. This all happens from one binary 
 compiled on Centos 3


 There was a bug, that I had to fix, and that was a crash on something like 
 Redhat 4, because at the time libc was being statically linked. I can't 
 remember the syscall that caused problem now, I have a feeling it was BSD 
 sockets related.

 libc is designed to be forward compatible only, if you dynamically link 
 it. The symbols within are versioned and the correct ones bound at 
 runtime.

 I pipe up about all this because I've been though it all, and did not 
 understand at the time what was wrong with static linking, but then you 
 see the difference between Posix type platforms and windows, and what libc 
 *actually is*, then it all makes sense.
I'm having a ridiculously hard time trying to find a CentOS 3 installation disc image (or any other version before 5.6). This is the closest I've been able to find: http://mirrors.cmich.edu/centos/3.1/ But I don't see anything in any of those directories that looks remotely like a disc image, even though the chart on this page seems to say that mirror should have them: http://www.centos.org/modules/tinycontent/index.php?id=30

On this page... http://isoredirect.centos.org/centos/4/isos/i386/ ...it *says* that I can "download the .torrent files provided", but there are no torrent files listed there or at the mirrors, and the only other promising-looking link that page has is here: http://packages.sw.be/bittorrent/ Which, despite the url, doesn't even have any torrents at all, just a bunch of rpms. How's a torrent client supposed to use an rpm?

I seem to remember having pretty much the same problem about a year ago when I blew a full day trying to get ahold of a copy of Debian. Eventually I gave up trying to find it and just went back to Ubuntu.

I don't have a problem using VirtualBox for this if I need to (assuming I can actually get ahold of an appropriate OS). I actually quite like VirtualBox; I've been using it a lot the past year, and I have plenty of disk space. But if all I need to do is get my app to link against an older version of libc, shouldn't there be a way to do that right there on my Kubuntu 10.04 system?

I had been shying away from Alexander's suggestion of uClibc because uClibc's website says it sacrifices speed for size, and because it looks like a royal pain to get set up. But if I can't get this CentOS solution to work, I may give it a try anyway.

Also, I've found on Google that the message I got ("linux.so.2: bad ELF interpreter: No such file or directory") is known to sometimes occur when running a 32-bit binary on a 64-bit system that doesn't have the 32-bit libs installed. In case that turns out to be my real problem and my webhosts are unwilling to install the 32-bit libs, can DMD still output 64-bit binaries when building on a 32-bit system? My guess would be "no", since linux seems to enjoy crapping out when trying to compile for anything but the local system.
Apr 29 2011
next sibling parent reply Alexander <aldem+dmars nk7.net> writes:
On 29.04.2011 22:02, Nick Sabalausky wrote:

 I had been shying away from Alexander's suggestion of uClibc because 
 uClibc's website says it sacrifices speed for size, and because it looks 
 like a royal pain to get set up.
I am not sure that you would really notice the speed difference - but it depends on your application. If you only use it for file/socket I/O, you will hardly notice anything; everything else is in Phobos and thus independent of libc.

Setting it up, in my experience, is quite easy - but, to be honest, I haven't tried it recently; my last experience with uClibc was a few years ago. Perhaps I should try it again...

/Alexander
Apr 29 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Alexander" <aldem+dmars nk7.net> wrote in message 
news:ipfhnu$194o$1 digitalmars.com...
 On 29.04.2011 22:02, Nick Sabalausky wrote:

 I had been shying away from Alexander's suggestion of uClibc because
 uClibc's website says it sacrifices speed for size, and because it looks
 like a royal pain to get set up.
I am not sure that you would really notice speed difference - but it depends on your application. If you only use it for file/socket I/O, you will hardly notice anything, everything else is in Phobos, thus, independent on libc.
Yea, you're probably right.
  Setting it up, in my experience, is quite easy - but, to be honest, I 
 didn't try it recently, my last experience with uClibc was few years ago. 
 Perhaps, I should try it again...
According to the FAQ, they used to support a way that you could use your existing GCC toolchain to build uClibc apps, but that turned out to have fundamental problems, so now the only way to do it is with a whole separate specially-built-for-uClibc version of GCC. And from what I can tell, linux (and just "the unix way" in general) doesn't seem to be very good at handling multiple forks of the same program installed on the same system (unless you use a heretical portable-installation approach like DMD thankfully uses).
Apr 29 2011
parent Alexander <aldem+dmars nk7.net> writes:
I see... Well, then, perhaps, you may get lucky with this tool:
http://statifier.sourceforge.net/

/Alexander
Apr 30 2011
prev sibling next sibling parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 29/04/2011 21:02, Nick Sabalausky wrote:
...

 I'm having a rediculously hard time trying to find a CentOS 3 installation
 disc image (or any other version before 5.6). This is the closest I've been
 able to find:
...

It seems that the older versions that are no longer supported have, generally speaking, been removed for download. So I guess you would have to use the latest supported version, or find the version you need somewhere. I have some old .iso images, I think.

It would be quite handy to know what the oldest platform you need to support is; then you can "simply" get that distribution. Anyway, for Red Hat, that appears to be version 4: http://ftp.heanet.ie/pub/centos/4.9/isos/

Since previous versions are no longer supported, no one *should* really be using them, as there are no security updates any more. There goes the logic, anyway. I will have a look for an older .iso image that may be useful. What you don't want to do is download something that has too new a glibc.
May 01 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ipf5pg$j80$1 digitalmars.com...
 "Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
 news:ipe1ar$1e1f$1 digitalmars.com...
...
Aggghhhh!!! God dammit, I officially fucking hate linux now... (not that I'm a win, mac or bsd fan, but whatever...)

I temporarily gave up trying to actually get ahold of an old distro, so I tried the other angles (not counting just simply *wishing* it was like win and I could just copy the damn binary over to another linux box...nooo, that would be too simple for a unix-style system):

I got my web host to switch me to a server that has 32-bit libs installed (a pain in and of itself because I had to coordinate with a client to find a convenient downtime, and then I ended up needing to change my domain's DNS entries, so now my whole domain's down for a couple days)...And it made no difference. So I guess in my particular case it wasn't a 32-bit/64-bit issue at all (or maybe there still would have been that problem too, I dunno).

So I went to try uClibc:

I started my Linux box...and it decides to hang mid-startup. So I reboot, and at least this time the dumb thing finishes booting (I had problems with linux randomly breaking for no apparent reason ten years ago with Mandrake and Red Hat. I can't believe it's still happening now).

Anyway, at the uClibc site, I saw the "simple steps" here: http://uclibc.org/toolchains.html and thought "Uhh, hell no, not if I don't have to" and went to the link for the pre-built version instead. The link was broken. Then the page says those are really old versions anyway. Great :/

So I go through the steps: I get to the part where I download buildroot. Copy/paste the link over to my linux box...and discover that Synergy+ has suddenly decided it no longer feels like offering the "shared clipboard" feature that always worked before.

Ok, so I type the URL into my linux box manually, download buildroot, unpack it...so far so good...and follow the instruction to run "make menuconfig"...BARF. It fails with some error about ncurses being missing, and that I should get ncurses-devel. "sudo apt-get install ncurses-devel": Can't find package. "sudo apt-get install ncurses": Can't find package. "sudo apt-get install fuck-shit-cock": Can't find package.

Google "ncurses deb package". Actually found it. Download. Run...You ready for this? Here's the message: "Error: A later version is already installed." SERIOUSLY?!

This is the point where I would normally say "fuck this shit", but the thought of continuing to use PHP (even if it is via Haxe) is enough to keep me bashing my head against this wall. Next stop: See if I can get ahold of *some* version of CentOS and see if using that in a VM will manage to work. (And rip Kubuntu off my Linux box and see if I can replace it with Debian+XFCE. How is it possible that GNOME and KDE were both fairly ok ten years ago, at least as far as I can remember, but the latest versions of both are complete shit? And then there's that iOS garbage that Ubuntu is moving to now (the one main thing I've always disliked about Ubuntu is their incomprehensible Apple-envy, which only seems to be increasing). And fuck, the latest KDE actually makes the Win7 UI seem good (at least the Win7 UI actually *works* and has some semblance of consistency, even as obnoxious as it is), and I could have sworn that KDE never used to be so completely broken before. Or broken at all, for that matter. Which is too bad, because Dolphin actually shows some promise...at least when it isn't doing the random-horizontal-scrolling-for-no-apparent-reason dance.)
May 06 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iq2g72$ngp$1 digitalmars.com...
...
Yay! I've just had some success! I managed to find this: http://vault.centos.org/ Which has all the CentOS ISOs. (You'd think I would have had an easier time finding that URL...)

I downloaded 4.2 (picked pretty much at random), installed it in VirtualBox, compiled a trivial test C program with the included GCC, uploaded that to the server, and it worked! :)

Next step: Install DMD on this CentOS VM and try for a D cgi... And then later, I may try 4.7 and see if that'll work for me too. And I still have another web host I need to get CGI working on (although that one has some pretty bad support, so I'm a little nervous about that). But it's looking good so far. Fingers crossed...

It'd be nice to not have to use a VM to compile, of course. But as long as I have some way to do my server-side web stuff in D and *completely* sidestep the entire PHP runtime, it'll certainly still be well worth it.
May 08 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iq60qt$pm0$1 digitalmars.com...
 I downloaded 4.2 (picked pretty much at random), installed it in 
 VirtualBox, compiled a trivial test C program in the included GCC, 
 uploaded that to the server, and it worked! :)
Actually, I did have to remove the HTTP status code output from my little hello world cgi test in order for Apache to not throw up a 500. That kind of surprised me, actually. But maybe it just means it's been far too long since I've done CGI... *shrug*
May 08 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 Actually, I did have to remove the HTTP status code output from my
 little hello world cgi test in forder for Apache to not throw up a
 500.
HTTP status is normally done with a Status: header in cgi. (Actually writing the line works too but only with certain settings.)

writefln("Status: 200 OK"); // note: optional; 200 OK is assumed
writefln("Content-Type: text/plain");
writefln(); // blank line ends headers
writefln("Hello, world!");

---

I just saw this thread, but when I do my cgi apps, I actually recompile them right on the live server. If that's an option for you, it's a bit of a pain to set up, but it's less painful than dealing with Linux's generally retarded library/bits situation.
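Going back to the header example: as a complete program it's tiny. A minimal sketch, assuming stock dmd and Phobos' std.stdio (using writeln instead of writefln, so no format strings are involved):

import std.stdio;

void main()
{
    // CGI output: headers first, then a blank line, then the body.
    writeln("Status: 200 OK");            // optional; 200 OK is assumed
    writeln("Content-Type: text/plain");
    writeln();                            // blank line ends the headers
    writeln("Hello, world!");
}

Compile that, drop it in cgi-bin, and you've got a hello-world response.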
May 08 2011
parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:iq6osh$25di$1 digitalmars.com...
 Nick Sabalausky wrote:
 Actually, I did have to remove the HTTP status code output from my
 little hello world cgi test in forder for Apache to not throw up a
 500.
HTTP status is normally done with a Status: header in cgi. (Actually writing the line works too but only with certain settings.)

writefln("Status: 200 OK"); // note: optional; 200 OK is assumed
writefln("Content-Type: text/plain");
writefln(); // blank line ends headers
writefln("Hello, world!");
Ahh, sweet. Didn't know about that "Status:" thing. Or maybe I did and forgot... ;)

BTW, another thing I just learned (posting here in case anyone reads this and has similar problems) is that on some servers, like mine, the permissions on the executable have to be set to *not* be writable by group or world, or Apache will just throw a 500. Apparently there's a whole list of conditions that have to be met here: http://httpd.apache.org/docs/current/suexec.html

When I first compiled the app on my local CentOS VM, gcc set the binary's permissions to 775, which were preserved by ftp, and then I could run it on the server through ssh, but not through Apache/HTTP. Changing them to 755 fixed it.
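In other words, the fix is just clearing the group/world write bits (the same thing "chmod 755" does). If you'd rather do it from D, say as part of an upload script, here's a little sketch (the file name and the whole idea of doing it from D are just my example, not anything the server requires):

import std.conv : octal;
import std.file : getAttributes, setAttributes;

void main()
{
    // suexec refuses to run CGI binaries that are group- or world-writable,
    // so drop those bits (e.g. 775 -> 755).
    auto path = "hello.cgi";               // hypothetical file name
    auto mode = getAttributes(path);       // current POSIX mode bits
    setAttributes(path, mode & ~octal!22); // clear g+w and o+w
}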
 I just saw this thread, but when I do my cgi apps, I actually
 recompile them right on the live server. If that's an option
 for you, it's a bit of a pain to set up, but it's less painful
 than dealing with Linux's generally retarded library/bits situation.
Yea, that would definitely be a better way to go. Unfortunately I normally have to deal with shared hosts, so the stuff I can actually do on the server is usually very limited. On this particular server, I know I can't run gcc. (At least not the system's gcc, anyway. Maybe there's some way to have a portable install of gcc? But knowing gcc, if there is a way, I'm sure it would be a royal pain.) DMD *is* a portable install, which is nice, but on linux it still needs gcc to link.

And then on the other server I need to use, I don't think ssh is even available.
May 08 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iq6159$q3q$1 digitalmars.com...
 "Nick Sabalausky" <a a.a> wrote in message 
 news:iq60qt$pm0$1 digitalmars.com...
 I downloaded 4.2 (picked pretty much at random), installed it in 
 VirtualBox, compiled a trivial test C program in the included GCC, 
 uploaded that to the server, and it worked! :)
Actually, I did have to remove the HTTP status code output from my little hello world cgi test in forder for Apache to not throw up a 500. That kind of surprised me, actually. But maybe it just means it's been far too long since I've done CGI... *shrug*
If anyone's curious, I did get a basic D cgi app to work, too (i.e., compiling on CentOS 4.2 in a VM and uploading to my shared host server), but I had to:

1. Recompile DMD (because the precompiled DMD would immediately quit with a "Floating point exception" message, even if called with zero args).

2. Remove "-L--no-warn-search-mismatch" (because otherwise, when it tried to link, the GCC in CentOS 4.2 would error out and complain that wasn't a valid switch).

As a little bonus, the C test app I compiled in the CentOS 4.2 VM also ran fine on my physical Kubuntu 10.04 box, although the D one segfaulted. No big deal though; it's easy enough to compile on that box.

The only problem I'm having now (aside from the fact that I haven't attempted to deal with the other shared host server yet - the debian one from the horrible ipower company) is that CentOS 4.2 (or maybe it's just KDE) runs so slow in a VM that it frequently doesn't recognize when I let go of a key, and then it goes off doing crazy shit. :/ Or it'll swap my key presses if I type too fast. At one point I had a hell of a time just getting it to let me type in "cd dmd" correctly. (I don't think it's entirely because of my computer though; XP runs just fine in a VM for me, even with only 192MB RAM allocated to it instead of the 512MB given to CentOS 4.2.) So I'm going to try putting CentOS 4.9 in a VM and replacing KDE with XFCE. And I'll also have VirtualBox enable 3D accel and see if maybe then the "VirtualBox Guest Additions" package will be able to use OpenGL.
May 09 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 2. Remove "-L--no-warn-search-mismatch"
Note for readers: this is in dmd.conf and is a relatively new thing. My dmd 2.051 and older installs always worked, but with the 2.053 beta I just played with, I had to make this change as well as recompile dmd for stupid centos to work with it.
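For anyone hunting for it, that switch lives on the DFLAGS line of dmd.conf. On my install the entry looks roughly like this - the include and lib paths vary with dmd version and install location, so treat it as an approximation rather than something to copy verbatim:

[Environment]
DFLAGS=-I%@P%/../../src/phobos -I%@P%/../../src/druntime/import -L-L%@P%/../lib32 -L--no-warn-search-mismatch

Deleting just the -L--no-warn-search-mismatch token from that line is all the "remove" step amounts to.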
  CentOS 4.2 (or maybe it's just KDE) runs so slow in a VM
KDE sucks. The best thing to do is probably to not bother with a gui in the vm at all, as well as to not use the virtual machine screen - they are slow as sin.

Instead, run sshd on the linux vm, make inbound networking work to port 22 (however you do that in virtual box) and then access it through PuTTY or something. That way, you bypass the slow ass VM graphics entirely.

(Similarly, if you virtualize Windows, Remote Desktop into the VM works a lot better than the vm's own graphics, in my experience.)
May 09 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:iqa7bi$1djh$1 digitalmars.com...
 Nick Sabalausky wrote:
 2. Remove "-L--no-warn-search-mismatch"
Note for readers: this is in dmd.conf and is a relatively new thing. My dmd 2.051 and older installs always worked, but with the 2.053 beta I just played with, had to make this change as well as recompile dmd for stupid centos to work with it.
Do we know what that switch is for? Just curious.
  CentOS 4.2 (or maybe it's just KDE) runs so slow in a VM
KDE sucks. The best thing to do is probably to not bother with a gui in the vm at all as well as to not use the virtual machine screen - they are slow as sin.
Actually I just realized it was Gnome. (I don't know how I could have mixed those two up...)
 Instead, run sshd on the linux vm, make inbound networking work
 to port 22 (however you do that in virtual box) and then access
 it through PuTTY or something.

 That way, you bypass the slow ass VM graphics entirely.


 (similarly, if you virtualize Windows, Remote Desktop into the
 VM works a lot better than the vm's own graphics in my experience).
XP seems to work fine for me in VirtualBox (And my CPU doesn't even have hardware virtualization support). But I may go ahead and try something like you're suggesting.
May 09 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 Do we know what that switch is for?
--no-warn-search-mismatch
    Normally ld will give a warning if it finds an incompatible library during a library search. This option silences the warning.

In the DMD changelog, there was a note about making the linker a little less noisy. I assume that's the reasoning behind the change.
 Actually I just realized it was Gnome.
Gnome sucks too! (I actually run a mostly custom linux gui. Customly hacked up window manager, custom theme, custom taskbar, hacked up terminals, hacked up IM client.... my own linux install is one of the very few on the planet that doesn't suck ass. It still sucks, mind you, just not ass anymore.)
 XP seems to work fine for me in VirtualBox
Yeah, it's not bad on my comp either, but I always find some annoying lag as menus pop up and things like that. Other benefits of remote desktop though are easier sound and file/clipboard sharing without installing anything in the guest. Whatever floats your boat, but after I tried out the remote desktop strategy I was very pleased.
May 09 2011
next sibling parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Mon, May 9, 2011 at 11:01 PM, Adam D. Ruppe <destructionator gmail.com>wrote:

 Nick Sabalausky wrote:
 Do we know what that switch is for?
--no-warn-search-mismatch Normally ld will give a warning if it finds an incompatible library during a library search. This option silences the warning. In the DMD changelog, there was a note about making the linker a little less noisy. I assume that's the reasoning behind the change.
 Actually I just realized it was Gnome.
Gnome sucks too! (I actually run a mostly custom linux gui. Customly hacked up window manager, custom theme, custom taskbar, hacked up terminals, hacked up IM client.... my own linux install is one of the very few on the planet that doesn't suck ass. It still sucks, mind you, just not ass anymore.)
 XP seems to work fine for me in VirtualBox
Yeah, it's not bad on my comp either, but I always find some annoying lag as menus pop up and things like that. Other benefits of remote desktop though are easier sound and file/clipboard sharing without installing anything in the guest. Whatever floats your boat, but after I tried out the remote desktop strategy I was very pleased.
I run Arch/Gnome in Virtualbox and get reasonable performance. There's some menu lag, but it's not significant enough to bother me too much. It may just be that my standards are lower :D Virtualbox's guest additions and accelerated video definitely made a difference for me, although apparently they don't yet support enough OpenGL calls to allow Gnome 3's shell to run in a VM.
May 09 2011
parent "Nick Sabalausky" <a a.a> writes:
"Andrew Wiley" <wiley.andrew.j gmail.com> wrote in message 
news:mailman.96.1305003352.14074.digitalmars-d-learn puremagic.com...
 I run Arch/Gnome in Virtualbox and get reasonable performance. There's 
 some
 menu lag, but it's not significant enough to bother me too much. It may 
 just
 be that my standards are lower :D
Nah, more likely your CPU just runs circles around mine. I'm pretty well known around here for running on "antique" hardware ;)
 Virtualbox's guest additions and accelerated video definitely made a
 difference for me, although apparently they don't yet support enough 
 OpenGL
 calls to allow Gnome 3's shell to run in a VM.
Yea, Virtualbox's guest additions are freaking awesome :)
May 09 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:iqadbi$m16$1 digitalmars.com...
 Nick Sabalausky wrote:
 Do we know what that switch is for?
--no-warn-search-mismatch Normally ld will give a warning if it finds an incompatible library during a library search. This option silences the warning. In the DMD changelog, there was a note about making the linker a little less noisy. I assume that's the reasoning behind the change.
Well I feel a little better about that, then. I was (perhaps irrationally) worried it might be something that was required by some major thing in dmd/druntime/phobos that my "hello world" just happened to not use.
 Actually I just realized it was Gnome.
Gnome sucks too! (I actually run a mostly custom linux gui. Customly hacked up window manager, custom theme, custom taskbar, hacked up terminals, hacked up IM client.... my own linux install is one of the very few on the planet that doesn't suck ass. It still sucks, mind you, just not ass anymore.)
Heh. Hardcore :)

I've actually come close to making my own IM client out of frustration, but libpidgin was a pain, and then everyone I knew abandoned IM so they could shell out money for SMS (go figure). I finally stopped even bothering to run Pidgin a few weeks ago, since it had been about a year since anyone on my list had even been on.

But yea. When I was on Ubuntu 9.04 (or was it 9.06?) I was getting tired of a few annoying little Gnome quirks here and there. It was pretty zippy with the NVidia drivers and hardware accel though, and I got a real kick out of the...ummm...what my brother called "jelly windows". I'm not normally an eye-candy guy (at least not these days - I used to be), but I never got tired of that feature :)

But when I tried upgrading to 10.04, the jelly windows suddenly worked like crap (ie, ultra-slow) no matter what I did. Plus, again, I was getting tired of some other weird Gnome quirks, so I ended up going with Kubuntu 10.04 instead of Ubuntu 10.04 (and gave up on my beloved jelly windows entirely :( )

Unfortunately, KDE 4 has turned out to be even worse. And I know it had a notably botched introduction and then got better, but I have 4.5 running on my machine and it's still by far the buggiest, most inconsistent window manager I've ever used. I think I'd actually be happier with CDE: it had an incredibly bizarre UI, but at least it seemed to run smoothly and consistently once you learned how to work it. Compared to KDE 4, anyway.

Then there's Gnome 3, which I've never used, but it sounds terrible. And then Ubuntu's upcoming Unity (I think that's what it's called?) just looks like iOS/Android to me, and I can't stand those devices. (I really, really miss PalmOS. It's the only OS in existence I can honestly say I genuinely like. Not perfect (especially with Graffiti 2 replacing Graffiti 1), but overall very good, and the upcoming versions were looking great before they killed it off in favor of that WebOS junk. Boy did I get offtopic there...)

So anyway, I'm getting ready to try going Debian+XFCE (for my actual physical linux box). I had tried Xubuntu about a couple years ago, but probably the biggest problem I had with it was that I still ended up needing to use at least a few Gnome and KDE apps, so all that bloat just got loaded in anyway and also made the desktop that much more inconsistent. Meh. Oh well. Maybe I'll be happier this time around. And if not, maybe I'll try Trinity DE. It's got to at least be better than KDE 4. Lol, or maybe I'll go back to blackbox ;)

Unfortunately, I don't have the time or patience to do any heavy customizing/configuring. If I did, I might not have been driven away from Linux when I first tried it ten years ago. :/
 XP seems to work fine for me in VirtualBox
Yeah, it's not bad on my comp either, but I always find some annoying lag as menus pop up and things like that. Other benefits of remote desktop though are easier sound and file/clipboard sharing without installing anything in the guest. Whatever floats your boat, but after I tried out the remote desktop strategy I was very pleased.
I've been hesitant to bother figuring out how to set up such things with a VM, since the built-in GUI seemed to at least basically work. But yea, maybe I'll give those things a shot. After all, I did at least manage to figure out how to work VirtualBox's shared folders :)
May 09 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Nick Sabalausky wrote:
 I've actually come close to making my own IM client out of
 frustration, but libpidgin was a pain
Hah, I started one as well, with only aim support. It actually kinda works (and it's written in D), but I keep changing my D gui stuff around (this program is one of the D Windowing System test apps, which changes, then stagnates, then changes, then is tabled, then changes...) so much that it's not really usable.

I basically settled for gaim 1.5 with a few modifications. Annoyingly, bugs have crept in as the stupid gtk or glib updates and gaim doesn't, but I know its limits, so if it crashes when I close a tab, I just don't close tabs like that.

Gaim is one of those programs that went to pure crap with a new version, though. Pidgin, gaim 2.0, is just utterly unusable to me.
  Lol, or maybe I'll go back to blackbox ;)
Blackbox rocks. It's what I used as my starting point. Actually, my custom WM is a fairly short diff from blackbox. Its biggest shortcoming is a gift as well as a curse - it leaves the door open to put in a customized one!
 Unfortunately, I don't have the time or patience to do any heavy
 customizing/configuring. If I did, I might not have been driven
 away from Linux when I first tried it ten years ago.
What happened with me is I kinda locked myself into it. Switching away would be an even bigger hassle due to moving files and such, so I stuck with it, slowly excising the worst of the suck. The thing that still pisses me off beyond belief is something I can't fix myself - interoperability, the topic of this thread. (see I'm still on topic!)
May 10 2011
parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:iqbjs4$2vrm$1 digitalmars.com...
 Nick Sabalausky wrote:
 I've actually come close to making my own IM client out of
 frustration, but libpidgin was a pain
Hah, I started one as well, with only aim support. It actually kinda works (and written in D), but I keep changing my D gui stuff around (this program is one of the D Windowing System test apps, which changes then stagnates then changes then is tabled then changes...) that it's not really usable. I basically settled for gaim 1.5 with a few modifications. Annoyingly, bugs have crept in as the stupid gtk or glib updates and gaim doesn't, but I know it's limits so if it crashes when I close a tab, I just don't close tabs like that. Gaim is one of those programs that went to pure crap with a new version though. Pidgin, gaim 2.0, is just utterly unusable to me.
I found it basically usable (back when there were actually other people on it ;) ), but its inability to use the default away message, or even just log in, without it sitting there first doing nothing but waiting for a custom message that I'd never type in, annoyed the hell out of me. There were a few other annoyances, too, like how the table in the "saved statuses" screen sets the vertical-align of each cell to middle instead of top. (I hate it when web pages do it, and now a desktop app is doing it, too?) It might be better for me though, because I run it on Windows, so I rarely have GTK getting updated behind Pidgin's back.

I could come up with a huge list of programs that just get worse with newer releases... iTunes, FireFox, pretty much anything from Adobe, Windows (post-XP anyway), Nero, Roxio, KDE3->KDE4, Azureus->Vuze, McAfee (it's hard to imagine there was ever a time it wasn't worse than the disease), Visual Studio, id Software FPSes after Doom 2 (although they didn't actually reach "bad" until Q3A), all just off the top of my head.
  Lol, or maybe I'll go back to blackbox ;)
Blackbox rocks. It's what I used as my starting point. Actually, my custom WM is a fairly short diff from blackbox. It's biggest shortcoming, but that's a gift as well as a curse - it leaves the door open to put in a customized one!
Yea, I used Blackbox a little bit when I first tried Linux ten years ago. It held its own pretty well against the alternatives.
 Unfortunately, I don't have the time or patience to do any heavy
 customizing/configuring. If I did, I might not have been driven
 away from Linux when I first tried it ten years ago.
What happened with me is I kinda locked myself into it. Switching away would be an even bigger hassle due to moving files and such, so I stuck with it, slowly excising the worst of the suck. The thing that still pisses me off beyond belief is something I can't fix myself - interoperability, the topic of this thread. (see I'm still on topic!)
Heh :) And then I go mercilessly killing the topic ;)
May 10 2011
prev sibling next sibling parent Robert Clipsham <robert octarineparrot.com> writes:
On 10/05/2011 04:48, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:iqa7bi$1djh$1 digitalmars.com...
 Nick Sabalausky wrote:
 2. Remove "-L--no-warn-search-mismatch"
Note for readers: this is in dmd.conf and is a relatively new thing. My dmd 2.051 and older installs always worked, but with the 2.053 beta I just played with, had to make this change as well as recompile dmd for stupid centos to work with it.
Do we know what that switch is for? Just curious.
I believe it was added alongside 64-bit support, so the linker didn't complain about both 64-bit and 32-bit libraries being available. Or something like that.

--
Robert
http://octarineparrot.com/
May 10 2011
prev sibling parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 10/05/2011 04:48, Nick Sabalausky wrote:
 "Adam D. Ruppe"<destructionator gmail.com>  wrote in message
 news:iqa7bi$1djh$1 digitalmars.com...
 Nick Sabalausky wrote:
 2. Remove "-L--no-warn-search-mismatch"
Note for readers: this is in dmd.conf and is a relatively new thing. My dmd 2.051 and older installs always worked, but with the 2.053 beta I just played with, had to make this change as well as recompile dmd for stupid centos to work with it.
Do we know what that switch is for? Just curious.
   CentOS 4.2 (or maybe it's just KDE) runs so slow in a VM
KDE sucks. The best thing to do is probably to not bother with a gui in the vm at all as well as to not use the virtual machine screen - they are slow as sin.
Actually I just realized it was Gnome. (I don't know I could have mixed those two up...)
 Instead, run sshd on the linux vm, make inbound networking work
 to port 22 (however you do that in virtual box) and then access
 it through PuTTY or something.

 That way, you bypass the slow ass VM graphics entirely.


 (similarly, if you virtualize Windows, Remote Desktop into the
 VM works a lot better than the vm's own graphics in my experience).
XP seems to work fine for me in VirtualBox (And my CPU doesn't even have hardware virtualization support). But I may go ahead and try something like you're suggesting.
I have had trouble with this same thing before with versions of VMware. However, we use VMware virtual server (free) now to run CentOS 4 to power a MediaWiki site, and that does work without any trouble. I have found VirtualBox is generally very good with all this, though. Otherwise, yes, I would try to ssh in instead.
May 10 2011
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iq9ujn$111t$1 digitalmars.com...
 If anyone's curious, I did get a basic D cgi app to work, too (ie, 
 Compiling on CentOS 4.2 in a VM and uploading to my shared host server), 
 but I had to:

 1. Recompile DMD (Because the precompiled DMD would immediately quit with 
 a "Floating point exception" message, even if called with zero args).

 2. Remove "-L--no-warn-search-mismatch" (Because otherwise, when it tried 
 to link, the GCC in CentOS 4.2 would error out and complain that wasn't a 
 valid switch.)

 As a little bonus, the C test app I compiled in the CentOS 4.2 VM also ran 
 fine on my physical Kubuntu 10.04 box. Although the D one segfaulted. No 
 big deal deal though, it's easy enough to compile on that box.

 The only problem I'm having now (aside from the fact that I haven't 
 attempted to deal with the other shared host server yet - the debian one 
 from the horrible ipower company),
Damn, it seems that ipower's CGI support is limited to perl and python (even though they conveniently make no mention of that anywhere except *inside* the logged-in member-only section). Oh well, maybe I'll luck out and be able to convince the client to use a less sucky host on my second attempt :/ But I dunno, he seems to be pretty in love with them.
May 09 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iq9ujn$111t$1 digitalmars.com...
 The only problem I'm having now (aside from the fact that I haven't 
 attempted to deal with the other shared host server yet - the debian one 
 from the horrible ipower company), is that CentOS 4.2 (or maybe it's just 
 KDE) runs so slow in a VM that it frequently doesn't recognize when I let 
 go of a key and so then it goes off doing crazy shit. :/  Or it'll swap my 
 key presses if I type too fast. At one point I had a hell of a time just 
 getting it to let me type in "cd dmd" correctly. (I don't think it's 
 entirely because of my computer though. XP runs just fine in a VM for me, 
 even with only 192MB RAM allocated to it instead of the 512MB given to 
 CentOS 4.2) So I'm going to try putting CentOS 4.9 in a VM and replacing 
 KDE with XFCE. And I'll also have VirtualBox enable 3D accel and see if 
 maybe then the "VirtualBox Guest Additions" package will be able to use 
 OpenGL.
It turns out the problem is rooted in the fact that the 2.6 kernel uses 1,000Hz for...umm...something or other...whereas the 2.4 kernel only used 100Hz. Seems that's caused a lot of big performance problems in VMs. Apparently this was sorted out in one of the CentOS 5.x point releases, but CentOS 4 needs to use a specially-built kernel. Which, of course, I don't have a f'ing clue how to do.

I did find some pre-made "VM-ified CentOS" VMs here: http://people.centos.org/tru/vmware/

I got the "centos-4-20100321/CentOS-4_desktop.i386.zip" one, and it seems to work except that X doesn't run because it complains it can't find any screens (or something like that). Not a clue on how to fix that, but the text-mode commandline + VirtualBox's shared folders should hopefully be enough for me to at least get by.
May 10 2011
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:iqd84f$2bv3$1 digitalmars.com...
 "Nick Sabalausky" <a a.a> wrote in message 
 news:iq9ujn$111t$1 digitalmars.com...
 The only problem I'm having now (aside from the fact that I haven't 
 attempted to deal with the other shared host server yet - the debian one 
 from the horrible ipower company), is that CentOS 4.2 (or maybe it's just 
 KDE) runs so slow in a VM that it frequently doesn't recognize when I let 
 go of a key and so then it goes off doing crazy shit. :/  Or it'll swap 
 my key presses if I type too fast. At one point I had a hell of a time 
 just getting it to let me type in "cd dmd" correctly. (I don't think it's 
 entirely because of my computer though. XP runs just fine in a VM for me, 
 even with only 192MB RAM allocated to it instead of the 512MB given to 
 CentOS 4.2) So I'm going to try putting CentOS 4.9 in a VM and replacing 
 KDE with XFCE. And I'll also have VirtualBox enable 3D accel and see if 
 maybe then the "VirtualBox Guest Additions" package will be able to use 
 OpenGL.
It turns out the problem is rooted in the fact that the 2.6 kernel uses a 1,000Hz timer tick, whereas the 2.4 kernel only used 100Hz. Seems that's caused a lot of big performance problems in VMs. Apparently this was sorted out in one of the CentOS 5.x point releases, but CentOS 4 needs to use a specially-built kernel. Which, of course, I don't have a f'ing clue how to do.
Hmm, it seems what it needs are some "Kernel Parameters", however the hell those are applied: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427
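From skimming that KB article, the "kernel parameters" appear to just be extra options tacked onto the kernel line in /boot/grub/grub.conf, something like this (parameter names vary by distro/kernel, and I'm not at all sure the stock CentOS 4 kernel even understands this one):

kernel /vmlinuz-<version> ro root=<...> divider=10

where divider=10 supposedly knocks the effective tick rate back down toward 100Hz.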
 I did find some pre-made "VM-ified CentOS" VMs here: 
 http://people.centos.org/tru/vmware/  I got the 
 "centos-4-20100321/CentOS-4_desktop.i386.zip" one, and it seems to work 
 except that X doesn't run because it complains it can't find any screens 
 (or something like that). Not a clue on how to fix that, but the text-mode 
 commandline + VirtualBox's shared folder's should hopefully be enough for 
 me to at least get by.
Ah. I installed VirtualBox's Guest Additions, rebooted, and then X was working just fine :) And it's nice and zippy too this time (relatively speaking, of course, but it's *much* better now and actually usable). I do need to re-install the Guest Additions again now, to get the fancy seamless integrated-mouse stuff. But it all seems good now :)
May 10 2011
prev sibling parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 11/05/2011 06:47, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:iq9ujn$111t$1 digitalmars.com...
 The only problem I'm having now (aside from the fact that I haven't
 attempted to deal with the other shared host server yet - the debian one
 from the horrible ipower company), is that CentOS 4.2 (or maybe it's just
 KDE) runs so slow in a VM that it frequently doesn't recognize when I let
 go of a key and so then it goes off doing crazy shit. :/  Or it'll swap my
 key presses if I type too fast. At one point I had a hell of a time just
 getting it to let me type in "cd dmd" correctly. (I don't think it's
 entirely because of my computer though. XP runs just fine in a VM for me,
 even with only 192MB RAM allocated to it instead of the 512MB given to
 CentOS 4.2) So I'm going to try putting CentOS 4.9 in a VM and replacing
 KDE with XFCE. And I'll also have VirtualBox enable 3D accel and see if
 maybe then the "VirtualBox Guest Additions" package will be able to use
 OpenGL.
It turns out the problem is rooted in the fact that the 2.6 kernel uses a 1,000Hz timer tick, whereas the 2.4 kernel only used 100Hz. Seems that's caused a lot of big performance problems in VMs. Apparently this was sorted out in one of the CentOS 5.x point releases, but CentOS 4 needs to use a specially-built kernel. Which, of course, I don't have a f'ing clue how to do. I did find some pre-made "VM-ified CentOS" VMs here: http://people.centos.org/tru/vmware/ I got the "centos-4-20100321/CentOS-4_desktop.i386.zip" one, and it seems to work except that X doesn't run because it complains it can't find any screens (or something like that). Not a clue on how to fix that, but the text-mode commandline + VirtualBox's shared folders should hopefully be enough for me to at least get by.
When you choose what OS to install in VirtualBox it gives you an option of Red Hat, Ubuntu, etc. Try choosing Red Hat (aka CentOS), which *may* fix this problem for you. Or try "Linux 2.4", which is in the list too.
May 11 2011
prev sibling parent reply Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 08/05/2011 12:59, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:iq2g72$ngp$1 digitalmars.com...
 Aggghhhh!!! God damnnit, I officially fucking hate linux now... (not that
 I'm a win, mac or bsd fan, but whatever...)

 I temporarily gave up trying to actually get ahold of an old distro, so I
 tried the other angles (not counting just simply *wishing* it was like win
 and I could just copy the damn binary over to another linux box...nooo,
 that would be too simple for a unix-style system):

 I got my web host to switch me to a server that has 32-bit libs installed
 (a pain in and of itself because I had to coordinate with a client to find
 a convenient downtime, and then I ended up needing to change my domain's
 DNS entries, so now my whole domain's down for a couple days)...And it
 made no difference. So I guess in my particular case it wasn't a
 32-bit/64-bit issue at all (or maybe there still would have been that
 problem too, I dunno).

 So I went to try uClibc:

 I started my Linux box...and it decides to hang mid-startup. So I reboot
 and at least this time the dumb thing finishes booting (I had problems
 with linux randomly breaking for no apparent reason ten years ago with
 Mandrake and Red Hat. I can't believe it's still happening now).

 Anyway, at the uClibc site, I saw the "simple steps" here:
 http://uclibc.org/toolchains.html and thought "Uhh, hell no, not if I
 don't have to" and went to the link for the pre-built version instead. The
 link was broken. Then the page says those are really old versions anyway.
 Great :/

 So I go through the steps: I get to the part where I download buildroot.
 Copy/paste the link over to my linux box...and discover that Synergy+ has
 suddenly decided it no longer feels like offering the "shared clipboard"
 feature that always worked before.

 Ok, so I type the URL into my linux box manually, download buildroot,
 unpack it...so far so good...and follow the instruction to run "make
 menuconfig"...BARF. It fails with some error about ncurses being missing,
 and that I should get ncurses-devel. "sudo apt-get install ncurses-devel":
 Can't find package. "sudo apt-get install ncurses": Can't find package.
 "sudo apt-get install fuck-shit-cock": Can't find package.

 Google "ncurses deb package". Actually found it. Download. Run...You ready
 for this? Here's the message: "Error: A later version is already
 installed." SERIOUSLY?!

 This is the point where I would normally say "fuck this shit", but the
 thought of continuing to use PHP (even if it is via Haxe) is enough to
 keep me bashing my head against this wall. Next stop: See if I can get
 ahold of *some* version of CentOS and see if using that in a VM will
 manage to work. (And rip Kubuntu off my Linux box and see if I can replace
 it with Debian+XFCE. How is it possible that GNOME and KDE were both
 fairly ok ten years ago, at least as far as I can remember, but the latest
 versions of both are complete shit? And then there's that iOS garbage that
 Ubuntu is moving to now (The one main thing I've always disliked about
 Ubuntu is their incomprehensible Apple-envy, which only seems to be
 increasing). And fuck, the latest KDE actually makes the Win7 UI seem good
 (at least the Win7 UI actually *works* and has some semblance of
 consistency, even as obnoxious as it is), and I could have sworn that KDE
 never used to be so completely broken before. Or broken at all, for that
 matter. Which is too bad, because Dolphin actually shows some promise...at
 least when it isn't doing the
 random-horizontal-scrolling-for-no-apparent-reason dance.)
Yay! I've just had some success! I managed to find this: http://vault.centos.org/ Which has all the CentOS ISOs. (You'd think I would have had an easier time finding that URL...) I downloaded 4.2 (picked pretty much at random), installed it in VirtualBox, compiled a trivial test C program with the included GCC, uploaded that to the server, and it worked! :)

Next step: Install DMD on this CentOS VM and try for a D cgi... And then later, I may try 4.7, see if that'll work for me too. And I still have another web host I need to get CGI working on (although that one has some pretty bad support, so I'm a little nervous about that). But it's looking good so far. Fingers crossed...

It'd be nice to not have to use a VM to compile, of course. But as long as I have some way to do my server-side web stuff in D, and *completely* sidestep the entire PHP runtime, then it'll certainly still be well worth it.
It should work, but again it depends on what your target platform is. That matters quite a bit, even on Windows: at the company I am now contracting for, we compile the software agents using Visual Studio 2003 because later versions do not let the agent work with Windows 98. This is not just a Linux phenomenon.

CentOS 4 is fairly new, and it's possible that your hosting providers use older, even unsupported, versions of distributions. CentOS 3 might have been a wiser bet. In any case, CentOS 4.7 is a point release of 4.0 and as such there should be no breaking libc changes.
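By the way, a quick way to see what a binary actually demands of glibc is to dump its undefined symbols and look at the version tags (the binary name here is just an example):

objdump -T ./mycgi | grep GLIBC_

The highest GLIBC_x.y that shows up is roughly the oldest glibc the binary can run against, which is handy for checking whether a build from one box has any chance of running on another.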
May 08 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
news:iq69q1$1ack$1 digitalmars.com...
 It should work, but again it depends on what your target platform is. That
 matters quite a bit, even on Windows. At the company I am now
 contracting for, we compile the software agents using Visual Studio 2003
 because later versions do not let the agent work with Windows 98. This is
 not just a Linux phenomenon.
But at least you can still compile *on* XP+ and get the result to work on 98. And fairly easily (ie: Just use VS 2003). That sort of thing *might* be true on Linux as well, but it seems to be either difficult or really obscure. For instance, no one here seems to know: http://ubuntuforums.org/showthread.php?t=1740277 Plus, there's the fact that Linux has not just different versions, but many different distros, too. And the distros apparently tend to be somewhat divergent in different things. (But again, it's not like I'm saying "Windows kicks Linux's ass" or anything like that.)
 In any case centos 4.7 is a point release of 4.0 and as such there should 
 be no breaking libc changes.
Ah! Thanks, that's good to know. In fact, I was specifically wondering about that, but I wasn't certain and didn't want to assume.
May 08 2011
parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 08/05/2011 20:42, Nick Sabalausky wrote:
 "Spacen Jasset"<spacenjasset yahoo.co.uk>  wrote in message
 news:iq69q1$1ack$1 digitalmars.com...
 It should work, but again it depends on what your target platform is. That
 matters quite a bit, even on Windows. At the company I am now
 contracting for, we compile the software agents using Visual Studio 2003
 because later versions do not let the agent work with Windows 98. This is
 not just a Linux phenomenon.
But at least you can still compile *on* XP+ and get the result to work on 98. And fairly easily (ie: Just use VS 2003). That sort of thing *might* be true on Linux as well, but it seems to be either difficult or really obscure. For instance, no one here seems to know: http://ubuntuforums.org/showthread.php?t=1740277 Plus, there's the fact that Linux has not just different versions, but many different distros, too. And the distros apparently tend to be somewhat divergent in different things. (But again, it's not like I'm saying "Windows kicks Linux's ass" or anything like that.)
 In any case centos 4.7 is a point release of 4.0 and as such there should
 be no breaking libc changes.
Ah! Thanks, that's good to know. In fact, I was specifically wondering about that, but I wasn't certain and didn't want to assume.
Yes, what you say is true. The most important thing usually is the glibc version. In terms of distributions, yes, they all have different libraries installed, which can be a pain. For expat, we do in fact statically link that, and boost, and zlib - but in that case you can ship the shared libraries alongside the binary instead, with a bit of voodoo magic in the form of the RPATH linker setting. Back in the day, to install something on unix systems you would compile it yourself, and in fact sys admins would not install binaries...

If you get stuck again, or need assistance, you can look me up on LinkedIn or Facebook and send me a message. "Jason Spashett"
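P.S. For the record, the RPATH bit is just an extra linker flag; something along these lines (the library and directory names here are made up for illustration):

gcc objects.o -o myapp -L./libs -lfoo -Wl,-rpath,'$ORIGIN/libs'

$ORIGIN makes the runtime linker look for the .so files in a libs directory sitting next to the executable, so you can ship them with it. The single quotes are just to keep the shell from eating the $.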
May 08 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
news:iq69q1$1ack$1 digitalmars.com...
 It should work, but again it depends on what your target platform is. That
 matters quite a bit, even on Windows. At the company I am now
 contracting for, we compile the software agents using Visual Studio 2003
 because later versions do not let the agent work with Windows 98. This is
 not just a Linux phenomenon.

 Centos 4 is fairly new, and it's possible that your hosting providers use 
 older, even unsupported versions of distributions. Centos 3 might have 
 been a wiser bet. In any case centos 4.7 is a point release of 4.0 and as 
 such there should be no breaking libc changes.
I noticed the 4.7+ installers have an option for i586, but there seems to be a lot of conflicting info about whether the non-i586 install is i386 or i686. Any idea? I've heard that CentOS 5 is i686 despite claiming to be i386, but I can't find any concrete info about whether that's true of 4.x as well.
May 09 2011
parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 09/05/2011 22:28, Nick Sabalausky wrote:
 "Spacen Jasset"<spacenjasset yahoo.co.uk>  wrote in message
 news:iq69q1$1ack$1 digitalmars.com...
 It should work, but again it depends on what your target platform is. That
 matters quite a bit, even on Windows. At the company I am now
 contracting for, we compile the software agents using Visual Studio 2003
 because later versions do not let the agent work with Windows 98. This is
 not just a Linux phenomenon.

 Centos 4 is fairly new, and it's possible that your hosting providers use
 older, even unsupported versions of distributions. Centos 3 might have
 been a wiser bet. In any case centos 4.7 is a point release of 4.0 and as
 such there should be no breaking libc changes.
I noticed the 4.7+ installers have an option for i586, but there seems to be a lot of conflicting info about whether the non-i586 install is i386 or i686. Any idea? I've heard that CentOS 5 is i686 despite claiming to be i386, but I can't find any concrete info about whether that's true of 4.x as well.
Well, it shouldn't matter, as long as it doesn't say x86_64, in which case it's 64-bit. I.e. parts of the kernel may use i686 instructions if available, which doesn't matter for you at all, I guess. It's got nothing to do with dmd, or ld, unless you tell it to generate something for a specific processor.
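If in doubt you can always check after the fact, e.g. (the binary name is made up, and the output is roughly what you'd see on a 32-bit install):

$ uname -m
i686
$ file ./mycgi
./mycgi: ELF 32-bit LSB executable, Intel 80386, dynamically linked (uses shared libs), ...

Either way it's a 32-bit x86 binary, which is what matters here.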
May 11 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-04-29 22:02, Nick Sabalausky wrote:
 I'm having a ridiculously hard time trying to find a CentOS 3 installation
 disc image (or any other version before 5.6). This is the closest I've been
 able to find:
Have a look at this: http://vault.centos.org/ -- /Jacob Carlborg
May 08 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-05-08 19:50, Jacob Carlborg wrote:
 On 2011-04-29 22:02, Nick Sabalausky wrote:
 I'm having a ridiculously hard time trying to find a CentOS 3
 installation
 disc image (or any other version before 5.6). This is the closest I've
 been
 able to find:
Have a look at this: http://vault.centos.org/
I see now that you've already found this link, never mind. -- /Jacob Carlborg
May 08 2011
parent "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:iq6llk$20ch$1 digitalmars.com...
 On 2011-05-08 19:50, Jacob Carlborg wrote:
 On 2011-04-29 22:02, Nick Sabalausky wrote:
 I'm having a ridiculously hard time trying to find a CentOS 3
 installation
 disc image (or any other version before 5.6). This is the closest I've
 been
 able to find:
Have a look at this: http://vault.centos.org/
I see now that you've already found this link, never mind.
Missed it by *that* much. ;)
May 08 2011
prev sibling parent Alexander <aldem+dmars nk7.net> writes:
On 26.04.2011 21:10, Steven Schveighoffer wrote:

 It's a bad idea to statically link libc.  This is from the glibc maintainer:
Well, I would say it's a bad idea to statically link glibc, but there are alternatives (the previously mentioned uClibc). Good point from the link: "...more efficient use of physical memory. All processes share the same physical pages for the code in the DSOs." Makes me wonder when druntime/phobos will be available as a DSO? ;) /Alexander
Apr 28 2011
prev sibling next sibling parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 26/04/2011 19:09, Nick Sabalausky wrote:
 On Linux, how do I get DMD to statically link against the necessary system
 libs like libc?

 Someone suggested trying -L-static, but that just gives me this:

 /usr/bin/ld: cannot find -lgcc_s
 collect2: ld returned 1 exit status
 --- errorlevel 1
As I suggested in the other thread, you probably shouldn't ever do it, in the same way you shouldn't (and can't) statically link to kernel32 on Windows. I forget exactly, but you could try this - don't put any extra -L or anything like you appear to have above, just:

-static -static-libgcc

It's possible that you don't have the static libgcc installed on your system, though that seems a bit unlikely.
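If you end up linking by hand with gcc, the command would look very roughly like this (the object file name and the library list are only a guess at what a typical dmd.conf passes - copy the real flags from yours):

gcc cgitest.o -o cgitest -static -static-libgcc -lphobos2 -lpthread -lm

Whether glibc actually behaves once it's fully static is another question entirely, as noted elsewhere in the thread.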
Apr 26 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
Ok, so I guess statically linking against the stuff isn't the way to go, and 
apparently DLL hell is worse on linux. Sooo...What do I do?

In the other thread, Spacen said: "The way to do this is to link against the 
oldest libc you need to
support, thus making the binaries forward compatible"

I know my way around Linux as a user, but with deeper system stuff like that 
I'm pretty much lost. I don't have a clue how to do what Spacen suggests or 
how to determine what version of libc I need. Can anyone help me out with 
that?
Apr 26 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 26 Apr 2011 16:28:22 -0400, Nick Sabalausky <a a.a> wrote:

 Ok, so I guess statically linking against the stuff isn't the way to go,  
 and
 apparently DLL hell is worse on linux. Sooo...What do I do?

 In the other thread, Spacen said: "The way to do this is to link against  
 the
 oldest libc you need to
 support, thus making the binaries forward compatible"

 I know my way around Linux as a user, but with deeper system stuff like  
 that
 I'm pretty much lost. I don't have a clue how to do what Spacen suggests  
 or
 how to determine what version of libc I need. Can anyone help me out with
 that?
It's been a while since I had to deal with old libs, but usually a linux distro will provide 'compatibility' versions of the standard libraries. Using your package manager, look for packages with 'compat' in the name. -Steve
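P.S. Something along these lines, depending on the package manager (the exact package names vary by distro):

yum search compat
apt-cache search compat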
Apr 26 2011
prev sibling parent reply Kai Meyer <kai unixlords.com> writes:
On 04/26/2011 02:28 PM, Nick Sabalausky wrote:
 Ok, so I guess statically linking against the stuff isn't the way to go, and
 apparently DLL hell is worse on linux. Sooo...What do I do?

 In the other thread, Spacen said: "The way to do this is to link against the
 oldest libc you need to
 support, thus making the binaries forward compatible"

 I know my way around Linux as a user, but with deeper system stuff like that
 I'm pretty much lost. I don't have a clue how to do what Spacen suggests or
 how to determine what version of libc I need. Can anyone help me out with
 that?
Can you back up, and help me understand what the first problem was? The one you thought was solvable by statically linking against glibc? -Kai Meyer
Apr 27 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Kai Meyer" <kai unixlords.com> wrote in message 
news:ip9bro$1lak$1 digitalmars.com...
 On 04/26/2011 02:28 PM, Nick Sabalausky wrote:
 Ok, so I guess statically linking against the stuff isn't the way to go, 
 and
 apparently DLL hell is worse on linux. Sooo...What do I do?

 In the other thread, Spacen said: "The way to do this is to link against 
 the
 oldest libc you need to
 support, thus making the binaries forward compatible"

 I know my way around Linux as a user, but with deeper system stuff like 
 that
 I'm pretty much lost. I don't have a clue how to do what Spacen suggests 
 or
 how to determine what version of libc I need. Can anyone help me out with
 that?
Can you back up, and help me understand what the first problem was? The one you thought was solvable by statically linking against glibc?
It was the thread "D CGI test: linux.so.2: bad ELF interpreter: No such file or directory". Reposted here:

-------------------------

I've made a little test CGI app:

import std.conv;
import std.stdio;

void main()
{
    auto content = "<b><i>Hello world</i></b>";
    auto headers = `HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: `~to!string(content.length);

    while(readln().length > 1) {}

    writeln(headers);
    writeln();
    writeln(content);
}

Works on Windows command line and through IIS. And it works on my Kubuntu 10.6 (CORRECTION: It's v10.04) command line. But if I copy the executable from my Kubuntu box to my web host's Debian server (CORRECTION: It's Red Hat, but there is another server I'd like to also run on that is Debian): Running it through Apache gives me a 500, and running it directly with ssh gives me:

linux.so.2: bad ELF interpreter: No such file or directory

I assume that error message is the cause of the 500 (can't tell for sure because the 500 isn't even showing up in my Apache error logs). But I'm not enough of a linux expert to have the slightest clue what that error message is all about. I don't need to actually compile it *on* the server do I? I would have thought that all (or at least most) Linux distros used the same executable format - especially (K)Ubuntu and Debian.
Apr 27 2011
next sibling parent reply Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 27/04/2011 18:51, Nick Sabalausky wrote:
 "Kai Meyer"<kai unixlords.com>  wrote in message
 news:ip9bro$1lak$1 digitalmars.com...
 On 04/26/2011 02:28 PM, Nick Sabalausky wrote:
 Ok, so I guess statically linking against the stuff isn't the way to go,
 and
 apparently DLL hell is worse on linux. Sooo...What do I do?

 In the other thread, Spacen said: "The way to do this is to link against
 the
 oldest libc you need to
 support, thus making the binaries forward compatible"

 I know my way around Linux as a user, but with deeper system stuff like
 that
 I'm pretty much lost. I don't have a clue how to do what Spacen suggests
 or
 how to determine what version of libc I need. Can anyone help me out with
 that?
Can you back up, and help me understand what the first problem was? The one you thought was solvable by statically linking against glibc?
 It was the thread "D CGI test: linux.so.2: bad ELF interpreter: No such file or directory". Reposted here:

 -------------------------

 I've made a little test CGI app:

 import std.conv;
 import std.stdio;

 void main()
 {
     auto content = "<b><i>Hello world</i></b>";
     auto headers = `HTTP/1.1 200 OK
 Content-Type: text/html; charset=UTF-8
 Content-Length: `~to!string(content.length);

     while(readln().length > 1) {}

     writeln(headers);
     writeln();
     writeln(content);
 }

 Works on Windows command line and through IIS. And it works on my Kubuntu 10.6 (CORRECTION: It's v10.04) command line. But if I copy the executable from my Kubuntu box to my web host's Debian server (CORRECTION: It's Red Hat, but there is another server I'd like to also run on that is Debian): Running it through Apache gives me a 500, and running it directly with ssh gives me:

 linux.so.2: bad ELF interpreter: No such file or directory

 I assume that error message is the cause of the 500 (can't tell for sure because the 500 isn't even showing up in my Apache error logs). But I'm not enough of a linux expert to have the slightest clue what that error message is all about. I don't need to actually compile it *on* the server do I? I would have thought that all (or at least most) Linux distros used the same executable format - especially (K)Ubuntu and Debian.
Yes, well, hmm. I've done this type of thing before, that is, wanting to make something run on newer systems. And lo and behold, our makefile used static linking with libc, so I can say authoritatively that in certain circumstances it does not work. (It is only going to work without doubt if you run it on the exact same system.)

I've just been to court today (small claims) and it's been a hard day, so before I rant on, can you tell us what version you want to run said binary on (distro and version) and what version you are compiling on?

As an example, to solve this problem I have compiled on redhat 2ES and all binaries now work on redhat 2-3-4, ubuntu 10.10 and so on, i.e. those that are later in generation than redhat 2. And it all works fine. This is what I suggest, if it is possible, i.e. compile on redhat 2, or perhaps 3.

Perhaps if you post your build system version and flavour and target system I'll be able to give you a better answer. Try lsb_release for this, if you aren't sure (and it's available as a command):

jason ionrift:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.2 LTS
Release:        10.04
Codename:       lucid

otherwise cat /etc/*release*
Apr 27 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
news:ip9n5d$27je$1 digitalmars.com...
 On 27/04/2011 18:51, Nick Sabalausky wrote:
 "Kai Meyer"<kai unixlords.com>  wrote in message
 news:ip9bro$1lak$1 digitalmars.com...
 Can you back up, and help me understand what the first problem was? The 
 one
 you thought was solvable by statically linking against glibc?
 It was the thread "D CGI test: linux.so.2: bad ELF interpreter: No such file or directory". Reposted here:

 -------------------------

 I've made a little test CGI app:

 import std.conv;
 import std.stdio;

 void main()
 {
     auto content = "<b><i>Hello world</i></b>";
     auto headers = `HTTP/1.1 200 OK
 Content-Type: text/html; charset=UTF-8
 Content-Length: `~to!string(content.length);

     while(readln().length > 1) {}

     writeln(headers);
     writeln();
     writeln(content);
 }

 Works on Windows command line and through IIS. And it works on my Kubuntu 10.6 (CORRECTION: It's v10.04) command line. But if I copy the executable from my Kubuntu box to my web host's Debian server (CORRECTION: It's Red Hat, but there is another server I'd like to also run on that is Debian): Running it through Apache gives me a 500, and running it directly with ssh gives me:

 linux.so.2: bad ELF interpreter: No such file or directory

 I assume that error message is the cause of the 500 (can't tell for sure because the 500 isn't even showing up in my Apache error logs). But I'm not enough of a linux expert to have the slightest clue what that error message is all about. I don't need to actually compile it *on* the server do I? I would have thought that all (or at least most) Linux distros used the same executable format - especially (K)Ubuntu and Debian.
Yes, well, hmm. I've done this type of thing before, that is, wanting to make something run on newer systems. And lo and behold, our makefile used static linking with libc, so I can say authoritatively that in certain circumstances it does not work. (It is only going to work without doubt if you run it on the exact same system.) I've just been to court today (small claims) and it's been a hard day, so before I rant on, can you tell us what version you want to run said binary on (distro and version) and what version you are compiling on?
I'm compiling on Kubuntu v10.04 (32-bit). There are two servers I want to run on, although info on them seems to be difficult to get:

1. Main server: I googled for ways to find the distro and version, and most didn't work (I think my SSH access is sandboxed.) But I was able to get this:

$ cat /proc/version
Linux version 2.6.18-164.15.1.el5.028stab068.9 (root rhel5-build-x64) (gcc
2010

So I guess it's Red Hat 4.1.2?

2. Another server: This one is some shitty host ( ipower.com ) that my client insists on using. There's no SSH access and they're extremely tight-lipped about server details. I couldn't even get them to confirm whether or not it was x86 - and that was after a half hour of trying to get those jokers to comprehend what "x86" and "CPU architecture" even *meant*. All I know is that their control panel reports the system as being "debian", and that despite all of that they still *claim* to support CGI.
 As an example, to solve this problem I have compiled on redhat 2ES and all 
 binaries now work on redhat 2-3-4 ubuntu 10.10 and so on, i.e. those that 
 are later in generation than redhat 2. And it all works fine.

 This is what I suggest, if it is possible. i.e. compile on redhat 2, or 
 perhaps 3.

 Perhaps if you post your build system version and flavour and target 
 system I'll be able to give you a better answer.

 try lsb_release for this, if you aren't sure (and it's available as a 
 command)

 jason ionrift:~$ lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:    Ubuntu 10.04.2 LTS
 Release:        10.04
 Codename:       lucid

 otherwise cat /etc/*release*
On my system, the one I'm compiling on, I get:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:        10.04
Codename:       lucid

On the main server I just get:

$ lsb_release -a
-jailshell: lsb_release: command not found
Apr 27 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ip9va1$2lbe$1 digitalmars.com...
 "Spacen Jasset" <spacenjasset yahoo.co.uk> wrote in message 
 news:ip9n5d$27je$1 digitalmars.com...
 try lsb_release for this, if you aren't sure (and it's available as a 
 command)

 jason ionrift:~$ lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:    Ubuntu 10.04.2 LTS
 Release:        10.04
 Codename:       lucid

 otherwise cat /etc/*release*
 On my system, the one I'm compiling on, I get:

 $ lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:    Ubuntu 10.04.1 LTS
 Release:        10.04
 Codename:       lucid

 On the main server I just get:

 $ lsb_release -a
 -jailshell: lsb_release: command not found
On the main server, cat /etc/*release* doesn't work either:

$ cat /etc/*release*
cat: cat /etc/*release*: No such file or directory
Apr 27 2011
parent Spacen Jasset <spacenjasset yahoo.co.uk> writes:
On 27/04/2011 21:56, Nick Sabalausky wrote:
 "Nick Sabalausky"<a a.a>  wrote in message
 news:ip9va1$2lbe$1 digitalmars.com...
 "Spacen Jasset"<spacenjasset yahoo.co.uk>  wrote in message
 news:ip9n5d$27je$1 digitalmars.com...
 try lsb_release for this, if you aren't sure (and it's available as a
 command)

 jason ionrift:~$ lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:    Ubuntu 10.04.2 LTS
 Release:        10.04
 Codename:       lucid

 otherwise cat /etc/*release*
 On my system, the one I'm compiling on, I get:

 $ lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:    Ubuntu 10.04.1 LTS
 Release:        10.04
 Codename:       lucid

 On the main server I just get:

 $ lsb_release -a
 -jailshell: lsb_release: command not found
 On the main server, cat /etc/*release* doesn't work either:

 $ cat /etc/*release*
 cat: cat /etc/*release*: No such file or directory
I see. It looks like you are trying to run at least on debian 4, yes. What I would suggest you do is get the oldest debian or centos distribution you can, use a virtual box, and build on that, e.g. centos 3. It is possible to get "compat libraries" for some distributions, but that may just be more hassle.

You *can*, by the way, statically link any libraries (if you need to) except libc.so, as other libraries don't call the kernel directly. Something like this:

gcc objects.o -Wl,-Bstatic -lc++ -lfoo -lfish -Wl,-Bdynamic

The way to then check if the binary will run on an older (or newer) system is ldd <executable> (or library). It will then tell you what it will bind to, or if it cannot find any particular library.
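Typical ldd output looks something like this (addresses and exact paths will differ):

$ ldd ./mycgi
        libpthread.so.0 => /lib/libpthread.so.0 (0x...)
        libm.so.6 => /lib/libm.so.6 (0x...)
        libc.so.6 => /lib/libc.so.6 (0x...)
        /lib/ld-linux.so.2 (0x...)

Anything the target system can't satisfy shows up as "not found" instead of a path.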
Apr 28 2011
prev sibling parent reply Mike Wey <mike-wey example.com> writes:
On 04/27/2011 10:40 PM, Nick Sabalausky wrote:
 1. Main server: I googled for ways to find the distro and version, and most
 didn't work (I think my SSH access is sandboxed.) But I was able to get
 this:

 $ cat /proc/version
 Linux version 2.6.18-164.15.1.el5.028stab068.9 (root rhel5-build-x64) (gcc

 2010

 So I guess it's Red Hat 4.1.2?
It looks like the 4.1.2 is the gcc version, and you are using Red Hat 5 (rhel5). Also it looks like the server is 64-bit, and you're compiling on a 32-bit machine, so does the server support multilib? -- Mike Wey
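If that's what's going on, it would also explain the "bad ELF interpreter" error: a 32-bit executable asks for the 32-bit loader, and a 64-bit system without 32-bit support doesn't have it. You can see what the binary asks for, and whether the server actually has it, with something like this (the binary name is just an example):

$ readelf -l ./mycgi | grep interpreter
      [Requesting program interpreter: /lib/ld-linux.so.2]
$ ls -l /lib/ld-linux.so.2

If that ls fails on the server, that's the missing piece.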
Apr 28 2011
parent "Nick Sabalausky" <a a.a> writes:
"Mike Wey" <mike-wey example.com> wrote in message 
news:ipca9t$tml$1 digitalmars.com...
 On 04/27/2011 10:40 PM, Nick Sabalausky wrote:
 1. Main server: I googled for ways to find the distro and version, and 
 most
 didn't work (I think my SSH access is sandboxed.) But I was able to get
 this:

 $ cat /proc/version
 Linux version 2.6.18-164.15.1.el5.028stab068.9 (root rhel5-build-x64) 
 (gcc

 2010

 So I guess it's Red Hat 4.1.2?
It looks like the 4.1.2 is the gcc version, and you are using Red Hat 5 (rhel5). Also it looks like the server is 64-bit, and you're compiling on a 32-bit machine, so does the server support multilib?
I'm sure I'll end up finding out... ;)
Apr 28 2011
prev sibling parent Kai Meyer <kai unixlords.com> writes:
On 04/27/2011 11:51 AM, Nick Sabalausky wrote:
 "Kai Meyer"<kai unixlords.com>  wrote in message
 news:ip9bro$1lak$1 digitalmars.com...
 On 04/26/2011 02:28 PM, Nick Sabalausky wrote:
 Ok, so I guess statically linking against the stuff isn't the way to go,
 and
 apparently DLL hell is worse on linux. Sooo...What do I do?

 In the other thread, Spacen said: "The way to do this is to link against
 the
 oldest libc you need to
 support, thus making the binaries forward compatible"

 I know my way around Linux as a user, but with deeper system stuff like
 that
 I'm pretty much lost. I don't have a clue how to do what Spacen suggests
 or
 how to determine what version of libc I need. Can anyone help me out with
 that?
Can you back up, and help me understand what the first problem was? The one you thought was solvable by statically linking against glibc?
 It was the thread "D CGI test: linux.so.2: bad ELF interpreter: No such file or directory". Reposted here:

 -------------------------

 I've made a little test CGI app:

 import std.conv;
 import std.stdio;

 void main()
 {
     auto content = "<b><i>Hello world</i></b>";
     auto headers = `HTTP/1.1 200 OK
 Content-Type: text/html; charset=UTF-8
 Content-Length: `~to!string(content.length);

     while(readln().length > 1) {}

     writeln(headers);
     writeln();
     writeln(content);
 }

 Works on Windows command line and through IIS. And it works on my Kubuntu 10.6 (CORRECTION: It's v10.04) command line. But if I copy the executable from my Kubuntu box to my web host's Debian server (CORRECTION: It's Red Hat, but there is another server I'd like to also run on that is Debian): Running it through Apache gives me a 500, and running it directly with ssh gives me:

 linux.so.2: bad ELF interpreter: No such file or directory

 I assume that error message is the cause of the 500 (can't tell for sure because the 500 isn't even showing up in my Apache error logs). But I'm not enough of a linux expert to have the slightest clue what that error message is all about. I don't need to actually compile it *on* the server do I? I would have thought that all (or at least most) Linux distros used the same executable format - especially (K)Ubuntu and Debian.
Ya, glibc tries very hard to be forward compatible, but it is quite often not very backwards compatible. Meaning, if you build on an older system, it should run on newer systems (forward compatible). But if you build on newer systems, it's not always going to run on older systems (backwards compatible).

The best solution is to either use LSB (which is a big hoary mess to get into, and isn't fully supported everywhere, and I don't think there's anything you can do to D to make it LSB compatible), build on the oldest distro, or build on each distro. Ideally, for performance, you should build a binary for each distro. You should build one for Ubuntu separately from Debian, but if you don't have the time, building on Debian will likely run on Ubuntu, but not vice versa. Statically linking or dynamic linking isn't the answer; it's forward and backwards compatibility.

Personally, I build rpms for both RHEL/CentOS and Fedora on a semi-regular basis, and I usually build one to distribute on RHEL/CentOS separate from a build to distribute on Fedora.
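A cheap sanity check is to compare the glibc on the build box against the target, e.g. run:

ldd --version

on both machines - the first line of output includes the glibc version. If the build machine's glibc is newer than the target's, expect "GLIBC_x.y not found" style errors when you try to run the binary there.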
Apr 27 2011