
digitalmars.D - Re: Conspiracy Theory #1

Sean Kelly <sean invisibleduck.org> writes:
retard Wrote:

 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:
 
 
 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.

Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (i.e. true "cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Using a language like C remains a huge win in these situations.

Even in this magical world of massively parallel computing there will be a place for systems languages. After all, that's how interaction with hardware works, consistent performance for time-critical code is achieved, etc. I think the real trend to consider is that projects are rarely written in just one language these days, and ease of integration between pieces is of paramount importance. C/C++ still pretty much stinks in this respect.
Nov 19 2009
dsimcha <dsimcha yahoo.com> writes:
== Quote from Sean Kelly (sean invisibleduck.org)'s article
 retard Wrote:
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.


"cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons. (That and false pointers.) It's not unique to server space. The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.
Nov 19 2009
Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's not unique to server space.  The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.

Generally, GC only works well if the size of your allocations is << the size of the memory. Are you working with gigabyte-sized allocations, or just lots of smaller ones?
Nov 19 2009
dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's not unique to server space.  The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.

 Generally, GC only works well if the size of your allocations is << the size of the memory.  Are you working with gigabyte-sized allocations, or just lots of smaller ones?

Little from column A, little from column B.
Nov 20 2009
Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's not unique to server space.  The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.

 Generally, GC only works well if the size of your allocations is << the size of the memory.  Are you working with gigabyte-sized allocations, or just lots of smaller ones?

 Little from column A, little from column B.

The giant allocations might be better done with malloc.
Nov 20 2009
dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's not unique to server space.  The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.

 Generally, GC only works well if the size of your allocations is << the size of the memory.  Are you working with gigabyte-sized allocations, or just lots of smaller ones?

 Little from column A, little from column B.

 The giant allocations might be better done with malloc.

Yes, this is one of those "hacks" that I use to avoid GC.
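(As an illustration only, and just my assumption about the kind of hack meant here rather than dsimcha's actual code: a large buffer can be kept off the GC heap entirely by allocating it with the C allocator, using today's druntime module names. It then neither lengthens collection scans nor suffers from false pointers, at the cost of having to be freed manually.)

import core.stdc.stdlib : malloc, free;
import core.exception : onOutOfMemoryError;

// Allocate an n-element double buffer that the GC never sees or scans.
double[] allocateOutsideGC(size_t n)
{
    auto p = cast(double*) malloc(n * double.sizeof);
    if (p is null)
        onOutOfMemoryError();
    return p[0 .. n];
}

// The caller is responsible for releasing the buffer exactly once.
void releaseOutsideGC(double[] buf)
{
    free(buf.ptr);
}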
Nov 20 2009
Travis Boucher <boucher.travis gmail.com> writes:
Sean Kelly wrote:
 retard Wrote:
 
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.

Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (ie. true "cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Using a language like C remains a huge win in these situations.

This I agree with to a certain degree, but it really only applies to colocated systems. In shared hosting situations, users are often too stupid to understand the effects of crap code, and shared hosting providers tend to overcommit machines. Then come the virtualization providers, Amazon EC2 being a perfect example. As long as income is greater than costs, EC2 users rarely get their code running as well as it could, even though they'd see the most direct cost savings from doing so.

With today's web languages, the cost to make something efficient and fast (and to maintain it, debug it, etc.) is higher than the cost of running slow, crappy code. This is amplified by the loss of money in an emerging market, where coming out even a month after your competitors could mean your death.

Languages like D (and even Java and Erlang to some degree) had the opportunity to change this trend 10-15 years ago, when scalable clusters were not a common thing. However, with the direction the web has gone in the past 5-10 years, toward more 'web applications', the opportunity might come again. We just need to 'derail' all of those Ruby kids and get some killer web application framework for D.

Personally, I hate the Interwebs, and I don't care if it collapses under its own bloated weight, as long as I still have some way of accessing source code.
 Even in this magical world of massively parallel computing there will be a place for systems languages.  After all, that's how interaction with hardware works, consistent performance for time-critical code is achieved, etc.  I think the real trend to consider is that projects are rarely written in just one language these days, and ease of integration between pieces is of paramount importance.  C/C++ still pretty much stinks in this respect.

Yes, the days of multi-CPU, multi-core, multi-thread hardware are here. I recently got a chance to do some work on a 32-hardware-thread Sun machine. Very interesting design concepts. This is where languages like Erlang have an advantage, and D is heading in the right direction (but is still quite far off). D at least has the ability to adapt to these new architectures, whereas C/C++ will soon be dealing with contention hell (they already do in some respects).

The idea of a single machine with 100+ processing contexts (hardware threads) is not something in the distant future. I know some of the Sun machines (T5240) can already do 128 hardware threads in a single machine. Add in certain types of high-bandwidth transfer (RDMA InfiniBand, for example) and the concepts behind things like Mosix and Erlang, and we'll have single processes with multiple threads running on multiple hardware threads, cores, CPUs and even machines.
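(A hedged sketch of the task-parallel style under discussion. std.parallelism only landed in Phobos a couple of years after this thread, so this shows where D ended up rather than what was available at the time.)

import std.parallelism : parallel;
import std.math : sqrt;

void main()
{
    auto data = new double[10_000_000];

    // The loop is split into chunks and each chunk runs as a task on a
    // worker thread, which maps naturally onto machines with many
    // hardware threads.
    foreach (i, ref x; parallel(data))
        x = sqrt(cast(double) i);
}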
Nov 19 2009
retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 22:27:34 -0700, Travis Boucher wrote:

 Sean Kelly wrote:
 Performance per watt is a huge issue for server farms, and until all
 this talk of low power, short pipeline, massively parallel computing is
 realized (ie. true "cloud computing"), systems languages will have a
 very definite place in this arena.  I know of large-scale Java projects
 that go to extreme lengths to avoid garbage collection cycles because
 they take upwards of 30 seconds to complete, even on top-of-the-line
 hardware.  Using a language like C remains a huge win in these
 situations.

 This I agree with to a certain degree, but it really only applies to colocated systems.  In shared hosting situations, users are often too stupid to understand the effects of crap code, and shared hosting providers tend to overcommit machines.  Then come the virtualization providers, Amazon EC2 being a perfect example.  As long as income is greater than costs, EC2 users rarely get their code running as well as it could, even though they'd see the most direct cost savings from doing so.  With today's web languages, the cost to make something efficient and fast (and to maintain it, debug it, etc.) is higher than the cost of running slow, crappy code.  This is amplified by the loss of money in an emerging market, where coming out even a month after your competitors could mean your death.

If you're not developing web applications for a global audience, performance rarely matters. And it's a bit hard to compete with huge companies like Google or Amazon anyway, so there's no point in trying. The target audience size here is usually something between 1 and 100,000, and most companies are smaller startups. In larger companies you typically write proprietary intranet enterprise apps for commercial users (usually with fewer than 10,000 clients). Analyzing gene expression data and the like is a really small niche market; usually the application's users are domain experts within the same company, so the number of concurrent users is low.

Most web programming deals with simple pages with CRUD functionality, suboptimal database access, and lots of hype. The site structure is becoming so standardized that soon you won't even need real programming languages to build one.
 Yes, the days of multi-CPU, multi-core, multi-thread hardware are here.  I recently got a chance to do some work on a 32-hardware-thread Sun machine.  Very interesting design concepts.

What makes programming these machines rather simple at the moment is that they're mostly good at task parallelism. Very fine-grained parallel algorithms aren't, in general, that useful in commercial use.
Nov 20 2009
Michael Farnsworth <mike.farnsworth gmail.com> writes:
On 11/19/2009 08:52 PM, Sean Kelly wrote:
 retard Wrote:

 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.

Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (ie. true "cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Using a language like C remains a huge win in these situations. Even in this magical world of massively parallel computing there will be a place for systems languages. After all, that's how interaction with hardware works, consistent performance for time-critical code is achieved, etc. I think the real trend to consider is that projects are rarely written in just one language these days, and ease of integration between pieces is of paramount importance. C/C++ still pretty much stinks in this respect.

Aye. I work at a movie VFX firm (did anybody go see New Moon? I wouldn't expect it on this list, but the wolves were done with a fur system that I and one other developer wrote recently, so keep your eye out for them), and I worked at a game development company before this. These are big industries, and our software requirements parallel the kind also placed on scientific, simulation, and military software development. Speed with reasonable memory usage (say, 4 GB per task!) is the name of the game, 100%, and we regularly have to sacrifice coding speed and good UI to reach it (although we'd prefer not to... D would be really helpful in that regard).

Our studio uses Python and C++: the Python to glue the pipeline together, the C++ to do the heavy lifting. The Python is the "execute once and exit" sort of code, and the C++ is where we prefer to spend our cycles as much as possible on our render farm.

I love it when I hear "people don't care about performance anymore," because in my experience that couldn't be further from the truth. It sorta reminds me of the "Apple is dying" argument that crops up every so often. There will probably always be a market for Apple, and there will always be a market for performance.

Mmm... performance...

-Mike
Nov 19 2009
Travis Boucher <boucher.travis gmail.com> writes:
Michael Farnsworth wrote:
 
 I love it when I hear "people don't care about performance anymore," 
 because in my experience that couldn't be further from the truth.  It 
 sorta reminds me of the "Apple is dying" argument that crops up every so 
 often.  There will probably always be a market for Apple, and there will 
 always be a market for performance.
 
 Mmm....performance...
 
 -Mike

It's not that people don't care about performance; companies care more about rapid development and short time to market. They work like insurance companies: if the cost of development (i.e. coder man-hours) is less than (runtime cost) * (code lifetime), then fewer coder man-hours wins. It's like the cliche that hardware is cheaper than coders. Also, slow, sloppy, broken code means revisions and updates, which in some cases are another avenue of revenue.

Now in the case of movie development, the cost of coding an efficient rendering system is cheaper than a large rendering farm and/or the money lost if the movie is released at the wrong time.

Focusing purely on performance is niche, as is focusing purely on the syntax of a language. What matters to the success of a language is how money can be made off of it. Do you think PHP would have been so successful if it wasn't such an easy language that was relatively fast (compared to old CGI scripts), released at a time when the web was really starting to take off?

Right now, from my perspective at least, D has the performance and the syntax; it's just the deployment that is sloppy. GDC has a fairly old DMD front end, and the official DMD may or may not work as expected (I'm talking about the compiler/runtime/standard library integration on this point). Competition between compilers, runtimes, and libraries is something I think is very much needed (the one part of capitalism I actually agree with), but I think it is definitely something that is blocking D from wider acceptance.
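(A worked example with purely hypothetical numbers, just to make that heuristic concrete: if sloppy code costs an extra $500 a month in servers and the product will live for 24 months, its runtime cost is $500 * 24 = $12,000; by this logic, spending more than $12,000 of developer time on optimization is a loss, and anything less is a win.)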
Nov 19 2009
Walter Bright <newshound1 digitalmars.com> writes:
Travis Boucher wrote:
 Focusing purely on performance is niche, as is focusing purely on syntax 
 of a language.  What matters to the success of a language is how money 
 can be made off of it.

You're right.
Nov 19 2009
retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 22:16:07 -0800, Michael Farnsworth wrote:

 I love it when I hear "people don't care about performance anymore,"
 because in my experience that couldn't be further from the truth.  It
 sorta reminds me of the "Apple is dying" argument that crops up every so
 often.  There will probably always be a market for Apple, and there will
 always be a market for performance.

I agree 100% that extreme performance is needed for solving the problems in your domain. It's just that those kinds of companies simply don't exist in all countries. I would have to move abroad, and rather far away, to get my hands on that kind of system. I'd love to write demanding software, but on the other hand I like living here, and since even simple web applications pay well, why bother?

Here most local companies fill the niche by providing agile, localized solutions to clients' problems (usually localized sites built on Drupal, Joomla, etc., with a couple of integrated in-house proprietary components of less than 5000 LOC). Even the clients want systems whose development requires as little computer science knowledge as possible, to keep the cost low. We usually sell them not only the work done on the site (the source remains closed when sold this way) but also the hosting services (bought from some 3rd party located in the US). Megacorporations like Google could easily take over the market, but instead they focus on globally available services.
Nov 20 2009