
digitalmars.D - Conspiracy Theory #1

reply Martin Hanson <mhanson btinternet.com> writes:
I noticed the "Metaprogramming in D : Some Real-world Examples" meeting was
held at Microsoft headquarters. With Google Go now the flavour of the month,
will there be big support from Microsoft for D to counteract the Go onslaught?

I think we should be told...
Nov 19 2009
next sibling parent hasenj <hasan.aljudy gmail.com> writes:
Martin Hanson wrote:
 I noticed the "Metaprogramming in D : Some Real-world Examples" Meeting was
held at Microsoft Headquarters. With Google Go now the flavour of the month
will there be big support from Microsoft for D to counteract the Go onslaught...
 
 I think we should be told...

Microsoft have their own C#, which they think they can make a kernel in. D is more of a threat to C# than to Go, IMO. Go is still new and lacks a lot.
Nov 19 2009
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Thu, Nov 19, 2009 at 5:35 AM, Martin Hanson <mhanson btinternet.com> wrote:
 I noticed the "Metaprogramming in D : Some Real-world Examples" Meeting was
held at Microsoft Headquarters. With Google Go now the flavour of the month
will there be big support from Microsoft for D to counteract the Go onslaught...

 I think we should be told...

No conspiracy as far as I know. The NWCPP meetings are just held on the MS campus because someone who worked at Microsoft and was a part of NWCPP got Microsoft to agree to provide the meeting space. I volunteered to talk about D because I enjoyed using it at my last job. It's more or less just a coincidence that I now work at MS and live near the guy who created D. The talk had nothing to do with my day job, unfortunately for me.

MS is still crazy in love with C# and all things .NET. I think systems programming languages in general are considered to be too niche to be worth the investment. It takes a several-hundred-million-dollar market to even start to get MS interested. And there's still a strong preference for technologies which can boost Windows sales (i.e. that will only work on Windows). So an open-source, platform-agnostic systems programming language has very little chance of getting the interest of the business heads at MS. Such a language probably *is* interesting to a lot of the tech people working in the trenches, but they're still a niche audience. Just look at the increasing web and .NET emphasis with each new release of Visual Studio. That's where they see the money.

It seems to me that MS expects C++ to go the way of FORTRAN and COBOL: still there, still used, but by an increasingly small number of people for a small (but important!) subset of things. Note how MS still hasn't produced a C99 compiler. They just don't see it as relevant to enough people to be financially worthwhile.

Disclaimer -- these are all just my own personal opinions.

--bb
Nov 19 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Note how MS still hasn't produced a C99 compiler.
 They just don't see it as relevant to enough people to be financially
 worthwhile.

Not even the C people care about C99. I rarely get any interest in it with the Digital Mars C compiler.
Nov 19 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:
 
 
 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and fancy visual looks are the key features these days.

This is a valid comment, but if I were to speculate I'd say this is more of a blip than a consistent trend. We're running into a multi-layered wall of processor frequency issues, thermal issues, and power issues, all of which force us to reconsider splurging on computing power. Today that reality is already very visible from certain spots. I've recently switched fields from machine learning/NLP research to web/industry. Although the fields are apparently very different, they have a lot in common, along with the simple adage that obsession with performance is a survival skill that (according to all trend extrapolations I could gather) is projected to become more, not less, important.

Andrei
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Andrei Alexandrescu wrote:
 
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.
 
 
 Andrei

Except in the web world, performance is network and parallelism (cloud computing): much less code efficiency, much more programmer productivity (which are currently mutually exclusive, but don't have to be).
Nov 19 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Travis Boucher wrote:
 Andrei Alexandrescu wrote:
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.


 Andrei

Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)

You'd be extremely surprised. With Akamai delivery and enough CPUs, it really boils down to sheer code optimization. Studies have shown that artificially inserted delays on the order of tens/hundreds of milliseconds influence user behavior on the site dramatically. Andrei
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Andrei Alexandrescu wrote:
 Travis Boucher wrote:
 Andrei Alexandrescu wrote:
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.


 Andrei

Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)

You'd be extremely surprised. With Akamai delivery and enough CPUs, it really boils down to sheer code optimization. Studies have shown that artificially inserted delays on the order of tens/hundreds of milliseconds influence user behavior on the site dramatically. Andrei

This is one thing that doesn't surprise me. Even at some large sites, when given a choice between a fast language with slower development (C/C++) and a slow language with fast development (Ruby, Perl, Python, PHP), the choice is almost always fast development. Sure, there are a few people who work on making the lower-level stuff faster (mostly network load optimization), but the majority of the optimization is making the code run on a cluster of machines.

A site falls into one of two categories: it needs scalability or it doesn't. Those who need scalability design frameworks that scale. Need more speed? Add more machines. Those who don't need scalability don't care what they write in or how slow their crap is (you wouldn't believe how often I've seen horrid SQL queries that cause full table scans).

The fast, highly optimized web code is a very niche market.
Nov 19 2009
parent reply Gzp <galap freemail.hu> writes:
 
 Those who don't need scalability, don't care what they write in or how 
 slow their crap is (you don't know how often I've seen horrid SQL 
 queries that cause full table scans).
 
 The fast, highly optimized web code is a very niche market.

Why do people always forget another branch of programs? We are living on the net, but we still have programs for image processing, compression, processing 3D data, volumetric data, just to mention some. They are not always running on a cloud (grid) system. Just think of your IP-TV as an example. It doesn't have much processor power in it (though such devices usually have some kind of hardware acceleration). And believe me, it's not a pleasure to write such code with a mixture of templates, C++, and CUDA, especially on a PC with multiple cores, though OpenMP is quite easy to use.

So I do hope D will outperform these languages and can/will combine all the good features of the mentioned mixture: built-in parallel programming, templates for COMPILE-time evaluation, access to low-level libraries (CL), (or, for my best hopes, "native" support for GPU-accelerated code, e.g. CUDA integration). So D is really needed, to have a new, MODERN language for scientific programmers as well. (Don't even dare to mention FORTRAN or matlab :) )

Gzp
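[Editor's aside: the compile-time evaluation mentioned above is already concrete in D. A minimal sketch, assuming a current dmd; the `factorial` function is a made-up example, not something from this thread:]

```d
// An ordinary function; nothing template-specific about it.
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Assigning to an enum (a manifest constant) forces CTFE:
// the compiler runs factorial(10) and bakes the result into the binary.
enum fact10 = factorial(10);

void main()
{
    // Checked at compile time, before the program ever runs.
    static assert(fact10 == 3_628_800);
}
```

[Any context that requires a compile-time value (enum, static assert, template argument) triggers CTFE on the same function that works at runtime.]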
Nov 19 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Gzp wrote:
 So D is really needed to have a new, MODERN language for scientific 
 programmers as well. (Don't even dare to mention FORTRAN or matlab :) )

I always thought it was a missed opportunity how, time and again, the C and C++ community ignored the needs of numerics programmers, or only grudgingly provided support. Heck, just look at the abandonment of 80 bit reals!
Nov 19 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Travis,

 Andrei Alexandrescu wrote:
 
 Today that reality is very visible already from certain spots. I've
 recently switched fields from machine learning/nlp research to
 web/industry. Although the fields are apparently very different, they
 have a lot in common, along with the simple adage that obsession with
 performance is a survival skill that (according to all trend
 extrapolations I could gather) is projected to become more, not less,
 important.
 
 Andrei
 

Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)

Even if you have network parallelism, CPU load still costs money. Many server farms are not space limited but power limited. They can't get enough power out of the power company to run more servers. (And take a guess at what their power bills cost!)
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 Even if you have network parallelism, CPU loads still costs money. Many 
 server farms are not space limited but power limited. They can't get 
 enough power out of the power company to run more servers. (And take a 
 guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, with the waste heat used to heat local businesses. They could turn a cost (cooling) into a revenue source (charging local businesses for heat).

There's a peculiar old brick building in downtown Seattle that was called the "steam plant". I always wondered what a "steam plant" did, so I asked one of the tourist guides downtown. He said that the steam plant had a bunch of boilers which would generate steam, which was then piped around to local businesses to heat their buildings, as opposed to the later practice of each building getting its own boiler. So, the idea has precedent.
Nov 20 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU loads still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/ Andrei
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU loads still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/

Makes perfect sense.
Nov 20 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU loads still costs money. 
 Many server farms are not space limited but power limited. They 
 can't get enough power out of the power company to run more servers. 
 (And take a guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/

Makes perfect sense.

As much as statically disallowing escaping references to locals :o). Andrei
Nov 20 2009
prev sibling parent Justin Johansson <no spam.com> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU loads still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/

Makes perfect sense.

Oh, is that why the country is melting away?
Nov 20 2009
prev sibling next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Nov 20, 2009 at 2:05 PM, BCS <none anon.com> wrote:
 Hello Travis,

 Andrei Alexandrescu wrote:

 Today that reality is very visible already from certain spots. I've
 recently switched fields from machine learning/nlp research to
 web/industry. Although the fields are apparently very different, they
 have a lot in common, along with the simple adage that obsession with
 performance is a survival skill that (according to all trend
 extrapolations I could gather) is projected to become more, not less,
 important.

 Andrei

Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)

Even if you have network parallelism, CPU loads still costs money. Many server farms are not space limited but power limited. They can't get enough power out of the power company to run more servers. (And take a guess at what there power bills cost!)

The rise of cloud computing does make an interesting case for fast code. When you've got your own server that's under-utilized anyway, you may be OK with CPU-hungry code. But to the cloud provider, every watt of consumption costs, and every cycle used for one client is a cycle that can't be used for another. So you're going to pass those costs on at some point.

Probably for a while most cloud customers will be happy about the savings they get from not having to maintain their own servers. But eventually they'll be looking for further savings, and see that code that runs 50% faster gives them a direct savings of 50%. If they can get that just by switching to another language, which is almost as easy to use as what they already use, you'd think they would be interested.

--bb
Nov 20 2009
prev sibling next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Nov 20, 2009 at 5:50 PM, Justin Johansson <no spam.com> wrote:
 Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU loads still costs money.
 Many server farms are not space limited but power limited. They can't
 get enough power out of the power company to run more servers. (And
 take a guess at what there power bills cost!)

I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/

Makes perfect sense.

Oh, is that why the country is melting away?

I think you need to bump the subject up to Conspiracy Theory #2 now. --bb
Nov 20 2009
prev sibling parent retard <re tard.com.invalid> writes:
Fri, 20 Nov 2009 14:37:37 -0800, Bill Baxter wrote:

 Probably for a while most cloud customers will be happy about the
 savings they get from not having to maintain their own servers. But
 eventually they'll be looking for further savings, and see that code
 that runs 50% faster gives them a direct savings of 50%.  If they can
 get that just by switching to another language, which is almost as easy
 to use as what they already use, you'd think they would be interested.

That's very likely, but it's not happening anytime soon for cloud users with less traffic and simpler web applications.
Nov 21 2009
prev sibling parent reply Daniel de Kok <me nowhere.nospam> writes:
On 2009-11-19 22:10:57 +0100, retard <re tard.com.invalid> said:
 Even the open source community is using more and more dynamic languages
 such as Python on the desktop and Web 2.0 (mostly javascript, flash,
 silverlight, php, python) is a strongly growing platform. I expect most
 of the every day apps to move to the cloud during the next 10 years.

There are many possible scenarios when it comes to cloud computing. E.g. on the immensely popular iPhone, every application is a mix of Objective-C/C++, compiled to machine code. While many iPhone applications are relatively dumb and usually communicate with webservers, this shows that native applications are preferred by a segment of the market over applications that live in the browser. And server-side, there's also a lot of static-language development going on. Often dynamic languages don't scale, and you'll see dynamic languages with performance-intensive parts written in C or C++, or static languages such as Java.

-- Daniel
Nov 21 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
retard wrote:
 
 Of course the major issues limiting Web 2.0 adoption are unreliable, high 
 latency, expensive communication channels. Another is that the 
 technologies have not matured on non-x86/windows platforms. I bought a 
 new cell phone recently and can't really play any videos with it even 
 though it definitely has enough cpu power to play even 576p mpeg streams.

Sure, high latency and bandwidth costs are a major limiting factor, but platform isn't. On the browser-based client side, the non-Windows platforms are just as mature as the Windows side (although non-x86 tends to lag somewhat). Server side, non-Windows has always been more mature than Windows. Unix has always been known for 'server operations', and for good reason: it's designed for it and doesn't impose the artificial limitations that Windows likes to.

On the embedded side of things, a lot of media-based embedded devices have hardware assistance for things like video decoding, but it is definitely a market in which I think languages like D could thrive. Unfortunately D isn't targeting that market (which I think is a mistake). dmd doesn't have any capable back ends, and gdc's dmd front end is lagging and almost seems to be unmaintained. I haven't used ldc much, so I don't have any real comments on that, but I suspect it is similar to the gdc situation. Hopefully after the D2 spec is frozen, gdc and ldc will catch up.

I have looked at the possibility of using D for NDS development (although it'd only be homebrew crap). That is one of GCC's biggest strengths, its versatility. It runs everywhere and targets almost everything.
 Btw, you can write iPhone apps in .NET languages. Just use Unity.
 
 And server-side, there's also a lot of static language development going
 on. Often dynamic languages don't scale, and you'll see dynamic
 languages with performance-intensive parts written in C or C++, or
 static languages such as Java.

Sure. It's just that not everyone uses them.

Server-side scalability has almost nothing to do with the language in use. The server-side processing time of anything in any language is dwarfed by the communications latency. Scalability and speed are 2 very different things. Server-side scalability is all about being able to handle concurrency (typically across a bunch of machines).

Dynamic languages, especially with web-based stuff, are so simple to model in a task-based model: one page (request) == one task == one process. Even the state-sharing mechanisms are highly scalable with some of the new database and caching technologies. Since most state in a web-based application is transient, and reliability isn't really required, caching systems like memcached are often enough to handle the requirements of most server-side applications.

Typically large-scale sites have at least a few good developers who can make things perform as needed. The people who really take the hit of poor code in inefficient languages are the hosting providers. They have to deal with tons of clients running tons of different poorly written scripts. I've worked in a data center with 2,000 servers that was pushing only a few hundred megabits (web hosting). I have also worked on a cluster of 12 machines that pushed over 5 gigabits. The difference in priority is very obvious in these 2 environments.

The future of D to me is very uncertain. I see some very bright possibilities in the embedded area and the web cluster area (these are my 2 areas, so I can't speak on the scientific applications). However, the limited targets for the official DMD, and the adoption lag in gdc (and possibly ldc), are issues that need to be addressed before I can see the language getting some of the real attention that it deserves. (Of course with real attention come stupid people, MS FUD, and bad code.)
Nov 21 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:
 
 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.

Agreed; basically you would need to go the gdc/gcc route, since e.g. the ARM/MIPS backends on LLVM aren't as mature, and clearly Digital Mars only targets x86.

I hope sometime after the D2 specs are finalized, and dmd2 stabilizes, Walter decides to make the dmd backend Boost- or MIT-licensed (or similar). Then we can all call the Digital Mars compiler 'the reference implementation', and standardize on GCC/LLVM.

For most applications/libraries, forking means death. But look at the cases of bind (DNS), sendmail (SMTP), and even Apache (and its NCSA roots). These implementations of their respective protocols are still the 'standard' and 'reference' implementations, they still have a huge installed base, and they still see active development. However, their alternatives in many cases offer better support, features and/or speed (not to mention security, especially in the cases of bind and sendmail).

Of course, I am not even touching on the Windows end of things; the weird marketing and politics involved in Windows software I can't comment on, as it is too confusing for me (freeware, shareware, crippleware, EULAs).
Nov 21 2009
parent reply Don <nospam nospam.com> writes:
Travis Boucher wrote:
 retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.

Agreed, basically you would need to go the gdc/gcc route since e.g. arm/ mips backends on llvm aren't as mature and clearly digitalmars only targets x86.

I hope sometime after the D2 specs are finalized, and dmd2 stablizes, Walter decides to make the dmd backend Boost or MIT licensed (or similar).

AFAIK, he can't. He doesn't own exclusive rights to it. The statement that it's not guaranteed to work after Y2K is a Symantec requirement, it definitely doesn't come from Walter!
  Then we can all call the Digital Mars compiler 'the reference 
 implementation', and standardize on GCC/LLVM.
 
 For most applications/libraries, forking means death.  But look at the 
 cases of bind (DNS), sendmail (smtp), and even Apache (and it's NCSA 
 roots).  These implementations of their respective protocols are still 
 the 'standard' and 'reference' implementations, they still have a huge 
 installation, and are still see active development.
 
 However, their alternatives in many cases offer better support, features 
 and/or speed (not to mention security, especially in the case of bind 
 and sendmail).
 
 Of course, I am not even touching on the windows end of things, the 
 weird marketing and politics involved in windows software I can't 
 comment on as it is too confusing for me.  (freeware, shareware, 
 crippleware, EULAs).

Nov 22 2009
parent Travis Boucher <boucher.travis gmail.com> writes:
Don wrote:
 Travis Boucher wrote:
 retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can 
 see
 the language getting some of the real attention that it deserves.

Agreed, basically you would need to go the gdc/gcc route since e.g. arm/ mips backends on llvm aren't as mature and clearly digitalmars only targets x86.

I hope sometime after the D2 specs are finalized, and dmd2 stablizes, Walter decides to make the dmd backend Boost or MIT licensed (or similar).

AFAIK, he can't. He doesn't own exclusive rights to it. The statement that it's not guaranteed to work after Y2K is a Symantec requirement, it definitely doesn't come from Walter!

Sadly that's even more reason to focus on non-Digital Mars compilers. Personally I like the Digital Mars compiler, it's relatively simple (compared to the gcc code mess), but the legacy Symantec stuff could be a bit of a bottleneck.
Nov 22 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.

Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and fancy visual looks are the key features these days.
Nov 19 2009
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thu, Nov 19, 2009 at 09:10:57PM +0000, retard wrote:
 I expect most 
 of the every day apps to move to the cloud during the next 10 years. 
 Unfortunately c++ and d missed the train here.

D can do this. My D Windowing System project, while not originally conceived for it, is already fairly capable of making "cloud" applications, and when it is finished, it will knock the socks off Web 2.0 - possibly while being somewhat compatible with it. The idea behind it isn't specific to D, of course, but D is so vastly superior to every other language ever made in almost every possible way that I don't understand why you would ever /want/ to use another language.

-- Adam D. Ruppe http://arsdnet.net
Nov 19 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 16:31:36 -0500, Adam D. Ruppe wrote:

 On Thu, Nov 19, 2009 at 09:10:57PM +0000, retard wrote:
 I expect most
 of the every day apps to move to the cloud during the next 10 years.
 Unfortunately c++ and d missed the train here.

D can do this. My D Windowing System project, while not originally conceived for it, is already fairly capable of making "cloud" applications, and when it is finished, it will knock the socks off Web 2.0 - possibly, while being somewhat compatible with it. This idea behind it isn't specific to D, of course, but D is so vastly superior to every other language ever made in almost every possible way that I don't understand why you would ever /want/ to use another language.

LOL. What if I want platform-independent client-side code (very useful in a Web 2.0 context), sandboxing, dynamic code loading, dynamic typing, functional programming, proof checker support, a stable language compiler / runtime, and elegant-looking code instead of some CTFE crap that badly emulates real AST macros? Unfortunately D isn't the best language out there for this particular domain, IMHO.
Nov 19 2009
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thu, Nov 19, 2009 at 09:52:52PM +0000, retard wrote:
 LOL.

Have you ever even seen any real-world Web 2.0 code? It doesn't depend on ANY of the things you listed. Most of it is written in ugly JavaScript and spaghetti PHP, for crying out loud!
 What if I want platform independent client side code (very useful in 
 web 2.0 context)

Irrelevant; you shouldn't need to run client-side code at all. JavaScript only forces you to because it is built on a document retrieval protocol and markup language rather than a custom-designed system. Besides, how hard can it be to beat JavaScript for platform independence, when it isn't even consistent across browser versions from the same vendor?
 sandboxing

The actual D program doesn't run on the client machine, so this is moot.
 dynamic code loading

DLLs? You could run a compiler on some source and import that too - this is an implementation question, not a language one.
 dynamic typing

import std.variant;
 functional programming

D can do it.
 proof checker support

dmd is FAR better than anything I've ever seen for JavaScript and PHP.
 a stable language  compiler / runtime,

D1, or probably D2 in a matter of months.
 elegant looking code instead of some ctfe crap that 
 badly emulates real ast macros? 

Have you ever actually written D? Most code looks nothing like this.
 Unfortunately D isn't the best language 
 out there for this particular domain IMHO.

You're wrong, and the market will prove it in a few years. -- Adam D. Ruppe http://arsdnet.net
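[Editor's aside: the "import std.variant;" retort above can be unpacked into a few lines. A minimal sketch, assuming Phobos's std.variant as shipped with dmd, of opt-in dynamic typing inside statically typed D:]

```d
import std.variant;

void main()
{
    Variant v = 42;                 // holds an int for now
    assert(v.type == typeid(int));  // runtime type information is queryable

    v = "hello";                    // rebound to a string at runtime
    assert(v.get!string == "hello"); // get!T throws if the held type mismatches
}
```

[One Variant can hold values of different types over its lifetime, which covers much of what the "dynamic typing" objection asks for, at the cost of explicit opt-in.]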
Nov 19 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Sat, 21 Nov 2009 09:48:03 +0100, Daniel de Kok wrote:

 On 2009-11-19 22:10:57 +0100, retard <re tard.com.invalid> said:
 Even the open source community is using more and more dynamic languages
 such as Python on the desktop and Web 2.0 (mostly javascript, flash,
 silverlight, php, python) is a strongly growing platform. I expect most
 of the every day apps to move to the cloud during the next 10 years.

There are many possible scenarios when it comes to cloud computing. E.g. on the immensely popular iPhone, every application is a mix of Objective C/C++, compiled to machine code. While many iPhone applications are relatively dumb and usually communicate with webservers, this shows that native applications are preferred by segment of the market over applications that live in the browser.

Of course the major issues limiting Web 2.0 adoption are unreliable, high-latency, expensive communication channels. Another is that the technologies have not matured on non-x86/Windows platforms. I bought a new cell phone recently and can't really play any videos on it even though it definitely has enough CPU power to play even 576p MPEG streams. Btw, you can write iPhone apps in .NET languages. Just use Unity.
 And server-side, there's also a lot of static language development going
 on. Often dynamic languages don't scale, and you'll see dynamic
 languages with performance-intensive parts written in C or C++, or
 static languages such as Java.

Sure. It's just that not everyone uses them.
Nov 21 2009
prev sibling parent retard <re tard.com.invalid> writes:
Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.

Agreed; basically you would need to go the gdc/gcc route, since e.g. the ARM/MIPS backends on LLVM aren't as mature, and clearly Digital Mars only targets x86.
Nov 21 2009