
digitalmars.D - Conspiracy Theory #1

reply Martin Hanson <mhanson btinternet.com> writes:
I noticed the "Metaprogramming in D : Some Real-world Examples" meeting was
held at Microsoft headquarters. With Google Go now the flavour of the month,
will there be big support from Microsoft for D to counteract the Go onslaught?

I think we should be told...
Nov 19 2009
next sibling parent hasenj <hasan.aljudy gmail.com> writes:
Martin Hanson wrote:
 I noticed the "Metaprogramming in D : Some Real-world Examples" Meeting was
held at Microsoft Headquarters. With Google Go now the flavour of the month
will there be big support from Microsoft for D to counteract the Go onslaught...
 
 I think we should be told...
Microsoft have their own C#, which they think they can make a kernel in. D is more of a threat to C# than to Go, IMO. Go is still new and lacks a lot.
Nov 19 2009
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Thu, Nov 19, 2009 at 5:35 AM, Martin Hanson <mhanson btinternet.com> wrote:
 I noticed the "Metaprogramming in D : Some Real-world Examples" Meeting was
held at Microsoft Headquarters. With Google Go now the flavour of the month
will there be big support from Microsoft for D to counteract the Go onslaught...

 I think we should be told...
No conspiracy as far as I know. The NWCPP meetings are just held on the MS campus because someone who worked at Microsoft and was a part of NWCPP got Microsoft to agree to provide the meeting space. I volunteered to talk about D because I enjoyed using it at my last job. It's more or less just a coincidence that I now work at MS and live near the guy who created D. The talk had nothing to do with my day job, unfortunately for me.

MS is still crazy in love with C# and all things .NET. I think systems programming languages in general are considered to be too niche to be worth the investment. It takes a several-hundred-million-dollar market to even start to get MS interested. And there's still a strong preference for technologies that can boost Windows sales (i.e. that will only work on Windows). So an open-source, platform-agnostic systems programming language has very little chance of getting the interest of the business heads at MS. Such a language probably *is* interesting to a lot of the tech people working in the trenches, but they're still a niche audience. Just look at the increasing web and .NET emphasis with each new release of Visual Studio. That's where they see the money to be.

It seems to me that MS expects C++ to go the way of FORTRAN and COBOL. Still there, still used, but by an increasingly small number of people for a small (but important!) subset of things. Note how MS still hasn't produced a C99 compiler. They just don't see it as relevant to enough people to be financially worthwhile.

Disclaimer -- these are all just my own personal opinions.

--bb
Nov 19 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Note how MS still hasn't produced a C99 compiler.
 They just don't see it as relevant to enough people to be financially
 worthwhile.
Not even the C people care about C99. I rarely get any interest in it with the Digital Mars C compiler.
Nov 19 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
Nov 19 2009
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thu, Nov 19, 2009 at 09:10:57PM +0000, retard wrote:
 I expect most 
 of the every day apps to move to the cloud during the next 10 years. 
 Unfortunately c++ and d missed the train here.
D can do this. My D Windowing System project, while not originally conceived for it, is already fairly capable of making "cloud" applications, and when it is finished, it will knock the socks off Web 2.0 - possibly, while being somewhat compatible with it. The idea behind it isn't specific to D, of course, but D is so vastly superior to every other language ever made in almost every possible way that I don't understand why you would ever /want/ to use another language. -- Adam D. Ruppe http://arsdnet.net
Nov 19 2009
parent reply retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 16:31:36 -0500, Adam D. Ruppe wrote:

 On Thu, Nov 19, 2009 at 09:10:57PM +0000, retard wrote:
 I expect most
 of the every day apps to move to the cloud during the next 10 years.
 Unfortunately c++ and d missed the train here.
D can do this. My D Windowing System project, while not originally conceived for it, is already fairly capable of making "cloud" applications, and when it is finished, it will knock the socks off Web 2.0 - possibly, while being somewhat compatible with it. This idea behind it isn't specific to D, of course, but D is so vastly superior to every other language ever made in almost every possible way that I don't understand why you would ever /want/ to use another language.
LOL. What if I want platform independent client side code (very useful in web 2.0 context), sandboxing, dynamic code loading, dynamic typing, functional programming, proof checker support, a stable language compiler / runtime, elegant looking code instead of some ctfe crap that badly emulates real ast macros? Unfortunately D isn't the best language out there for this particular domain IMHO.
Nov 19 2009
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thu, Nov 19, 2009 at 09:52:52PM +0000, retard wrote:
 LOL.
Have you ever even seen any real-world Web 2.0 code? It doesn't depend on ANY of the things you listed. Most of it is written in ugly Javascript and spaghetti PHP, for crying out loud!
 What if I want platform independent client side code (very useful in 
 web 2.0 context)
Irrelevant; you shouldn't need to run client side code at all. Javascript only forces you to because it is built on a document retrieval protocol and markup language rather than a custom designed system. Besides, how hard can it be to beat Javascript for platform independence, which isn't even consistent across browser versions from the same vendor?
 sandboxing
The actual D program doesn't run on the client machine, so this is moot.
 dynamic code loading
DLLs? You could run a compiler on some source and import that too - this is an implementation question, not a language one.
 dynamic typing
import std.variant;
 functional programming
D can do it.
 proof checker support
dmd is FAR better than anything I've ever seen for Javascript and PHP.
 a stable language  compiler / runtime,
D1, or probably D2 in a matter of months.
 elegant looking code instead of some ctfe crap that 
 badly emulates real ast macros? 
Have you ever actually written D? Most code looks nothing like this.
 Unfortunately D isn't the best language 
 out there for this particular domain IMHO.
You're wrong, and the market will prove it in a few years. -- Adam D. Ruppe http://arsdnet.net
Nov 19 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:
 
 
 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most of the everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
This is a valid comment, but if I were to speculate I'd say this is more of a blip than a consistent trend. We're running into a multi-layered wall of processor frequency issues, thermal issues, and power issues, that force us to reconsider splurging computing power. Today that reality is very visible already from certain spots. I've recently switched fields from machine learning/nlp research to web/industry. Although the fields are apparently very different, they have a lot in common, along with the simple adage that obsession with performance is a survival skill that (according to all trend extrapolations I could gather) is projected to become more, not less, important. Andrei
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Andrei Alexandrescu wrote:
 
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.
 
 
 Andrei
Except in the web world, performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently are mutually exclusive, but don't have to be).
Nov 19 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Travis Boucher wrote:
 Andrei Alexandrescu wrote:
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.


 Andrei
Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)
You'd be extremely surprised. With Akamai delivery and enough CPUs, it really boils down to sheer code optimization. Studies have shown that artificially inserted delays on the order of tens/hundreds of milliseconds influence user behavior on the site dramatically. Andrei
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Andrei Alexandrescu wrote:
 Travis Boucher wrote:
 Andrei Alexandrescu wrote:
 Today that reality is very visible already from certain spots. I've 
 recently switched fields from machine learning/nlp research to 
 web/industry. Although the fields are apparently very different, they 
 have a lot in common, along with the simple adage that obsession with 
 performance is a survival skill that (according to all trend 
 extrapolations I could gather) is projected to become more, not less, 
 important.


 Andrei
Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)
You'd be extremely surprised. With Akamai delivery and enough CPUs, it really boils down to sheer code optimization. Studies have shown that artificially inserted delays on the order of tens/hundreds of milliseconds influence user behavior on the site dramatically. Andrei
This is one thing that doesn't surprise me. Even at some large sites, when given a choice between a fast language with slower development (C/C++) and a slow language with fast development (Ruby, Perl, Python, PHP), the choice is almost always the fast development. Sure, there are a few people who work on making the lower-level stuff faster (mostly network load optimization), but the majority of the optimization is making the code run on a cluster of machines.

A site falls into one of two categories: it needs scalability or it doesn't. Those who need scalability design frameworks that scale. Need more speed? Add more machines. Those who don't need scalability, don't care what they write in or how slow their crap is (you don't know how often I've seen horrid SQL queries that cause full table scans).

The fast, highly optimized web code is a very niche market.
Nov 19 2009
parent reply Gzp <galap freemail.hu> writes:
 
 Those who don't need scalability, don't care what they write in or how 
 slow their crap is (you don't know how often I've seen horrid SQL 
 queries that cause full table scans).
 
 The fast, highly optimized web code is a very niche market.
Why do people always forget another branch of programs? We are living on the net, but we still have programs for image processing, compression, and processing 3D and volumetric data, just to mention some. They are not always running on a cloud (grid) system. Just think of your IPTV box as an example. It doesn't have too much processor power in it (though such devices usually have some kind of HW acceleration). And believe me, it's not a pleasure to write such code with a mixture of templates, C++, and CUDA. Especially on a PC with multiple cores, though OpenMP is quite easy to use.

So, I do hope D will outperform these languages and can/will combine all the good features of the mentioned mixture: built-in parallel programming, templates for COMPILE-time evaluation, access to low-level libraries (CL), or, for my best hopes, "native" support for GPU-accelerated code, e.g. CUDA integration. So D is really needed to have a new, MODERN language for scientific programmers as well. (Don't even dare to mention FORTRAN or matlab :) )

Gzp
Nov 19 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Gzp wrote:
 So D is really needed to have a new, MODERN language for scientific 
 programmers as well. (Don't even dare to mention FORTRAN or matlab :) )
I always thought it was a missed opportunity how, time and again, the C and C++ community ignored the needs of numerics programmers, or only grudgingly provided support. Heck, just look at the abandonment of 80 bit reals!
Nov 19 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Travis,

 Andrei Alexandrescu wrote:
 
 Today that reality is very visible already from certain spots. I've
 recently switched fields from machine learning/nlp research to
 web/industry. Although the fields are apparently very different, they
 have a lot in common, along with the simple adage that obsession with
 performance is a survival skill that (according to all trend
 extrapolations I could gather) is projected to become more, not less,
 important.
 
 Andrei
 
Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)
Even if you have network parallelism, CPU load still costs money. Many server farms are not space limited but power limited. They can't get enough power out of the power company to run more servers. (And take a guess at what their power bills cost!)
Nov 20 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, Nov 20, 2009 at 2:05 PM, BCS <none anon.com> wrote:
 Hello Travis,

 Andrei Alexandrescu wrote:

 Today that reality is very visible already from certain spots. I've
 recently switched fields from machine learning/nlp research to
 web/industry. Although the fields are apparently very different, they
 have a lot in common, along with the simple adage that obsession with
 performance is a survival skill that (according to all trend
 extrapolations I could gather) is projected to become more, not less,
 important.

 Andrei
Except in the web world performance is network and parallelism (cloud computing). Much less code efficiency, much more programmer productivity (which currently is mutually exclusive, but doesn't have to be)
Even if you have network parallelism, CPU load still costs money. Many server farms are not space limited but power limited. They can't get enough power out of the power company to run more servers. (And take a guess at what their power bills cost!)
The rise of cloud computing does make an interesting case for fast code. When you've got your own server that's under-utilized anyway, you may be OK with CPU-hungry code. But to the cloud provider, every watt of consumption costs, and every cycle used for one client is a cycle that can't be used for another. So you're going to pass those costs on at some point.

Probably for a while most cloud customers will be happy about the savings they get from not having to maintain their own servers. But eventually they'll be looking for further savings, and see that code that runs 50% faster gives them a direct savings of 50%. If they can get that just by switching to another language, which is almost as easy to use as what they already use, you'd think they would be interested.

--bb
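The back-of-the-envelope claim above can be sketched as a toy model. All prices and hours here are invented for illustration, and "runs 50% faster" is read generously as "uses half the CPU time":

```python
# Toy model of the argument above: in a metered cloud, billed CPU-hours
# map directly onto dollars, so cutting CPU time cuts the bill
# proportionally. Every number here is hypothetical.

RATE_PER_CPU_HOUR = 0.10  # invented price per billed CPU-hour

def monthly_bill(cpu_hours):
    """Dollars charged for the compute actually consumed."""
    return cpu_hours * RATE_PER_CPU_HOUR

baseline_hours = 10_000
bill_before = monthly_bill(baseline_hours)

# Reading "code that runs 50% faster" as "uses half the CPU time":
bill_after = monthly_bill(baseline_hours * 0.5)

saving = (bill_before - bill_after) / bill_before
print(f"before ${bill_before:.2f}, after ${bill_after:.2f}, saving {saving:.0%}")
```

The point is only that, unlike on an owned and idle server, wasted cycles in a metered environment show up directly on the invoice.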
Nov 20 2009
parent retard <re tard.com.invalid> writes:
Fri, 20 Nov 2009 14:37:37 -0800, Bill Baxter wrote:

 Probably for a while most cloud customers will be happy about the
 savings they get from not having to maintain their own servers. But
 eventually they'll be looking for further savings, and see that code
 that runs 50% faster gives them a direct savings of 50%.  If they can
 get that just by switching to another language, which is almost as easy
 to use as what they already use, you'd think they would be interested.
That's very likely, but it's not happening anytime soon for cloud users with less traffic and simpler web applications.
Nov 21 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 Even if you have network parallelism, CPU load still costs money. Many 
 server farms are not space limited but power limited. They can't get 
 enough power out of the power company to run more servers. (And take a 
 guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).

There's a peculiar old brick building in downtown Seattle that was called the "steam plant". I always wondered what a "steam plant" did, so I asked one of the tourist guides downtown. He said that the steam plant had a bunch of boilers which would generate steam, which was then piped around to local businesses to heat their buildings, as opposed to the later practice of each building getting its own boiler. So, the idea has precedent.
Nov 20 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU load still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).
http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/ Andrei
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU load still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).
http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/
Makes perfect sense.
Nov 20 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU load still costs money. 
 Many server farms are not space limited but power limited. They 
 can't get enough power out of the power company to run more servers. 
 (And take a guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).
http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/
Makes perfect sense.
As much as statically disallowing escaping references to locals :o). Andrei
Nov 20 2009
prev sibling parent reply Justin Johansson <no spam.com> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU load still costs money. 
 Many server farms are not space limited but power limited. They can't 
 get enough power out of the power company to run more servers. (And 
 take a guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).
http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/
Makes perfect sense.
Oh, is that why the country is melting away?
Nov 20 2009
parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Nov 20, 2009 at 5:50 PM, Justin Johansson <no spam.com> wrote:
 Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 BCS wrote:
 Even if you have network parallelism, CPU load still costs money.
 Many server farms are not space limited but power limited. They can't
 get enough power out of the power company to run more servers. (And
 take a guess at what their power bills cost!)
I've often wondered why the server farms aren't located in Alaska, for free cooling, and the waste heat used to heat local businesses. They can turn a cost (cooling) into a revenue source (charge local businesses for heat).
http://marketplace.publicradio.org/display/web/2009/03/27/am_iceland_data_farm/
Makes perfect sense.
Oh, is that why the country is melting away?
I think you need to bump the subject up to Conspiracy Theory #2 now. --bb
Nov 20 2009
prev sibling parent reply Daniel de Kok <me nowhere.nospam> writes:
On 2009-11-19 22:10:57 +0100, retard <re tard.com.invalid> said:
 Even the open source community is using more and more dynamic languages
 such as Python on the desktop and Web 2.0 (mostly javascript, flash,
 silverlight, php, python) is a strongly growing platform. I expect most
 of the every day apps to move to the cloud during the next 10 years.
There are many possible scenarios when it comes to cloud computing. E.g. on the immensely popular iPhone, every application is a mix of Objective-C/C++, compiled to machine code. While many iPhone applications are relatively dumb and usually communicate with webservers, this shows that native applications are preferred by a segment of the market over applications that live in the browser. And server-side, there's also a lot of static-language development going on. Often dynamic languages don't scale, and you'll see dynamic languages with performance-intensive parts written in C or C++, or static languages such as Java. -- Daniel
Nov 21 2009
parent reply retard <re tard.com.invalid> writes:
Sat, 21 Nov 2009 09:48:03 +0100, Daniel de Kok wrote:

 On 2009-11-19 22:10:57 +0100, retard <re tard.com.invalid> said:
 Even the open source community is using more and more dynamic languages
 such as Python on the desktop and Web 2.0 (mostly javascript, flash,
 silverlight, php, python) is a strongly growing platform. I expect most
 of the every day apps to move to the cloud during the next 10 years.
There are many possible scenarios when it comes to cloud computing. E.g. on the immensely popular iPhone, every application is a mix of Objective C/C++, compiled to machine code. While many iPhone applications are relatively dumb and usually communicate with webservers, this shows that native applications are preferred by segment of the market over applications that live in the browser.
Of course the major issues limiting Web 2.0 adoption are unreliable, high-latency, expensive communication channels. Another is that the technologies have not matured on non-x86/Windows platforms. I bought a new cell phone recently and can't really play any videos on it even though it definitely has enough CPU power to play even 576p MPEG streams. Btw, you can write iPhone apps in .NET languages. Just use Unity.
 And server-side, there's also a lot of static language development going
 on. Often dynamic languages don't scale, and you'll see dynamic
 languages with performance-intensive parts written in C or C++, or
 static languages such as Java.
Sure. It's just that not everyone uses them.
Nov 21 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
retard wrote:
 
 Of course the major issues limiting Web 2.0 adoption are unreliable, high 
 latency, expensive communication channels. Another is that the 
 technologies have not matured on non-x86/windows platforms. I bought a 
 new cell phone recently and can't really play any videos with it even 
 though it definitely has enough cpu power to play even 576p mpeg streams.
Sure, high latency and bandwidth costs are a major limiting factor, but platform isn't. On the browser-based client side, the non-Windows platforms are just as mature as the Windows side (although non-x86 tends to lag somewhat). Server side, non-Windows has always been more mature than Windows. Unix has always been known for 'server operations', and for good reason: it's designed for them and doesn't impose the artificial limitations that Windows likes to.

On the embedded side of things, a lot of media-based embedded devices have hardware assistance for things like video decoding, but it is definitely a market in which I think languages like D could thrive. Unfortunately D isn't targeting that market (which I think is a mistake). dmd doesn't have any capable back ends, and gdc's dmd front end is lagging and almost seems to be unmaintained. I haven't used ldc much, so I don't have any real comments on that, but I suspect the situation is similar to gdc's. Hopefully after the D2 spec is frozen, gdc and ldc will catch up.

I have looked at the possibility of using D for NDS development (although it'd only be homebrew crap). That is one of GCC's biggest strengths, its versatility. It runs everywhere and targets almost everything.
 Btw, you can write iPhone apps in .NET languages. Just use Unity.
 
 And server-side, there's also a lot of static language development going
 on. Often dynamic languages don't scale, and you'll see dynamic
 languages with performance-intensive parts written in C or C++, or
 static languages such as Java.
Sure. It's just that not everyone uses them.
Server-side scalability has almost nothing to do with the language in use. The server-side processing time of anything in any language is dwarfed by the communications latency. Scalability and speed are two very different things. Server-side scalability is all about being able to handle concurrency (typically across a bunch of machines).

Dynamic languages, especially with web-based stuff, are so simple to fit into a task-based model. One page (request) == one task == one process. Even the state-sharing mechanisms are highly scalable with some of the new database and caching technologies. Since most state in a web-based application is transient, and reliability isn't really required, caching systems like memcached are often enough to handle the requirements of most server-side applications.

Typically large-scale sites have at least a few good developers who can make things perform as needed. The people who really take the hit of poor code in inefficient languages are the hosting providers. They have to deal with tons of clients running tons of different poorly written scripts. I've worked in a data center with 2,000 servers that was pushing only a few hundred megabits (web hosting). I have also worked on a cluster of 12 machines that pushed over 5 gigabits. The difference in priority is very obvious in these two environments.

The future of D to me is very uncertain. I see some very bright possibilities in the embedded area and the web cluster area (these are my 2 areas, so I can't speak on the scientific applications). However the limited targets for the official DMD, and the adoption lag in gdc (and possibly ldc) are issues that need to be addressed before I can see the language getting some of the real attention that it deserves. (Of course, with real attention comes stupid people, MS FUD, and bad code.)
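The caching pattern described above (transient state, no durability guarantees, a miss just means redoing the work) can be sketched with a minimal in-process stand-in. memcached itself is a networked service; this dict-based toy, and every name and TTL in it, is invented purely for illustration:

```python
import time

class TTLCache:
    """Tiny in-process stand-in for a memcached-style cache: transient
    values that silently expire, with no durability guarantees."""

    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl=60.0):
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: treat it as a miss
            return None
        return value

cache = TTLCache()

def render_profile(user_id):
    # One page (request) == one task: serve from the cache when possible,
    # otherwise do the "expensive" work and cache the transient result.
    page = cache.get(user_id)
    if page is None:
        page = f"<html>profile {user_id}</html>"  # stand-in for real work
        cache.set(user_id, page, ttl=30.0)
    return page
```

Because losing an entry only costs a recomputation, this kind of cache scales horizontally without any of the coordination a durable store would need, which is the point being made above.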
Nov 21 2009
parent reply retard <re tard.com.invalid> writes:
Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.
Agreed, basically you would need to go the gdc/gcc route since e.g. arm/mips backends on llvm aren't as mature and clearly digitalmars only targets x86.
Nov 21 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:
 
 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.
Agreed, basically you would need to go the gdc/gcc route since e.g. arm/mips backends on llvm aren't as mature and clearly digitalmars only targets x86.
I hope sometime after the D2 specs are finalized, and dmd2 stabilizes, Walter decides to make the dmd backend Boost or MIT licensed (or similar). Then we can all call the Digital Mars compiler 'the reference implementation', and standardize on GCC/LLVM.

For most applications/libraries, forking means death. But look at the cases of bind (DNS), sendmail (smtp), and even Apache (and its NCSA roots). These implementations of their respective protocols are still the 'standard' and 'reference' implementations, they still have a huge installation, and still see active development.

However, their alternatives in many cases offer better support, features and/or speed (not to mention security, especially in the case of bind and sendmail).

Of course, I am not even touching on the Windows end of things; the weird marketing and politics involved in Windows software I can't comment on, as it is too confusing for me (freeware, shareware, crippleware, EULAs).
Nov 21 2009
parent reply Don <nospam nospam.com> writes:
Travis Boucher wrote:
 retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.
Agreed, basically you would need to go the gdc/gcc route, since e.g. the ARM/MIPS backends on LLVM aren't as mature and Digital Mars clearly only targets x86.
I hope sometime after the D2 spec is finalized and dmd2 stabilizes, Walter decides to make the dmd backend Boost or MIT licensed (or similar).
AFAIK, he can't. He doesn't own exclusive rights to it. The statement that it's not guaranteed to work after Y2K is a Symantec requirement; it definitely doesn't come from Walter!
  Then we can all call the Digital Mars compiler 'the reference 
 implementation', and standardize on GCC/LLVM.
 
 For most applications/libraries, forking means death.  But look at the 
 cases of BIND (DNS), sendmail (SMTP), and even Apache (and its NCSA 
 roots).  These implementations of their respective protocols are still 
 the 'standard' and 'reference' implementations, they still have a huge 
 installed base, and still see active development.
 
 However, their alternatives in many cases offer better support, features 
 and/or speed (not to mention security, especially in the case of bind 
 and sendmail).
 
 Of course, I am not even touching on the windows end of things, the 
 weird marketing and politics involved in windows software I can't 
 comment on as it is too confusing for me.  (freeware, shareware, 
 crippleware, EULAs).
Nov 22 2009
parent Travis Boucher <boucher.travis gmail.com> writes:
Don wrote:
 Travis Boucher wrote:
 retard wrote:
 Sat, 21 Nov 2009 06:03:46 -0700, Travis Boucher wrote:

 The future of D to me is very uncertain.  I see some very bright
 possibilities in the embedded area and the web cluster area (these are
 my 2 areas, so I can't speak on the scientific applications).  However
 the limited targets for the official DMD, and the adoption lag in gdc
 (and possibly ldc) are issues that need to be addressed before I can see
 the language getting some of the real attention that it deserves.
Agreed, basically you would need to go the gdc/gcc route, since e.g. the ARM/MIPS backends on LLVM aren't as mature and Digital Mars clearly only targets x86.
I hope sometime after the D2 spec is finalized and dmd2 stabilizes, Walter decides to make the dmd backend Boost or MIT licensed (or similar).
AFAIK, he can't. He doesn't own exclusive rights to it. The statement that it's not guaranteed to work after Y2K is a Symantec requirement; it definitely doesn't come from Walter!
Sadly, that's even more reason to focus on non-Digital Mars compilers. Personally I like the Digital Mars compiler; it's relatively simple (compared to the GCC code mess), but the legacy Symantec stuff could be a bit of a bottleneck.
Nov 22 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
retard Wrote:

 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:
 
 
 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (i.e. true "cloud computing"), systems languages will have a very definite place in this arena.  I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware.  Using a language like C remains a huge win in these situations.  Even in this magical world of massively parallel computing there will be a place for systems languages.  After all, that's how interaction with hardware works, consistent performance for time-critical code is achieved, etc.  I think the real trend to consider is that projects are rarely written in just one language these days, and ease of integration between pieces is of paramount importance.  C/C++ still pretty much stinks in this respect.
Nov 19 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Sean Kelly (sean invisibleduck.org)'s article
 retard Wrote:
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
 Performance per watt is a huge issue for server farms, and until all this talk
 of low power, short pipeline, massively parallel computing is realized (i.e. true "cloud computing"), systems languages will have a very definite place in this arena.  I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware.
Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's not unique to server space.  The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.
Nov 19 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene
 expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to
 similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's
 not unique to server space.  The reason I still use D instead of C or C++ is
 because, even if I'm using every hack known to man to avoid GC, it's still got
 insane metaprogramming capabilities, and it's still what std.range and
 std.algorithm are written in.
Generally, GC only works well if the size of your allocations is << the size of the memory. Are you working with gigabyte sized allocations, or just lots of smaller ones?
Nov 19 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene
 expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to
 similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's
 not unique to server space.  The reason I still use D instead of C or C++ is
 because, even if I'm using every hack known to man to avoid GC, it's still got
 insane metaprogramming capabilities, and it's still what std.range and
 std.algorithm are written in.
Generally, GC only works well if the size of your allocations is << the size of the memory. Are you working with gigabyte sized allocations, or just lots of smaller ones?
Little from column A, little from column B.
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene
 expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to
 similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's
 not unique to server space.  The reason I still use D instead of C or C++ is
 because, even if I'm using every hack known to man to avoid GC, it's still got
 insane metaprogramming capabilities, and it's still what std.range and
 std.algorithm are written in.
Generally, GC only works well if the size of your allocations is << the size of the memory. Are you working with gigabyte sized allocations, or just lots of smaller ones?
Little from column A, little from column B.
The giant allocations might be better done with malloc.
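That suggestion can be sketched as follows. This is a minimal illustration, assuming a D2/druntime toolchain; allocHuge is a hypothetical helper name, and error handling is kept simple. Carving the giant buffer out of the C heap means the GC never scans or owns it, so a false pointer can't pin it.

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical helper: a uint buffer backed by C malloc, invisible to the GC.
uint[] allocHuge(size_t n)
{
    auto p = cast(uint*) malloc(n * uint.sizeof);
    assert(p !is null, "out of memory");
    return p[0 .. n];           // slice over raw memory; the GC neither scans nor collects it
}

void main()
{
    auto foo = allocHuge(100_000_000);
    scope (exit) free(foo.ptr); // deterministic release, no collection cycle involved
    foo[0] = 42;
}
```

The trade-off is manual lifetime management for just those few giant blocks, while everything else stays under the GC.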
Nov 20 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Yes, and similarly, when I write code to do some complicated processing of gene
 expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to
 similar lengths to avoid GC for similar reasons.  (That and false pointers.)  It's
 not unique to server space.  The reason I still use D instead of C or C++ is
 because, even if I'm using every hack known to man to avoid GC, it's still got
 insane metaprogramming capabilities, and it's still what std.range and
 std.algorithm are written in.
Generally, GC only works well if the size of your allocations is << the size of the memory. Are you working with gigabyte sized allocations, or just lots of smaller ones?
Little from column A, little from column B.
The giant allocations might be better done with malloc.
Yes, this is one of those "hacks" that I use to avoid GC.
Nov 20 2009
prev sibling next sibling parent reply Travis Boucher <boucher.travis gmail.com> writes:
Sean Kelly wrote:
 retard Wrote:
 
 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (ie. true "cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Using a language like C remains a huge win in these situations.
This I agree with to a certain degree, though it really only applies to colocated systems. In shared hosting situations, users are often too stupid to understand the effects of crap code, and shared hosting providers tend to overcommit machines. Then come the virtualization providers, Amazon EC2 being a perfect example. As long as income is greater than costs, EC2 users rarely get their code running as well as it could, even though they'd see the most direct cost savings from doing so. With today's web languages, the cost to make something efficient and fast (and to maintain and debug it) is higher than the cost of running slow, crappy code. This is amplified by the loss of money in an emerging market, where coming out even a month after your competitors could mean your death. Languages like D (and even Java and Erlang to some degree) had the opportunity to change this trend 10-15 years ago, when scalable clusters were not a common thing. However, with the direction the web has gone in the past 5-10 years, toward more 'web applications', the opportunity might come again. We just need to 'derail' all of those Ruby kids and get some killer web application framework for D. Personally, I hate the Interwebs, and I don't care if it collapses under its own bloated weight, as long as I still have some way of accessing source code.
 Even in this magical world of massively parallel computing there will be a
place for systems languages.  After all, that's how interaction with hardware
works, consistent performance for time-critical code is achieved, etc.  I think
the real trend to consider is that projects are rarely written in just one
language these days, and ease of integration between pieces is of paramount
importance.  C/C++ still pretty much stinks in this respect.
Yes, the days of multi-CPU, multi-core, multi-thread hardware are here. I recently got a chance to do some work on a 32-hardware-thread Sun machine. Very interesting design concepts. This is where languages like Erlang have an advantage, and D is heading in the right direction (but is still quite far off). D at least has the ability to adapt to these new architectures, whereas C/C++ will soon be dealing with contention hell (they already do in some respects). The idea of a single machine with 100+ processing contexts (hardware threads) is not something in the distant future. I know some of the Sun machines (T5240) can already do 128 hardware threads in a single machine. Add in certain types of high-bandwidth transfers (RDMA over InfiniBand, for example), plus the concepts behind things like MOSIX and Erlang, and we'll have single processes with multiple threads running on multiple hardware threads, cores, CPUs, and even machines.
Nov 19 2009
parent retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 22:27:34 -0700, Travis Boucher wrote:

 Sean Kelly wrote:
 Performance per watt is a huge issue for server farms, and until all
 this talk of low power, short pipeline, massively parallel computing is
 realized (ie. true "cloud computing"), systems languages will have a
 very definite place in this arena.  I know of large-scale Java projects
 that go to extreme lengths to avoid garbage collection cycles because
 they take upwards of 30 seconds to complete, even on top-of-the-line
 hardware.  Using a language like C remains a huge win in these
 situations.
This I agree with to a certain degree, though it really only applies to colocated systems. In shared hosting situations, users are often too stupid to understand the effects of crap code, and shared hosting providers tend to overcommit machines. Then come the virtualization providers, Amazon EC2 being a perfect example. As long as income is greater than costs, EC2 users rarely get their code running as well as it could, even though they'd see the most direct cost savings from doing so. With today's web languages, the cost to make something efficient and fast (and to maintain and debug it) is higher than the cost of running slow, crappy code. This is amplified by the loss of money in an emerging market, where coming out even a month after your competitors could mean your death.
If you're not developing web applications for a global audience, performance rarely matters. And it's a bit hard to compete with huge companies like Google or Amazon anyway, so there's no point in trying to do that. The target audience size is usually something between 1 and 100,000 here, and most companies are smaller startups. In larger companies you typically write proprietary intranet enterprise apps for commercial users (usually with fewer than 10,000 clients). Things like analyzing gene expression data are really small niche markets. Usually the application users are experts of that domain within the same company (so the number of concurrent users is low). Most web programming deals with simple pages with CRUD functionality, suboptimal database access, and lots of hype. The site structure is becoming so standardized that soon you won't even need real programming languages to build one.
 Yes, the days of multi-cpu, multi-core, multi-thread hardware are here. I
 recently got a chance to do some work on a 32 hardware thread sun
 machine.  Very interesting design concepts.
What makes programming these machines rather simple at the moment is that they're mostly good at task parallelism. Very fine-grained parallel algorithms aren't that useful in general commercial use.
Nov 20 2009
prev sibling parent reply Michael Farnsworth <mike.farnsworth gmail.com> writes:
On 11/19/2009 08:52 PM, Sean Kelly wrote:
 retard Wrote:

 Thu, 19 Nov 2009 11:47:46 -0800, Bill Baxter wrote:


 It seems to me that MS expects C++ to go the way of FORTRAN and
 COBOL.  Still there, still used, but by an increasingly small number of
 people for a small (but important!) subset of things.  Note how MS still
 hasn't produced a C99 compiler. They just don't see it as relevant to
 enough people to be financially worthwhile.
Even the open source community is using more and more dynamic languages such as Python on the desktop, and Web 2.0 (mostly JavaScript, Flash, Silverlight, PHP, Python) is a strongly growing platform. I expect most everyday apps to move to the cloud during the next 10 years. Unfortunately C++ and D missed the train here. People don't care about performance anymore. Even application development has moved from library writing to high-level descriptions of end-user apps that make use of high-quality FOSS/commercial off-the-shelf components. Cloud computing, real-time interactive communication, and a fancy visual look are the key features these days.
Performance per watt is a huge issue for server farms, and until all this talk of low power, short pipeline, massively parallel computing is realized (ie. true "cloud computing"), systems languages will have a very definite place in this arena. I know of large-scale Java projects that go to extreme lengths to avoid garbage collection cycles because they take upwards of 30 seconds to complete, even on top-of-the-line hardware. Using a language like C remains a huge win in these situations. Even in this magical world of massively parallel computing there will be a place for systems languages. After all, that's how interaction with hardware works, consistent performance for time-critical code is achieved, etc. I think the real trend to consider is that projects are rarely written in just one language these days, and ease of integration between pieces is of paramount importance. C/C++ still pretty much stinks in this respect.
Aye.  I work at a movie VFX firm (anybody go see New Moon? I wouldn't expect it on this list, but the wolves were done with a fur system that I and one other developer wrote recently, so keep your eye out for them), and I worked at a game development company before this. These are big industries, and our software requirements parallel the kind placed on scientific, simulation, and military software development. Speed with reasonable memory usage (say, 4 GB per task!) is the name of the game, 100%, and we regularly have to sacrifice coding speed and good UI to reach it (although we'd prefer not to... D would be really helpful in that regard). Our studio uses Python and C++; the Python glues the pipeline together, and the C++ does the heavy lifting. The Python is the "execute once and exit" sort of code, while the C++ is where we prefer to spend our cycles as much as possible on our render farm. I love it when I hear "people don't care about performance anymore," because in my experience that couldn't be further from the truth. It sorta reminds me of the "Apple is dying" argument that crops up every so often. There will probably always be a market for Apple, and there will always be a market for performance. Mmm....performance... -Mike
Nov 19 2009
next sibling parent reply Travis Boucher <boucher.travis gmail.com> writes:
Michael Farnsworth wrote:
 
 I love it when I hear "people don't care about performance anymore," 
 because in my experience that couldn't be further from the truth.  It 
 sorta reminds me of the "Apple is dying" argument that crops up every so 
 often.  There will probably always be a market for Apple, and there will 
 always be a market for performance.
 
 Mmm....performance...
 
 -Mike
It's not that people don't care about performance; companies care more about rapid development and short time to market. They work like insurance companies: if the cost of development (i.e. coder man-hours) is less than (cost of runtime) * (code lifetime), then fewer coder man-hours wins. It's like the cliche that hardware is cheaper than coders. Also, slow, sloppy, broken code means revisions and updates, which in some cases are another avenue of revenue. Now in the case of movie development, the cost of coding an efficient rendering system is lower than that of a large rendering farm and/or the money lost if the movie is released at the wrong time. Focusing purely on performance is niche, as is focusing purely on the syntax of a language. What matters to the success of a language is how money can be made off of it. Do you think PHP would have been so successful if it wasn't such an easy language which was relatively fast (compared to old CGI scripts), released at a time when the web was really starting to take off? Right now, from my perspective at least, D has the performance and the syntax; it's just the deployment that is sloppy. GDC has a fairly old DMD front end, and the official DMD may or may not work as expected (I'm talking about the compiler/runtime/standard library integration on this point). The battle between compiler/runtime/library is something that I think is very much needed (the one part of capitalism I actually agree with), but I think it is definitely something blocking D from wider acceptance.
Nov 19 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Travis Boucher wrote:
 Focusing purely on performance is niche, as is focusing purely on syntax 
 of a language.  What matters to the success of a language is how money 
 can be made off of it.
You're right.
Nov 19 2009
prev sibling parent retard <re tard.com.invalid> writes:
Thu, 19 Nov 2009 22:16:07 -0800, Michael Farnsworth wrote:

 I love it when I hear "people don't care about performance anymore,"
 because in my experience that couldn't be further from the truth.  It
 sorta reminds me of the "Apple is dying" argument that crops up every so
 often.  There will probably always be a market for Apple, and there will
 always be a market for performance.
I 100% agree that extreme performance is needed for solving the problems in your domain. It's just that these kinds of companies simply don't exist in all countries. I would have to move abroad, and rather far away, to get my hands on that kind of system. I'd love to write demanding software, but on the other hand I like living here, and since even simple web applications pay well, why bother. Here most local companies fill the niche by providing agile localized solutions to clients' problems (i.e. usually localized sites built on Drupal, Joomla, etc., with a couple of integrated in-house proprietary components of less than 5000 LOC). Even the clients want systems whose development requires as little computer science knowledge as possible, to keep the cost low. We usually sell them not only the work done on the site (the source remains closed when sold this way), but also the hosting services (bought from some 3rd party located in the US). Megacorporations like Google could easily take over the market, but instead they focus on globally available services.
Nov 20 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Travis Boucher Wrote:
 
 Fast, highly optimized web code is a very niche market.
I'm not sure it will remain this way for long. Look at social networking sites, where people spend a great deal of their time in what are essentially user-created apps. Make them half as efficient and the "cloud" will need twice the resources to run them.
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Sean Kelly wrote:
 Travis Boucher Wrote:
 Fast, highly optimized web code is a very niche market.
I'm not sure it will remain this way for long. Look at social networking sites, where people spend a great deal of their time in what are essentially user-created apps. Make them half as efficient and the "cloud" will need twice the resources to run them.
I hope it doesn't remain this way. Personally I am sick of fixing broken PHP code, retarded Ruby code, and bad SQL queries. However, the issue isn't the language as much as it is the coders. Easy, powerful languages = stupid coders who do stupid things. D is an easy, powerful language, but it has one aspect which may protect it against stupid coders: it's hard to do stupid things in D. It's harder to create a memory leak in D than it is to prevent one in C. Hell, I've seen Ruby do things which I personally thought were a memory leak at first, only to later realize it was just a poor GC implementation (this is Matz's Ruby, not JRuby or Rubinius). I know stupid coders will always exist, but D promotes good practice without sacrificing performance.
Nov 19 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Nov 19 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
Nov 19 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
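One way to sidestep that hazard for a block like foo above is not to wait for a collection at all: release the block explicitly, so a false pointer can't keep it alive. A minimal sketch, assuming a D2/druntime toolchain with core.memory:

```d
import core.memory : GC;

void doStuff()
{
    auto foo = new uint[100_000_000];
    // ... use foo ...
    // A false pointer elsewhere on the stack or heap could otherwise pin
    // this ~400 MB block indefinitely; freeing it explicitly takes the
    // conservative collector out of the picture.
    GC.free(foo.ptr);
}

void main()
{
    foreach (i; 0 .. 4)
        doStuff();  // memory use stays roughly flat instead of accumulating
}
```

Of course, foo is dangling after the free, so this only works when the block's lifetime really ends there.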
Nov 20 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:

 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Don't uint array allocations have the hasPointers flag set off? I always thought they aren't scanned for pointers (unlike, say, void[]).
Nov 20 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Denis Koroskin (2korden gmail.com)'s article
 On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Don't uint array allocations have the hasPointers flag set off? I always thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
Nov 20 2009
next sibling parent reply Travis Boucher <boucher.travis gmail.com> writes:
dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() {
    uint[] foo = new uint[100_000_000];
}

void main() {
    while(true) {
        doStuff();
    }
}
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close to the metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Aren't uint array allocations have hasPointers flag set off? I always thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue than a language issue, correct? Or is this an issue with all garbage collectors for lower-level languages? I do not know much about GC, just basic concepts.
Nov 20 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
 It's harder
 to create a memory leak in D than it is to prevent one in C.
void doStuff() { uint[] foo = new uint[100_000_000]; } void main() { while(true) { doStuff(); } }
Hmm, that seems like that should be an implementation bug. Shouldn't foo be marked for GC once it scope? (I have never used new on a primitive type, so I don't know)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close to the metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Aren't uint array allocations have hasPointers flag set off? I always thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue than a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
Nov 20 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 20 Nov 2009 19:24:05 +0300, dsimcha <dsimcha yahoo.com> wrote:

 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder to create a memory leak in D than it is to prevent one in C.
void doStuff() { uint[] foo = new uint[100_000_000]; } void main() { while(true) { doStuff(); } }
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Don't uint array allocations have the hasPointers flag set off? I always thought they weren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue than a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
Unions could be deprecated in favor of tagged unions (see an example in Cyclone http://cyclone.thelanguage.org/wiki/Tagged%20Unions). Would that help?
Nov 20 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Denis Koroskin (2korden gmail.com)'s article
 On Fri, 20 Nov 2009 19:24:05 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder to create a memory leak in D than it is to prevent one in C.
void doStuff() { uint[] foo = new uint[100_000_000]; } void main() { while(true) { doStuff(); } }
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Don't uint array allocations have the hasPointers flag set off? I always thought they weren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue than a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
Unions could be deprecated in favor of tagged unions (see an example in Cyclone http://cyclone.thelanguage.org/wiki/Tagged%20Unions). Would that help?
It would be negligible. The idea is that unions of reference and non-reference types are such a corner case that they could be handled conservatively as a special case; then it's possible, at least in principle, to deal with the other 99.99999% of cases precisely, and being conservative in the remaining 0.00001% is of no practical significance. Keep in mind that we would need the ability to pin and scan conservatively anyhow, since a systems language must allow allocation of untyped blocks of memory. I guess what I should have said is that the SafeD subset can be made 100% precise, and D as a whole can be made about 99+% precise.
Nov 20 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 It would be negligible.  The idea is that unions of reference and non-reference
 types are such a corner case that they could be handled conservatively as a
 special case, and then it's possible, at least in principle, to deal with the
 other 99.99999% of cases precisely and being conservative in 0.00001% of cases
is
 really of no practical significance.
Yes, but a moving GC needs to be 100% precise, not 99.99999%. -- Rainer Deyke - rainerd eldwood.com
Nov 20 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Rainer Deyke (rainerd eldwood.com)'s article
 dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 It would be negligible.  The idea is that unions of reference and non-reference
 types are such a corner case that they could be handled conservatively as a
 special case, and then it's possible, at least in principle, to deal with the
 other 99.99999% of cases precisely and being conservative in 0.00001% of cases
is
 really of no practical significance.
Yes, but a moving GC needs to be 100% precise, not 99.99999%.
Not if you allow pinning, which we'd need anyhow for untyped, conservatively scanned memory blocks.
Nov 20 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
dsimcha wrote:
 == Quote from Rainer Deyke (rainerd eldwood.com)'s article
 Yes, but a moving GC needs to be 100% precise, not 99.99999%.
Not if you allow pinning, which we'd need anyhow for untyped, conservatively scanned memory blocks.
If you allow pinning then you no longer get the full benefits of a moving gc. It would be nice to be able to trade untyped, conservatively scanned memory blocks for a better gc. -- Rainer Deyke - rainerd eldwood.com
Nov 20 2009
next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Sat, 21 Nov 2009 05:57:48 +0300, Rainer Deyke <rainerd eldwood.com>  
wrote:

 dsimcha wrote:
 == Quote from Rainer Deyke (rainerd eldwood.com)'s article
 Yes, but a moving GC needs to be 100% precise, not 99.99999%.
Not if you allow pinning, which we'd need anyhow for untyped, conservatively scanned memory blocks.
If you allow pinning then you no longer get the full benefits of a moving gc. It would be nice to be able to trade untyped, conservatively scanned memory blocks for a better gc.
Pinning is a must-have if you want to communicate with code written in other languages (C, for example). Casting from Object to void* would be forbidden; use void* pinnedAddress = GC.pin(obj); (or similar) instead.
Nov 21 2009
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Rainer Deyke wrote:

 dsimcha wrote:
 == Quote from Rainer Deyke (rainerd eldwood.com)'s article
 Yes, but a moving GC needs to be 100% precise, not 99.99999%.
Not if you allow pinning, which we'd need anyhow for untyped, conservatively scanned memory blocks.
If you allow pinning then you no longer get the full benefits of a moving gc. It would be nice to be able to trade untyped, conservatively scanned memory blocks for a better gc.
Is it possible to allocate 'pinnable' objects from a different heap and still have your normal objects managed by an optimal moving gc?
Nov 21 2009
prev sibling parent Travis Boucher <boucher.travis gmail.com> writes:
Denis Koroskin wrote:
 On Fri, 20 Nov 2009 19:24:05 +0300, dsimcha <dsimcha yahoo.com> wrote:
 
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Denis Koroskin (2korden gmail.com)'s article
On Fri, 20 Nov 2009 17:28:07 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 dsimcha wrote:
 == Quote from Travis Boucher (boucher.travis gmail.com)'s article
 Sean Kelly wrote:
  It's harder to create a memory leak in D than it is to prevent one in C.
void doStuff() { uint[] foo = new uint[100_000_000]; } void main() { while(true) { doStuff(); } }
Hmm, that seems like it should be an implementation bug. Shouldn't foo be marked for GC once it goes out of scope? (I have never used new on a primitive type, so I don't know.)
It's conservative GC. D's GC, along with the Hans Boehm GC and probably most GCs for close-to-the-metal languages, can't perfectly identify what's a pointer and what's not. Therefore, for sufficiently large allocations there's a high probability that some bit pattern that looks like a pointer but isn't one will keep the allocation alive long after there are no "real" references to it left.
Don't uint array allocations have the hasPointers flag set off? I always thought they weren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue than a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
Unions could be deprecated in favor of tagged unions (see an example in Cyclone http://cyclone.thelanguage.org/wiki/Tagged%20Unions). Would that help?
Probably not, since the bit pattern of int i could still match a valid pointer:

Foo.i = cast(int)&Foo; // for bad-practice ugliness

or

Foo.i = (some expression that happens to equal &Foo);

Adding extra information to a union could also have the bad side effect of killing performance, as writes would include an extra write, and additional memory would be required (which would raise another set of issues around alignment).
Nov 20 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
dsimcha, el 20 de noviembre a las 16:24 me escribiste:
 Right, but they can still be the target of false pointers.  In this case, false
 pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue then a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
I don't think so if you want to be able to link to C code, unless I'm missing something... -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- "Lidiar" is not the same as "holguear", since "lidiar" relates to "lidia" and "holguear" relates to "olga". -- Ricardo Vaporeso
Nov 20 2009
parent reply Travis Boucher <boucher.travis gmail.com> writes:
Leandro Lucarella wrote:
 dsimcha, el 20 de noviembre a las 16:24 me escribiste:
 Right, but they can still be the target of false pointers.  In this case, false
 pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue then a language issue, correct?
Yes.
 Or is this an issue of all lower level language garbage
 collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
I don't think so if you want to be able to link to C code, unless I'm missing something...
The extern (C) stuff and malloc-allocated memory aren't garbage collected.
Nov 20 2009
parent Leandro Lucarella <llucax gmail.com> writes:
Travis Boucher, el 20 de noviembre a las 16:45 me escribiste:
 Leandro Lucarella wrote:
dsimcha, el 20 de noviembre a las 16:24 me escribiste:
Right, but they can still be the target of false pointers.  In this case, false
pointers keep each instance of foo[] alive, leading to severe memory leaks.
But the issue is more of a GC implementation issue then a language issue, correct?
Yes.
Or is this an issue of all lower level language garbage
collectors?
Kinda sorta. It's possible, but not easy, to implement fully precise GC (except for the extreme corner case of unions of reference and non-reference types) in a close to the metal, statically compiled language.
I don't think so if you want to be able to link to C code, unless I'm missing something...
The extern (C) stuff and malloc allocated memory isn't garbage collected.
I know, but the stack is used as a root of the live data, and if you use C code, you will have frames without type information. So you will never get a fully precise root set, unless you find some way to separate the D stack from the C stack and completely ignore the C stack. Again, unless I'm missing something :) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- - Look, Don Inodoro! A pigeon with a ring on its leg! It must be a carrier pigeon, and it landed here! - Well... if it's not a carrier, it's flirty... or married. -- Mendieta and Inodoro Pereyra
Nov 20 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"dsimcha" <dsimcha yahoo.com> wrote in message 
news:he6aah$4d6$1 digitalmars.com...
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 Aren't uint array allocations have hasPointers flag set off? I always
 thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
I don't suppose there's a way to look up the pointers the GC believes it has found to a given piece of GC-ed memory? Sounds like that would be very useful, if not essential, for debugging/optimizing memory usage.
Nov 21 2009
parent Travis Boucher <boucher.travis gmail.com> writes:
Nick Sabalausky wrote:
 "dsimcha" <dsimcha yahoo.com> wrote in message 
 news:he6aah$4d6$1 digitalmars.com...
 == Quote from Denis Koroskin (2korden gmail.com)'s article
 Aren't uint array allocations have hasPointers flag set off? I always
 thought they aren't scanned for pointers (unlike, say, void[]).
Right, but they can still be the target of false pointers. In this case, false pointers keep each instance of foo[] alive, leading to severe memory leaks.
I don't suppose there's a way to lookup the pointers the GC believes it has found to a given piece of GC-ed memory? Sounds like that would be very useful, if not essential, for debugging/optimizing memory usage.
Maybe extend the GC interface so the compiler and language in general will give hints on what the memory is being used for. This could even be extended to application code as well. MEM_OBJECT, MEM_STRUCT, MEM_PTRARRY, MEM_ARRAY, etc. (I haven't fully thought this through so these examples may be bad). Then the GC implementations can decide how to allocate the memory in the best way for the underlying architecture. I know this would be useful on weird memory layouts found in embedded machines (NDS for example), but could also be extended language-wise to other hardware memory areas. For example, allocating memory on video cards or DSP hardware. Like I said, this isn't something I have thought through much, and I don't know how much (if any) compiler/GC interface support would be required.
Nov 21 2009
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
dsimcha Wrote:
 
Yes, and similarly, when I write code to do some complicated processing of gene expression data or DNA sequences, and it uses RAM measured in gigabytes, I go to similar lengths to avoid GC for similar reasons. (That and false pointers.) It's not unique to server space. The reason I still use D instead of C or C++ is because, even if I'm using every hack known to man to avoid GC, it's still got insane metaprogramming capabilities, and it's still what std.range and std.algorithm are written in.
In our case, we're running on machines with 64 GB or more of physical RAM, and using all of it. I think we could get away with GC for certain processing where it's convenient so long as we had per-thread GCs and used malloc/free for the bulk of our data (feasible, since most everything has a deterministic lifetime). D seems like a great language in this respect, since it doesn't require GC use for every allocation.
Nov 20 2009
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
 
 For most applications/libraries, forking means death.  But look at the 
 cases of bind (DNS), sendmail (smtp), and even Apache (and its NCSA 
 roots).  These implementations of their respective protocols are still 
 the 'standard' and 'reference' implementations, they still have a huge 
 installation, and are still see active development.
These are 'reference' implementations largely because no one follows the RFCs (something I was very frustrated to discover recently). I sincerely hope that other D compilers follow the spec rather than some potentially divergent implementation.
 However, their alternatives in many cases offer better support, features 
 and/or speed (not to mention security, especially in the case of bind 
 and sendmail).
I'd personally rather have a unified user base than have to re-code things based on which compiler I targeted. I really don't care if one compiler can do some nifty thing the others can't. That web programmers need different code for each browser is utterly ridiculous. If D went that route I'd be back to C++ in a heartbeat.
Nov 22 2009