
digitalmars.D - Do everything in Java…

reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
It's an argument for Java over Python specifically but a bit more
general in reality. This stood out for me:


"…other languages like D and Go are too new to bet my work on."


http://www.teamten.com/lawrence/writings/java-for-everything.html


--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 04 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via 
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more general in reality.
A fun read, and I see his POV. It is a pity Python does not include some static typing, but I think he undervalues the access to a REPL! I think Swift is being a bit innovative here by having a REPL built into the debugger. Good move; wish I had a project that was suitable for it (but requiring iOS 8 makes it DOA for now).

For speed… I dunno. In the cloud you can run Python on 10 instances with little effort, so 10x faster is often not so important if development is slower. Cloud computing has changed my perception of speed: if you can partition the problem then Python is fast enough for low-frequency situations…

I think the main benefit of prototype-based dynamic languages like JavaScript is forward compatibility and mixed-type containers. By being able to "patch" the prototype you can make IE9 support new functionality by emulating newer features like "classList"… That's pretty nice. Java in the browser turned out to be a disaster…
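The prototype "patching" being described, retrofitting a newer API onto already-existing objects, can be sketched roughly like this. A hypothetical `LegacyElement` class stands in for an old browser object; this shows the mechanism, not the actual IE9 classList shim:

```typescript
// Hypothetical stand-in for an object from an older runtime that
// lacks a newer convenience API.
class LegacyElement {
  className = "";
}

// Declaration merging announces the method we are about to add.
interface LegacyElement {
  hasClass(name: string): boolean;
}

// The "patch": bolted onto the prototype at runtime, so every existing
// and future instance picks it up -- the same trick classList-style
// polyfills use on old browsers.
LegacyElement.prototype.hasClass = function (
  this: LegacyElement,
  name: string
): boolean {
  return this.className.split(/\s+/).indexOf(name) !== -1;
};

const el = new LegacyElement();
el.className = "btn active";
console.log(el.hasClass("active")); // true
```

Consumers written against the newer API keep working unchanged once the patch is loaded, which is the forward-compatibility point being made.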
 This stood out for me:


 "…other languages like D and Go are too new to bet my work on."
I did not find that odd, they are not perceived as stable and proven. Go is still working on finding the right GC solution.
Dec 04 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 4 December 2014 at 14:12:34 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via 
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more general in reality.
A fun read, and I see his POV. It is a pity Python does not include some static typing, but I think he undervalues the access to a REPL! I think Swift is being a bit innovative here by having a REPL built into the debugger. Good move; wish I had a project that was suitable for it (but requiring iOS 8 makes it DOA for now).

For speed… I dunno. In the cloud you can run Python on 10 instances with little effort, so 10x faster is often not so important if development is slower. Cloud computing has changed my perception of speed: if you can partition the problem then Python is fast enough for low-frequency situations…
I'd rather pay for just one instance.

Honestly, I could never see a use for Python outside shell scripting. And I was a heavy user of it during my stay at CERN, and at later companies, for build and test automation.
Dec 04 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 4 December 2014 at 14:25:52 UTC, Paulo  Pinto wrote:
 I'd rather pay for just one instance.
That depends. What makes Go and Python attractive on AppEngine is the fast spin-up time: you only pay for 15 minutes, and it scales up to 100 instances transparently. With Java you need multiple idle instances 24/7 because the spin-up is slow.
 Honestly, I could never see a use for Python outside shell 
 scripting.
Not having static typing is a weakness, but not as bad as I thought it would be once you learn how to deal with it. Dropbox likes Python enough to develop a JIT for it, according to this blog:

https://tech.dropbox.com/2014/04/introducing-pyston-an-upcoming-jit-based-python-implementation/

So I'd say it all depends.
Dec 04 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 4 December 2014 at 14:40:10 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 4 December 2014 at 14:25:52 UTC, Paulo  Pinto 
 wrote:
 I'd rather pay for just one instance.
That depends. What makes Go and Python attractive on AppEngine is the fast spin-up time: you only pay for 15 minutes, and it scales up to 100 instances transparently. With Java you need multiple idle instances 24/7 because the spin-up is slow.
 Honestly, I could never see a use for Python outside shell 
 scripting.
Not having static typing is a weakness, but not as bad as I thought it would be once you learn how to deal with it. Dropbox likes Python enough to develop a JIT for it, according to this blog:

https://tech.dropbox.com/2014/04/introducing-pyston-an-upcoming-jit-based-python-implementation/

So I'd say it all depends.
PyPy now has 10 years of research behind it, and it still doesn't support all Python features.

I am aware of Dropbox's efforts. Let's see if they go the Unladen Swallow direction or not.
Dec 04 2014
parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 4 December 2014 at 15:04:44 UTC, Paulo  Pinto wrote:
 On Thursday, 4 December 2014 at 14:40:10 UTC, Ola Fosheim 
 Grøstad wrote:

 PyPy now has 10 years of research behind it, and it still 
 doesn't support all Python features.
Armin Rigo is a smart guy, but, well, some things are really a no-go in Python.

---
Paolo
Dec 04 2014
prev sibling next sibling parent reply Shammah Chancellor <anonymous coward.com> writes:
On 2014-12-04 14:12:32 +0000, Ola Fosheim Grøstad said:

 I did not find that odd, they are not perceived as stable and proven. 
 Go is still working on finding the right GC solution.
There are quite a few companies using Go in production.

-S.
Dec 04 2014
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Friday, 5 December 2014 at 07:33:21 UTC, Shammah Chancellor 
wrote:
 On 2014-12-04 14:12:32 +0000, Ola Fosheim Grøstad said:

 I did not find that odd, they are not perceived as stable and 
 proven. Go is still working on finding the right GC solution.
There are quite a few companies using Go in production.
Yes, but I will not consider Go ready for production until they are out of Beta on Google App Engine. Google has to demonstrate that they believe in the capability of their own language ;-). https://cloud.google.com/appengine/docs/go/
Dec 05 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 5 December 2014 at 08:08:13 UTC, Ola Fosheim Grøstad
wrote:
 On Friday, 5 December 2014 at 07:33:21 UTC, Shammah Chancellor 
 wrote:
 On 2014-12-04 14:12:32 +0000, Ola Fosheim Grøstad said:

 I did not find that odd, they are not perceived as stable and 
 proven. Go is still working on finding the right GC solution.
There are quite a few companies using Go in production.
Yes, but I will not consider Go ready for production until they are out of Beta on Google App Engine. Google has to demonstrate that they believe in the capability of their own language ;-). https://cloud.google.com/appengine/docs/go/
Go is more mature than D. They have at least 2 implementations and a well fleshed-out spec. Granted, it is easier in Go as the language is smaller.
Dec 05 2014
parent "Ola Fosheim Grøstad" writes:
On Friday, 5 December 2014 at 23:09:56 UTC, deadalnix wrote:
 Go is more mature than D. They have at least 2 implementations 
 and a well fleshed-out spec.

 Granted, it is easier in Go as the language is smaller.
Yes, the Go feature set is relatively mature and they have stable releases, but Google has advertisers as their customers, so they can drop tools any time they feel like with no effect on customers. Chrome, Dart and PNaCl are all about securing Google Search; providing tools for developers is a by-product…

I don't trust Google until they commit to making Go available as a supported tool to customers that they make revenue from. Google will probably evolve Go until it fits their own needs, but once they make it available as a supported tool on AppEngine they will have to commit to maintaining it as a stable release. It does say something that it has been in experimental/beta state for years on AppEngine.

At CppCon, representatives from Google clearly stated that they could not see Go replacing C++ any time soon. Overall that sounds like Google internally does not view Go as complete, even though the Go authors think it is…
Dec 06 2014
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Friday, 5 December 2014 at 07:33:21 UTC, Shammah Chancellor 
wrote:
 On 2014-12-04 14:12:32 +0000, Ola Fosheim Grøstad said:

 I did not find that odd, they are not perceived as stable and 
 proven. Go is still working on finding the right GC solution.
There are quite a few companies using Go in production. -S.
Yes there are, but for me using Go in production means it is listed as the required language in a Request For Proposal document.

--
Paulo
Dec 05 2014
prev sibling next sibling parent Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, Dec 4, 2014 at 6:12 AM, via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 For speed... I dunno. In the cloud you can run Python on 10 instances with
 little effort,
But if a single instance suffices, why would you?

Probably not a popular opinion, but we should think more about resources and power usage, even if they're "cheap". Convenience is not everything. As engineers, we have duties and responsibilities toward the community and the environment. I am not a fan of the throw-servers-at-it-until-it-works approach.
Dec 05 2014
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Dec 05, 2014 at 11:41:28AM -0800, Ziad Hatahet via Digitalmars-d wrote:
 On Thu, Dec 4, 2014 at 6:12 AM, via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:
 
 For speed... I dunno. In the cloud you can run Python on 10
 instances with little effort,
But if a single instance suffices, why would you?

Probably not a popular opinion, but we should think more about resources and power usage, even if they're "cheap". Convenience is not everything. As engineers, we have duties and responsibilities toward the community and the environment. I am not a fan of the throw-servers-at-it-until-it-works approach.
I agree. It's not just about conservation of resources and power, though. It's also about maximizing the utility of our assets and extending our reach.

If I were a business and I invested $10,000 in servers, wouldn't I want to maximize the amount of computation I can get from these servers before I need to shell out money for more servers?

There are also certain large computational problems that basically need every last drop of juice you can get in order to have any fighting chance to solve them. In such cases you'd want to get as far as possible in refining approximate (or partial) solutions before giving up. If it were up to me, I'd want to squeeze every last drop out of every last server I can ever afford to buy, since otherwise I might not be able to go as far as I could have due to too many resources being wasted on unnecessary or inefficient processes.

But apparently, in these days of economic downturn, we are still wallowing in enough cash that throwing more servers at the problem is still a viable business strategy, and not maximizing what we can get given what we have is an acceptable compromise. *shrug*

T

--
Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald Knuth
Dec 05 2014
parent reply "Ola Fosheim Grøstad" writes:
On Friday, 5 December 2014 at 20:32:54 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I agree. It's not just about conservation of resources and 
 power,
 though. It's also about maximizing the utility of our assets and
 extending our reach.

 If I were a business and I invested $10,000 in servers, 
 wouldn't I want
 to maximize the amount of computation I can get from these 
 servers
 before I need to shell out money for more servers?
Those $10,000 in servers are a small investment compared to the cost of the in-house IT department to run them… Which is why the cloud makes sense. Why have all that unused capacity in-house (say >90% idle over 24/7) and pay someone to make it work, when you can put it in the cloud where you get load balancing, have a 99.999% stable environment and can cut down on the IT staff?
 There are also certain large computational problems that 
 basically need
 every last drop of juice you can get in order to have any 
 fighting
 chance to solve them.
Sure, but then you should run it on SIMD processors (GPUs) anyway. And if you only run a couple of times a month, it still makes sense to run it on more servers using map-reduce in the cloud where you only pay for CPU time.

The only situation where you truly need dedicated servers is where you have real time requirements, a constant high load or where you need a lot of RAM because you cannot partition the dataset.
Dec 05 2014
next sibling parent "Ola Fosheim Grøstad" writes:
On Friday, 5 December 2014 at 21:21:49 UTC, Ola Fosheim Grøstad 
wrote:
 The only situation where you truly need dedicated servers is 
 where you have real time requirements, a constant high load or 
 where you need a lot of RAM because you cannot partition the 
 dataset.
Btw, in most cases the last point does not apply. Compute Engine has a 16-core/104 GB option, and I would be surprised if Azure and Amazon don't have a similar offer. You pay for at least 10 minutes, and after that per minute, at 0.8-1.2 USD/hour.

So if the computation has to run for 30 minutes on 30 instances (approx. the CPU power of 480 Sandy Bridge cores and 3 TB of RAM) it will cost you ~18 USD.
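The arithmetic behind that figure, using the rates quoted above (the minimum-billing and per-minute details are simplified away here):

```typescript
// 30 instances running for 30 minutes, priced at the top of the
// quoted 0.8-1.2 USD/hour range.
const instances = 30;
const hours = 30 / 60;           // a 30-minute job
const usdPerInstanceHour = 1.2;  // upper end of the quoted rate

const totalUsd = instances * hours * usdPerInstanceHour;
console.log(totalUsd); // 18 -- matching the ~18 USD estimate
```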
Dec 05 2014
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 5 December 2014 at 21:21:49 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 5 December 2014 at 20:32:54 UTC, H. S. Teoh via 
 Digitalmars-d wrote:
 I agree. It's not just about conservation of resources and 
 power,
 though. It's also about maximizing the utility of our assets 
 and
 extending our reach.

 If I were a business and I invested $10,000 in servers, 
 wouldn't I want
 to maximize the amount of computation I can get from these 
 servers
 before I need to shell out money for more servers?
Those $10,000 in servers are a small investment compared to the cost of the in-house IT department to run them… Which is why the cloud makes sense. Why have all that unused capacity in-house (say >90% idle over 24/7) and pay someone to make it work, when you can put it in the cloud where you get load balancing, have a 99.999% stable environment and can cut down on the IT staff?
 There are also certain large computational problems that 
 basically need
 every last drop of juice you can get in order to have any 
 fighting
 chance to solve them.
Sure, but then you should run it on SIMD processors (GPUs) anyway. And if you only run a couple of times a month, it still makes sense to run it on more servers using map-reduce in the cloud where you only pay for CPU time.

The only situation where you truly need dedicated servers is where you have real time requirements, a constant high load or where you need a lot of RAM because you cannot partition the dataset.
Big simulations still benefit from dedicated clusters. Good performance often requires uniformly extremely low latencies between nodes, as well as the very fastest in distributed storage (read *and* write).

P.S. GPUs are not a panacea for all HPC problems. For example, RDMA is only a recent thing for GPUs across different nodes. In general there is a communication bandwidth and latency issue: the more power you pack in each compute unit (GPU or CPU or whatever), the more bandwidth you need connecting them.
Dec 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 6 December 2014 at 09:24:57 UTC, John Colvin wrote:
 Big simulations still benefit from dedicated clusters. Good 
 performance often requires uniformly extremely low latencies 
 between nodes, as well as the very fastest in distributed 
 storage (read *and* write).
The question is not performance between nodes if you can partition the dataset (which I made a requirement), but how much you pay in total for getting the job done. So you can have inefficiency and still profit by renting CPU time, because the total cost of ownership from having a local under-utilized server farm can be quite high.

But if the simulation requires a NUMA-like architecture… then you don't have a dataset that you can partition and solve in a map-reduce style.
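The partition-then-combine shape being assumed here can be sketched as follows, with a hypothetical summing workload and each chunk standing in for the work a separately rented instance would do:

```typescript
// Split a dataset into roughly equal chunks.
function partition<T>(xs: T[], parts: number): T[][] {
  const size = Math.ceil(xs.length / parts);
  return Array.from({ length: parts }, (_, i) =>
    xs.slice(i * size, (i + 1) * size)
  );
}

const data = Array.from({ length: 1000 }, (_, i) => i + 1);

// "Map": each chunk is processed independently, so the chunks could
// run on separate cloud instances with no cross-talk.
const partials = partition(data, 10).map((chunk) =>
  chunk.reduce((a, b) => a + b, 0)
);

// "Reduce": one cheap final combine step.
const total = partials.reduce((a, b) => a + b, 0);
console.log(total); // 500500, i.e. the sum 1..1000
```

The point of the requirement in the post: this only works because no chunk needs to see another chunk's data mid-computation.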
 P.S. GPUs are not a panacea for all HPC problems. For example, 
 RDMA is only a recent thing for GPUs across different nodes. In 
 general there is a communication bandwidth and latency issue: 
 the more power you pack in each compute unit (GPU or CPU or 
 whatever), the more bandwidth you need connecting them.
HPC is a special case, and different architectures will suit different problems, so you have to tailor the hardware architecture to the problems you want to solve; but then we are not talking $10,000 servers…

If you need RDMA, then you are basically in NUMA land, which is not really suitable for a generic cloud solution in the first place?
Dec 06 2014
parent "Ola Fosheim Grøstad" writes:
On Saturday, 6 December 2014 at 12:04:48 UTC, Ola Fosheim Grøstad 
wrote:
 not talking $10,000 servers… If you need RDMA, then you are 
 basically in NUMA land, which is not really suitable for a 
 generic cloud solution in the first place?
Actually, Microsoft Azure provides InfiniBand RDMA at 4.47 USD/hour for their A8 and A9 nodes:

http://azure.microsoft.com/en-us/pricing/details/virtual-machines/#Linux
Dec 06 2014
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via 
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 !…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
I stand by him. Since 2004, our projects are either pure Java stacks or .NET stacks, depending on the customer.

When people compare new languages against Java, .NET and friends, they always forget how rich those eco-systems are in terms of tooling. Go, D and Rust might win over the poor tooling C and C++ developers have, but not over the richness the Java and .NET worlds enjoy in application monitoring, IDEs and libraries.

Now, with the official Java and .NET SDKs supporting AOT compilation, instead of forcing developers to buy commercial AOT compilers, the eco-systems are even better.

This is why, at least in my area of work, enterprise consulting, it is very hard to sell alternatives to the JVM and .NET eco-systems, like D. It is a world that left C++ behind in the mid-2000s and fully embraced GC-based languages and their eco-systems.

Being just better than C++ isn't enough.

--
Paulo
Dec 04 2014
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 12/4/14, 10:47 AM, Russel Winder via Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
Very interesting read.

But the world of humans still has time to grow and evolve, and humans always try to do better; you can't stop that. He says Java is verbose and "so what?". Well, couldn't it be less verbose and still be that good? Could you be very DRY (Don't Repeat Yourself) in a language that's statically typed, but with good type inference and very good performance, superior to that of VM languages? Yes, you can. You shouldn't stop there. OK, use Java now, but don't stop there. Try to think of new ideas, new languages. At least as a hobby.

If Python makes you happy and Java not, but Java gets the work done, who cares? I don't want to spend my time in the world being unhappy but doing work (which probably isn't for my own utility, and probably isn't for anyone's *real* utility); I'd rather be happy.

Just my 2 cents :-)
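Static typing with inference keeping things DRY can be illustrated briefly. TypeScript is used here purely as an illustration of the inference point, not of the performance claim, and the data is made up:

```typescript
// No type annotation is written anywhere below, yet everything is
// statically checked: `scores` is inferred as Map<string, number[]>
// and `averages` as Map<string, number> -- no Java-style
// Map<String, List<Integer>> spelled out on both sides.
const scores = new Map([
  ["alice", [90, 85]],
  ["bob", [70]],
]);

const averages = new Map(
  Array.from(scores, ([name, xs]) =>
    [name, xs.reduce((a, b) => a + b, 0) / xs.length] as [string, number]
  )
);

console.log(averages.get("alice")); // 87.5
// A mistake like averages.get(42) is rejected at compile time.
```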
Dec 04 2014
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 12/4/14, 2:11 PM, Ary Borenszweig wrote:
 On 12/4/14, 10:47 AM, Russel Winder via Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
Very interesting read.

But the world of humans still has time to grow and evolve, and humans always try to do better; you can't stop that. He says Java is verbose and "so what?". Well, couldn't it be less verbose and still be that good? Could you be very DRY (Don't Repeat Yourself) in a language that's statically typed, but with good type inference and very good performance, superior to that of VM languages? Yes, you can. You shouldn't stop there. OK, use Java now, but don't stop there. Try to think of new ideas, new languages. At least as a hobby.

If Python makes you happy and Java not, but Java gets the work done, who cares? I don't want to spend my time in the world being unhappy but doing work (which probably isn't for my own utility, and probably isn't for anyone's *real* utility); I'd rather be happy.

Just my 2 cents :-)
Like, cool, Java helped Twitter improve their search engine. Yes, Twitter has some real value for humanity.
Dec 04 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 04 Dec 2014 14:12:48 -0300
Ary Borenszweig via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Twitter has some real value for humanity.
( /me eats his cigarette )
Dec 04 2014
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via 
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
This crap is told so often it is not even interesting anymore.
Dec 04 2014
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
High risk, high reward.
Dec 04 2014
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
Also relevant: http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
Dec 04 2014
next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 05 Dec 2014 02:39:49 +0000
deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
 Also relevant: 
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
i didn't make it past the contents. too hard for silly me.
Dec 04 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Dec 05, 2014 at 04:49:02AM +0200, ketmar via Digitalmars-d wrote:
 On Fri, 05 Dec 2014 02:39:49 +0000
 deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
i didn't make it past the contents. too hard for silly me.
Whoa. Thanks for the link -- I was actually at some point considering maybe to get into the Java field instead of being stuck with C/C++ at work, but after reading that page, I was completely dispelled of the notion. I think I would lose my sanity after 5 minutes of clicking through those endless submenus, typing out XML by hand (argh), and writing 50 pages of Java legalese and setting up 17 pieces of scaffolding just to get a Hello World program to run.

Whoa! I think I need therapy just skimming over that page. This is sooo over-engineered it's not even funny. For all their flaws, C/C++ at least doesn't require that level of inanity...

But of course, if I could only write D at my job, that'd be a whole lot different... :-P

T

--
Trying to define yourself is like trying to bite your own teeth. -- Alan Watts
Dec 05 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Friday, 5 December 2014 at 13:14:52 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Fri, Dec 05, 2014 at 04:49:02AM +0200, ketmar via 
 Digitalmars-d wrote:
 On Fri, 05 Dec 2014 02:39:49 +0000
 deadalnix via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:
[...]
 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
i didn't make it past the contents. too hard for silly me.
Whoa. Thanks for the link -- I was actually at some point considering maybe to get into the Java field instead of being stuck with C/C++ at work, but after reading that page, I was completely dispelled of the notion. I think I would lose my sanity after 5 minutes of clicking through those endless submenus, typing out XML by hand (argh), and writing 50 pages of Java legalese and setting up 17 pieces of scaffolding just to get a Hello World program to run.

Whoa! I think I need therapy just skimming over that page. This is sooo over-engineered it's not even funny. For all their flaws, C/C++ at least doesn't require that level of inanity...

But of course, if I could only write D at my job, that'd be a whole lot different... :-P

T
Modern JEE is quite different from that tutorial. Besides, you don't use JEE for Hello World, but rather for distributed applications. C/C++ don't provide half the tools that allow JEE to scale across a cluster, nor the respective monitoring infrastructure.

JEE is the evolution of distributed CORBA applications in the enterprise, with .NET enterprise applications being the evolution of DCOM. Both games that C++ lost its place at.

--
Paulo
Dec 05 2014
parent reply "Jonathan" <jadit2 gmail.com> writes:
 JEE is the evolution of distributed CORBA applications in the 
 enterprise, with .NET enterprise applications being the 
 evolution of DCOM.

 Both games that C++ lost its place at.
What about ZeroMQ with C++, or even resorting to simple internal REST protocols? I've yet to see a valid argument that DCOM (not sure about CORBA) offers a tangible benefit over simpler approaches. Thoughts?
Dec 05 2014
parent reply "paulo pinto" <pjmlp progtools.org> writes:
On Friday, 5 December 2014 at 18:46:12 UTC, Jonathan wrote:
 JEE is the evolution of distributed CORBA applications in the 
 enterprise, with .NET enterprise applications being the 
 evolution of DCOM.

 Both games that C++ lost its place at.
What about ZeroMQ with C++, or even resorting to simple internal REST protocols? I've yet to see a valid argument that DCOM (not sure about CORBA) offers a tangible benefit over simpler approaches. Thoughts?
I have yet to encounter any project using ZeroMQ.

The whole issue is the infrastructure you can get from such eco-systems for large-scale deployments. For example, you get a standard way across multiple operating systems to handle:

- message queues, including mainframe systems
- monitoring
- scheduling
- user security, including integration with existing systems and multiple authentication levels
- database drivers
- packaging applications and delivering them across the cluster
- load balancing schemes
- web development frameworks
- batch processing
- ORMs
- meta-programming
- cluster-based cache systems
- web APIs

In C++ you would need to cherry-pick different sets of libraries, without guarantees of compatibility across them, and with different semantics. And they still wouldn't cover the whole functionality. Then you will be fighting compilation times, memory errors and so on.

--
Paulo
Dec 05 2014
parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2014-12-05 at 20:56 +0000, paulo pinto via Digitalmars-d wrote:
 
[…]
 I have yet to encounter any project using ZeroMQ.
[…]

Many of the financial institutions in London are using ZeroMQ for a lot of their projects. It does its job very well.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 06 2014
prev sibling parent Mike Parker <aldacron gmail.com> writes:
On 12/5/2014 10:12 PM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Dec 05, 2014 at 04:49:02AM +0200, ketmar via Digitalmars-d wrote:
 On Fri, 05 Dec 2014 02:39:49 +0000
 deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
i didn't make it past the contents. too hard for silly me.
Whoa. Thanks for the link -- I was actually at some point considering maybe to get into the Java field instead of being stuck with C/C++ at work, but after reading that page, I was completely dispelled of the notion. I think I would lose my sanity after 5 minutes of clicking through those endless submenus, typing out XML by hand (argh), and writing 50 pages of Java legalese and setting up 17 pieces of scaffolding just to get a Hello World program to run.

Whoa! I think I need therapy just skimming over that page. This is sooo over-engineered it's not even funny. For all their flaws, C/C++ at least doesn't require that level of inanity...
I really don't think a Hello World example is representative of the usefulness of Java on the web. I don't see it as being over-engineered at all (though that is a disease Java programmers are often afflicted with). The XML configuration allows you to be portable across web containers and application servers, while using only the bits of the JEE specification that you need.

Anyone doing serious Java web dev, from servlets to full-blown JEE stacks, is going to be using a Java IDE that generates much of what is needed anyway, and will only need to tweak the config files for customization.

I've done Java backends on a contract basis in the past. If I needed to whip up a web app today, I'd still choose Java.
Dec 06 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2014-12-05 at 05:12 -0800, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Dec 05, 2014 at 04:49:02AM +0200, ketmar via Digitalmars-d wrote:
 On Fri, 05 Dec 2014 02:39:49 +0000
 deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
 i didn't make it past the contents. too hard for silly me.
Whoa. Thanks for the link -- I was actually at some point considering maybe to get into the Java field instead of being stuck with C/C++ at work, but after reading that page, I was completely dispelled of the notion. I think I would lose my sanity after 5 minutes of clicking through those endless submenus, typing out XML by hand (argh), and writing 50 pages of Java legalese and setting up 17 pieces of scaffolding just to get a Hello World program to run. Whoa! I think I need therapy just skimming over that page. This is sooo over-engineered it's not even funny. For all their flaws, C/C++ at least doesn't require that level of inanity...

But of course, if I could only write D at my job, that'd be a whole lot different... :-P
Hopefully this is all being stated in jest, since anyone considering using JavaEE for a Hello World micro-service is either trying to introduce people to the JavaEE workflow for big applications or has a deep agenda, possibly involving Spring Boot or general hatred of Java. As a counter example let us consider Ratpack, where the complete Hello World micro-service (*) is coded as:

get("/") {
    "Hello, World!"
}

(*) This term is now mandatory for fashion reasons.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 05 2014
prev sibling next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Dec 05, 2014 at 07:52:24PM +0000, Russel Winder via Digitalmars-d wrote:
 On Fri, 2014-12-05 at 05:12 -0800, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Dec 05, 2014 at 04:49:02AM +0200, ketmar via Digitalmars-d wrote:
 On Fri, 05 Dec 2014 02:39:49 +0000
 deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
i didn't make it past the contents. too hard for silly me.
Whoa. Thanks for the link -- I was actually at some point considering maybe to get into the Java field instead of being stuck with C/C++ at work, but after reading that page, I was completely dispelled of the notion. I think I would lose my sanity after 5 minutes of clicking through those endless submenus, typing out XML by hand (argh), and writing 50 pages of Java legalese and setting up 17 pieces of scaffolding just to get a Hello World program to run. Whoa! I think I need therapy just skimming over that page. This is sooo over-engineered it's not even funny. For all their flaws, C/C++ at least doesn't require that level of inanity... But of course, if I could only write D at my job, that'd be a whole lot different... :-P
Hopefully this is all being stated in jest, since anyone considering using JavaEE for a Hello World micro-service is either trying to introduce people to the JavaEE workflow for big applications or has a deep agenda, possibly involving Spring Boot or general hatred of Java.
It's not so much jest as hyperbole. :-) While I'm sure J2EE (or whatever the correct acronym is these days) has its uses, otherwise it would quickly cease to exist, it violates the principle of easy things being easy and hard things being possible. No doubt it makes hard things possible, but easy things require an incommensurate amount of effort. That, and the general tendency of Java platforms to require a whole infrastructure of external configuration files and assorted paraphernalia, makes me think twice about stepping in that direction. Surely there are less tedious ways of accomplishing the same thing!
 As a counter example let us consider Ratpack where the complete Hello
 World micro-service (*) is coded as.
 
 get("/") {
     "Hello, World!"
 }
Yes, and *that* would be what I'd call "easy things are easy, and hard things are possible". Well, I don't have direct evidence of the latter half of the statement, but I'm giving the benefit of the doubt here. :-) On a more serious note, the fact that these alternatives to heavy-weight Java web application platforms are springing up suggests that perhaps my evaluation of J2EE (or whatever it's properly called) may not be completely off-base. No matter how much you try to alleviate the tedium by having fancy IDEs auto-generate everything for you, there's something about simplicity that attracts people. K.I.S.S., and all that. :-)
 (*) This term is now mandatory for fashion reasons.
[...] This statement makes one suspect that perhaps there is some truth to Nick Sabalausky's hyperbole about fashion designers posing as software engineers. ;-) T -- Once the bikeshed is up for painting, the rainbow won't suffice. -- Andrei Alexandrescu
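[Editor's note: the "easy things easy" point needn't even leave the JDK. The built-in com.sun.net.httpserver package, shipped with the JDK since Java 6, can serve a hello-world page in one small file. A minimal sketch, with an invented class name; it fetches its own page once and then shuts down so it terminates cleanly:]

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class Hello {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port on localhost so the example never
        // collides with a real service.
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello, World!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Fetch the page once to show it works, then stop the server.
        int port = server.getAddress().getPort();
        try (InputStream in = new URL("http://127.0.0.1:" + port + "/").openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```

[No XML, no deployment descriptor, no container; whether that scales to "enterprise size" is of course a different question.]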
Dec 05 2014
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-12-05 03:39, deadalnix wrote:

 Also relevant:
 http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
Haha, I saw how small the scroll bar was and didn't bother reading any more than the title. -- /Jacob Carlborg
Dec 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/4/14 6:39 PM, deadalnix wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
Also relevant: http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
Very interesting. Even after all IDE details are factored out, the code is quite convoluted. No wonder Ruby on Rails and friends are so attractive by comparison. -- Andrei
Dec 17 2014
next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 2014-12-17 at 09:09 -0800, Andrei Alexandrescu via Digitalmars-d wrote:
 
[…]
 Very interesting. Even after all IDE details are factored out, the 
 code is quite convoluted. No wonder Ruby on Rails and friends are so 
 attractive by comparison. -- Andrei
For the record, the right tools for lightweight web applications on the JVM are Grails, Ratpack, or Vert.x. JavaEE is for "enterprise size sites" (which is why it has enterprise in the title, I guess :-). Spring as was is still there, but Spring Boot is making a lot of people very happy. I do not do any Web application development, but I know a lot of people who do. BSkyB, for example, do some JavaEE, quite a lot of Spring, and a great deal of Grails.

cf. http://grails.org/doc/2.4.x/guide/gettingStarted.html, http://www.ratpack.io/manual/current/, http://vertx.io/

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 17 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 17 December 2014 at 17:09:34 UTC, Andrei
Alexandrescu wrote:
 On 12/4/14 6:39 PM, deadalnix wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:

 "…other languages like D and Go are too new to bet my work on."

 http://www.teamten.com/lawrence/writings/java-for-everything.html
Also relevant: http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
Very interesting. Even after all IDE details are factored out, the code is quite convoluted. No wonder Ruby on Rails and friends are so attractive by comparison. -- Andrei
Hah. I tried RoR once. I couldn't get the environment set up and running and eventually just gave up.
Dec 17 2014
next sibling parent "Meta" <jared771 gmail.com> writes:
On Wednesday, 17 December 2014 at 22:24:09 UTC, Sean Kelly wrote:
 Hah.  I tried RoR once.  I couldn't get the environment set up
 and running and eventually just gave up.
Getting RoR set up and working for me + 4 people in a Windows environment was absolute hell. I'd never want to go through that again.
Dec 17 2014
prev sibling next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 17 December 2014 at 22:24:09 UTC, Sean Kelly wrote:
 On Wednesday, 17 December 2014 at 17:09:34 UTC, Andrei
 Alexandrescu wrote:
 On 12/4/14 6:39 PM, deadalnix wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder 
 via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:

 "…other languages like D and Go are too new to bet my work on."

 http://www.teamten.com/lawrence/writings/java-for-everything.html
Also relevant: http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
Very interesting. Even after all IDE details are factored out, the code is quite convoluted. No wonder Ruby on Rails and friends are so attractive by comparison. -- Andrei
Hah. I tried RoR once. I couldn't get the environment set up and running and eventually just gave up.
After learning what RoR was about, I lost my interest.

I had been there once back in the early .COM days in a startup that did, let's call it, TCL on Rails. It was inspired by AOLserver, for those who remember it.

Eventually scaling problems made us consider other options, then since we were in a position to have access to early versions of .NET, the decision was made to adopt it.

Almost everything that RoR 1.0 was doing, our TCL framework did as well. Especially the whole ActiveRecord thing.

We just weren't famous.

--
Paulo
Dec 18 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 18 Dec 2014 08:17:47 +0000
Paulo  Pinto via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wednesday, 17 December 2014 at 22:24:09 UTC, Sean Kelly wrote:
 On Wednesday, 17 December 2014 at 17:09:34 UTC, Andrei
 Alexandrescu wrote:
 On 12/4/14 6:39 PM, deadalnix wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:

 "…other languages like D and Go are too new to bet my work on."

 http://www.teamten.com/lawrence/writings/java-for-everything.html
Also relevant: http://wiki.jetbrains.net/intellij/Developing_and_running_a_Java_EE_Hello_World_application
 Very interesting. Even after all IDE details are factored out,
 the code is quite convoluted. No wonder Ruby on Rails and
 friends are so attractive by comparison. -- Andrei
Hah. I tried RoR once. I couldn't get the environment set up and running and eventually just gave up.
After learning what RoR was about, I lost my interest.

I had been there once back in the early .COM days in a startup that did, let's call it, TCL on Rails. It was inspired by AOLserver, for those who remember it.

Eventually scaling problems made us consider other options, then since we were in a position to have access to early versions of .NET, the decision was made to adopt it.

Almost everything that RoR 1.0 was doing, our TCL framework did as well. Especially the whole ActiveRecord thing.

We just weren't famous.
no, you just didn't choose the language that a lot of hipsters like.

p.s. Tcl is nice. it's LISP told without brackets. ;-)
Dec 18 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-12-17 23:24, Sean Kelly wrote:

 Hah.  I tried RoR once.  I couldn't get the environment set up
 and running and eventually just gave up.
I don't know when you tried it last time, but today it's very easy to install:

1. Make sure Ruby is installed
2. $ gem install rails
3. $ rails new foo
4. $ cd foo
5. $ bundle
6. $ rails s

--
/Jacob Carlborg
Dec 18 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 18 December 2014 at 09:20:27 UTC, Jacob Carlborg 
wrote:
 On 2014-12-17 23:24, Sean Kelly wrote:

 Hah.  I tried RoR once.  I couldn't get the environment set up
 and running and eventually just gave up.
I don't know when you tried it last time, but today it's very easy to install:

1. Make sure Ruby is installed
2. $ gem install rails
3. $ rails new foo
4. $ cd foo
5. $ bundle
6. $ rails s
I was following the original RoR book. I got bogged down in setting up the DB and wiring everything together.
Dec 21 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-12-21 19:31, Sean Kelly wrote:

 I was following the original RoR book.  I got bogged down in setting up
 the DB and wiring everything together.
The default settings will use SQLite, and if you're on a Mac that will already be installed. That means you don't have to do anything. For anything else, just add the address, the database and account information. Run "rake db:create" to create the database and "rake db:migrate" to migrate any changes you make to the database via Rails.

I have had problems with Rails, but never getting started. Well, I suppose it's easy when you know how to do it.

--
/Jacob Carlborg
Dec 21 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
I tried Ruby back in I think 2008 and had just an absolute beast 
of a time getting it running on the servers. PHP, by contrast, 
almost just worked.

RoR is a lot better now than it was at that point, though I'm 
still not impressed with it. I do some work stuff with it and 
often hit pretty random seeming problems:

1) versions don't match. Stuff like rvm and bundler can mitigate 
this, but they don't help searching the web. Find a technique and 
try it... but it requires Rails 2.17 and the app depends in 2.15 
or something stupid like that. I guess you can't blame them for 
adding new features, but I do wish the documentation for old 
versions was always easy to get to and always easily labeled so 
it would be obvious. (D could do this too!)

2) SSL/TLS just seems to randomly fail in applications and the 
tools like gem and bundle. Even updating the certificates on the 
system didn't help most recently, I also had to set an 
environment variable, which seems just strange.

3) Setting up the default WEBrick isn't too bad, but making it 
work on a production system (like apache passenger) has been 
giving us trouble. Got it working for the most part pretty fast, 
but then adding more stuff became a painful config nightmare. 
This might be the application (based on Rails 2 btw) more than 
the platform in general, but it still irked me.

4) It is abysmally slow, every little thing takes forever. DB 
changes, slow. Asset recompiles: slow. Tests: slow. Restarting 
the server: slow. The app itself: slow. I'm told Ruby on the JVM 
is faster though :)


My main problems with ruby on rails though are bad decisions and 
just underwhelming aspect of actually using it. Everyone sells it 
as being the best thing ever and so fast to develop against but 
I've seen better just about everywhere. Maybe it was cool in 2005 (if 
you could actually get it running then...), but not so much 
anymore.
Dec 21 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-12-21 20:37, Adam D. Ruppe wrote:

 1) versions don't match. Stuff like rvm and bundler can mitigate this,
I'm not exactly sure what you mean, but using Rails without bundler is just mad.
 but they don't help searching the web. Find a technique and try it...
 but it requires Rails 2.17 and the app depends in 2.15 or something
 stupid like that. I guess you can't blame them for adding new features,
 but I do wish the documentation for old versions was always easy to get
 to and always easily labeled so it would be obvious. (D could do this too!)
This page [1] contains documentation for Rails, for 4.1.x, 4.0.x, 3.2.x and 2.3.x. It's basically the latest version of a given branch. This page [2] contains the API reference for Rails, it's not easy to find but you can append "vX.Y.Z" to that URL to get a specific version.
 2) SSL/TLS just seems to randomly fail in applications and the tools
 like gem and bundle. Even updating the certificates on the system didn't
 help most recently, I also had to set an environment variable, which
 seems just strange.
I think I have seen that once or twice when upgrading to a new version of OS X. But that's usually because your gems and other software are still built for the older version. I can't recall seeing this for a new project.
 3) Setting up the default WEBrick isn't too bad, but making it work on a
 production system (like apache passenger) has been giving us trouble.
 Got it working for the most part pretty fast, but then adding more stuff
 became a painful config nightmare. This might be the application (based
 on Rails 2 btw) more than the platform in general, but it still irked me.
I haven't been too involved in that part. I have set up one or two apps with passenger and it was pretty easy to just follow the installation instructions. Although those weren't production servers.
 4) It is abysmally slow, every little thing takes forever. DB changes,
 slow. Asset recompiles: slow. Tests: slow. Restarting the server: slow.
 The app itself: slow. I'm told Ruby on the JVM is faster though :)
Yeah, that's one major issue. It can be very, very slow. But I also think it's too easy to write slow code with something like ActiveRecord. It's easy to forget there's an actual database behind it.
 My main problems with ruby on rails though are bad decisions and just
 underwhelming aspect of actually using it. Everyone sells it as being
 the best thing ever and so fast to develop against but I've seen better
 like everything. Maybe it was cool in 2005 (if you could actually get it
 running then...), but not so much anymore.
I find it difficult to find something better. I think that's mostly because of the existing ecosystem with plugins and libraries available. I feel the same thing with D vs Ruby. At some point I just get tired of developing my own libraries and just want to get something done.

[1] http://guides.rubyonrails.org/
[2] http://api.rubyonrails.org

--
/Jacob Carlborg
Dec 22 2014
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
Well, his choice may make sense, but I see no connection between 
pet projects and proprietary paid work. They can't share code.
Dec 05 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 05 Dec 2014 08:22:03 +0000
Kagamin via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Well, his choice may make sense, but I see no connection between
 pet projects and proprietary paid work. They can't share code.
hm. but they can. my proprietary paid projects share a lot of code with my hobby projects. it's like i'm writing some libraries for my own use and then including parts of that in my paid work, 'cause it's much easier to simply use a tested and familiar library than to write a brand new one.
Dec 05 2014
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 5 December 2014 at 08:34:18 UTC, ketmar via 
Digitalmars-d wrote:
 'cause it's much easier to simply use tested and familiar 
 library than to write brand new one.
Why not? There are always things to improve.
Dec 05 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 05 Dec 2014 08:41:57 +0000
Kagamin via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Friday, 5 December 2014 at 08:34:18 UTC, ketmar via
 Digitalmars-d wrote:
 'cause it's much easier to simply use tested and familiar
 library than to write brand new one.
Why not? There are always things to improve.
my customers pay me for getting the work done, not for experimenting and researching. that's why i do my research in my hobby projects, and then just use the resulting code in my paid projects. well-tested (heh, i don't want my projects to go mad from bugs, so fixing 'em is a must here ;-) and mature code. this way everyone is happy, and i'm not blocked from trying another approach or breaking some API. if i wrote code especially for a paid project, i couldn't use that code anywhere else: my customers paid for it, so they own it. i don't want to get into legal things with them, it's too boring. but it's generally ok for customers if we say that we will use some of our internal libraries to deliver a product faster. they don't claim ownership of those libraries and everyone is happy.
Dec 05 2014
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 5 December 2014 at 08:56:03 UTC, ketmar via 
Digitalmars-d wrote:
 my customers paying me for making the work done, not for 
 experimenting and researching.
They pay you to make the work from scratch and they don't care how you do it.
 must here ;-) and mature code. this way everyone is happy, and 
 i'm not
 blocked in trying another approach or breaking some API.
If you do it from scratch, there's no breakage. What's the reason to not do it? It looks as if you hate writing better code in your language of choice. You hate that language?
Dec 05 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 05 Dec 2014 09:07:23 +0000
Kagamin via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Friday, 5 December 2014 at 08:56:03 UTC, ketmar via
 Digitalmars-d wrote:
 my customers paying me for making the work done, not for
 experimenting and researching.
They pay you to make the work from scratch and they don't care how you do it.
in no way. they're paying me to build software that solves their problems. if they can pay less and get the software faster, they are happy. and then they come back to me when they need more software, 'cause they know that i'm not interested in just taking their money and delivering something that barely works, and beyond the deadline.
 must here ;-) and mature code. this way everyone is happy, and
 i'm not
 blocked in trying another approach or breaking some API.
If you do it from scratch, there's no breakage. What's the reason to not do it? It looks as if you hate writing better code in your language of choice. You hate that language?
i hate rewriting code which is already written and working. that's why i'm not starting a new project by writing a new compiler for it, for example. and that's why i like to reuse what i did in other projects -- to deliver a good solution in reasonable time and budget. it may be fun for me to rewrite everything again and again, but my customers aren't interested in paying for my fun, they want their problems solved.
Dec 05 2014
prev sibling parent reply "Freddy" <Hexagonalstar64 gmail.com> writes:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
My problems with java:
no unsigned ints
primitives are passed by value; arrays and user defined types are passed by reference only (killing memory usage)
no operator overloading (look at java.util.ArrayList)
no templates
no property syntax (getters and setters are used instead, even if you know the field is never going to be dynamic)
only and exactly one class per file (ALL THE IMPORTS)
everything must be inside a class (globals and free functions become static members of a class)
This is all I can remember.
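[Editor's note: a few of the gripes above, no operator overloading, no property syntax, and no unsigned integers, can be seen in one small sketch. The class and field names here are invented for illustration:]

```java
import java.util.ArrayList;
import java.util.List;

public class JavaGripes {
    // No property syntax: even a trivial field needs a getter/setter pair.
    private int count;
    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }

    public static void main(String[] args) {
        // No operator overloading: containers can't support [] or +,
        // so element access goes through get()/set().
        List<Integer> xs = new ArrayList<>();
        xs.add(1);
        xs.add(2);
        int sum = xs.get(0) + xs.get(1);   // xs[0] + xs[1] is not legal Java
        System.out.println(sum);           // 3

        // No unsigned ints: bytes are signed, so 0xFF reads back as -1.
        byte b = (byte) 0xFF;
        System.out.println(b);                    // -1
        System.out.println(Byte.toUnsignedInt(b)); // 255, workaround since Java 8
    }
}
```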
Dec 05 2014
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 6/12/2014 11:28 a.m., Freddy wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
My problems with java:
no unsigned ints
primitives are passed by value; arrays and user defined types are passed by reference only (killing memory usage)
no operator overloading (look at java.util.ArrayList)
no templates
no property syntax (getters and setters are used instead, even if you know the field is never going to be dynamic)
only and exactly one class per file (ALL THE IMPORTS)
everything must be inside a class (globals and free functions become static members of a class)
This is all I can remember.
You forgot type removal for generics during compilation.
Dec 05 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Dec 06, 2014 at 03:01:44PM +1300, Rikki Cattermole via Digitalmars-d
wrote:
 On 6/12/2014 11:28 a.m., Freddy wrote:
On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
Digitalmars-d wrote:
It's an argument for Java over Python specifically but a bit more
general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


http://www.teamten.com/lawrence/writings/java-for-everything.html
My problems with java:
no unsigned ints
primitives are passed by value; arrays and user defined types are passed by reference only (killing memory usage)
no operator overloading (look at java.util.ArrayList)
no templates
no property syntax (getters and setters are used instead, even if you know the field is never going to be dynamic)
only and exactly one class per file (ALL THE IMPORTS)
everything must be inside a class (globals and free functions become static members of a class)
This is all I can remember.
You forgot type removal for generics during compilation.
I dunno, while type erasure is certainly annoying when you actually need information about the type, it also eliminates template bloat. I think the ideal system should be somewhere in between, where type erasure is actively performed by the compiler where the type information is not needed, while template instantiations are retained when it is needed. This should keep template bloat under control while still offering full template capabilities. D currently leans toward the template bloat end of the spectrum; I think there is much room for improvement.

T
--
"The number you have dialed is imaginary. Please rotate your phone 90 degrees and try again."
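[Editor's note: the erasure side of that trade-off is easy to observe in Java itself. A List<Integer> and a List<String> share one runtime class, which is exactly why a single compiled copy of the container code suffices, and also why the element type cannot be recovered later. A minimal demonstration:]

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strs = new ArrayList<>();

        // After erasure both are plain ArrayList at runtime: one copy of
        // the code serves every element type.
        System.out.println(ints.getClass() == strs.getClass()); // true
        System.out.println(ints.getClass().getName());          // java.util.ArrayList
    }
}
```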
Dec 05 2014
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 6/12/2014 3:12 p.m., H. S. Teoh via Digitalmars-d wrote:
 On Sat, Dec 06, 2014 at 03:01:44PM +1300, Rikki Cattermole via Digitalmars-d
wrote:
 On 6/12/2014 11:28 a.m., Freddy wrote:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 "…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
My problems with java:
no unsigned ints
primitives are passed by value; arrays and user defined types are passed by reference only (killing memory usage)
no operator overloading (look at java.util.ArrayList)
no templates
no property syntax (getters and setters are used instead, even if you know the field is never going to be dynamic)
only and exactly one class per file (ALL THE IMPORTS)
everything must be inside a class (globals and free functions become static members of a class)
This is all I can remember.
You forgot type removal for generics during compilation.
I dunno, while type erasure is certainly annoying when you actually need information about the type, it also eliminates template bloat. I think the ideal system should be somewhere in between, where type erasure is actively performed by the compiler where the type information is not needed, while template instantiations are retained when it is needed. This should keep template bloat under control while still offering full template capabilities. D currently leans toward the template bloat end of the spectrum; I think there is much room for improvement. T
It's a bit more than annoying. What happened when it was originally implemented was basically a hacking of the compiler to support it; type erasure wasn't a design decision, to my understanding. Then again, the last time I checked Java's reference compiler / JVM source code it was a real mess, to say the least. If I remember right, an XML parser lib was at the same level in the repo as the compiler, and nothing else was at that level. This was only a few years ago now.

I really hope I'm wrong or it's changed since then, but who knows.
Dec 05 2014
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Dec 06, 2014 at 03:49:35PM +1300, Rikki Cattermole via Digitalmars-d
wrote:
 On 6/12/2014 3:12 p.m., H. S. Teoh via Digitalmars-d wrote:
On Sat, Dec 06, 2014 at 03:01:44PM +1300, Rikki Cattermole via Digitalmars-d
wrote:
On 6/12/2014 11:28 a.m., Freddy wrote:
[...]
My problems with java:
[...]
This is all i can remember.
You forgot type removal for generics during compilation.
I dunno, while type erasure is certainly annoying when you actually need information about the type, it's also eliminates template bloat. I think the ideal system should be somewhere in between, where type erasure is actively performed by the compiler where the type information is not needed, while template instantiations are retained when it is needed. This should keep template bloat under control while still offering full template capabilities. D currently leans on the template bloat end of the spectrum; I think there is much room for improvement.
[...]
 
 It's a bit more than annoying. What happened when it was originally
 implemented was basically hacking of the compiler to support it, type
 erasure wasn't a design decision to my understanding.  Then again the
 last time I checked Java's reference compiler / jvm source code it was
 a real mess to say the least.  If I remember right an xml parser lib
 was at the same level in the repo as the compiler and nothing else at
 that level. This was only a few years ago now.
 
 I really hope I'm wrong or its changed since then but who knows.
It's enterprise code. 'Nuff said. Even though "enterprise code" sounds so glamorous, in reality it has a high likelihood of being a rats' nest of spaghetti code with lasagna code layered on top, an accumulation of patches upon hacks upon bandaids over bugs resulting from earlier patches, with messy sauce leaking everywhere. I've seen enough examples of actual "enterprise code" to know better than the idealistic image they pretend to convey. But anyway, that's beside the point. :-P

Type erasure does have its value -- for example, if you have a template class that represents a linked-list or tree or something like that, most of the code actually doesn't care about the type of the data at all. Code that swaps pointers to link / unlink nodes, or code that rebalances a tree, those pieces of code are mostly type-independent and can operate generically on lists or trees containing any type. Under D's template system, unless you manually factor it out, all of this code will be instantiated over and over again, once for every data type you might put into the list / tree. For non-trivial containers, the template bloat can be quite horrendous. Type erasure allows you to reuse a *single* copy of the code that handles every type of data contained.

However, having *only* type erasure like Java leads to problems in parts of the code that *do* need to know about the specifics of the data, such as whether it's a by-value or by-reference type (AIUI Java generics requires you to box all POD types due to type erasure, which introduces an additional needless layer of indirection), the need for postblits, dtors, copy ctors, etc. (in Java they would be handled by virtual methods AIUI), or certain operations that can generically apply to a particular category of types (e.g., +, -, <, >, should in theory work for all numeric types, including built-in types).
This is where D's template system shines: it can eliminate
boxing/unboxing and many indirections by taking advantage of type
information, and it is not subject to the limitations of type erasure
(e.g., Java can't have catch blocks that catch List<Integer> and
List<String> separately, because after erasure they are the same type).

In an ideal world, we'd have the best of both worlds: minimize template
bloat by merging the parts of the generic code that don't depend on the
type, while retaining enough type information to be able to use it when
needed.

T

-- 
Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
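The erasure limitation mentioned above is easy to demonstrate in a few
lines; here is a small, hypothetical Java sketch (class name invented
for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Erasure gives both of these methods the same signature, so Java
    // rejects the pair -- you cannot overload (or catch) on the type
    // argument:
    //
    //   static void f(List<Integer> xs) {}
    //   static void f(List<String> xs) {}   // error: same erasure

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strs = new ArrayList<>();

        // At runtime both are just ArrayList: the type argument is gone.
        System.out.println(ints.getClass() == strs.getClass()); // true
    }
}
```

By contrast, a D container template instantiated as, say, Container!int
and Container!string yields two distinct types at both compile time and
run time -- which is exactly the trade-off being discussed.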
Dec 05 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2014-12-06 at 15:49 +1300, Rikki Cattermole via Digitalmars-d wrote:
 On 6/12/2014 3:12 p.m., H. S. Teoh via Digitalmars-d wrote:
 I dunno, while type erasure is certainly annoying when you 
 actually need information about the type, it's also eliminates 
 template bloat. I think the ideal system should be somewhere in 
 between, where type erasure is actively performed by the compiler 
 where the type information is not needed, while template 
 instantiations are retained when it is needed. This should keep 
 template bloat under control while still offering full template 
 capabilities. D currently leans on the template bloat end of the 
 spectrum; I think there is much room for improvement.
The best solution is to not have generics, just like Java 1.0 to 1.4:
every container is a container of references to Object. OK, so Go has a
quite neat solution.

(OK, this is part troll. Given all the Java, D, Go, Python,… discussions
of generics I see, the whole topic is beginning not to get me worked up
at all.)
 
 Its a bit more then annoying. What happened when it was originally 
 implemented was basically hacking of the compiler to support it, 
 type erasure wasn't a design decision to my understanding.
 Then again the last time I checked Java's reference compiler / jvm 
 source code it was a real mess to say the least.
 If I remember right an xml parser lib was at the same level in the 
 repo as the compiler and nothing else at that level. This was only a 
 few years ago now.
Erasure originally arose because the generics team were told they
couldn't change the JVM: the JVM definition was sacrosanct on the altar
of backward compatibility. Of course, the annotations team were told the
same thing and then changed the JVM definition anyway. So type erasure
was a hack.

Since then many people, probably suffering from Stockholm Syndrome, or
being Scala type-management infrastructure folk, now believe type
erasure is the right thing for the JVM. There is a vocal contingent
pushing for type parameter reification, as was done in the CLR, but I
think there are too many influential people saying "won't happen" for it
to happen.

Java 9 should see far better structured JVM source code and runtime
system.
 I really hope I'm wrong or its changed since then but who knows.
Some things have changed. Some would say not enough. The LJC (the London
Java Community, of which I am a member) is on the JCP EC, so we get to
vote. We are generally on the side of "no corporate politicking, get
stuff done to help Java programmers": cf. the rewrite of the OpenJDK
build system, and the AdoptAJSR and AdoptJDK programs, which have been
hugely successful. More user groups are getting more stuff into OpenJDK
than ever before. Obviously though, Oracle and IBM are still the main
players.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Dec 06 2014
prev sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2014-12-05 at 22:19 -0800, H. S. Teoh via Digitalmars-d wrote:
 
[…]
 It's enterprise code. 'Nuff said. Even though "enterprise code" 
 sounds so glamorous, in reality it has a high likelihood of being a 
 rats' nest of spaghetti code with lasagna code layered on top, an 
 accumulation of patches upon hacks to bandaids over bugs resulting 
 from earlier pathces, with messy sauce leaking everywhere. I've seen 
 enough examples of actual "enterprise code" to know better than the 
 idealistic image they pretend to convey.
Most "enterprise software" is written by "average programmers", and to be honest, the average programmer really is not that good. Which means I am always surprised anything works, and never surprised at the crap I see/use.
 But anyway, that's beside the point. :-P  Type erasure does have its 
 value -- for example, if you have a template class that represents a 
 linked-list or tree or something like that, most of the code 
 actually doesn't care about the type of the data at all. Code that 
 swaps pointers to link / unlink nodes, or code that rebalances a 
 tree, those pieces of code are mostly type-independent and can 
 operate generically on lists or trees containing any type. Under D's 
 template system, unless you manually factor it out, all of this code 
 will be instantiated over and over again, once for every data type 
 you might put into the list / tree. For non-trivial containers, the 
 template bloat can be quite horrendous. Type erasure allows you to 
 reuse a *single* copy of the code that handles every type of data 
 contained.
Type erasure is a total waste; reify the type parameter. If you actually
want type erasure, then just use Object as the type, job done. As the
Java Platform API and Scala have proven, you have to do a lot of work to
manage type parameters under type erasure. With type reification,
erasure is just one model of use.
 
[…]

-- 
Russel.
Dec 06 2014
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
06-Dec-2014 01:28, Freddy пишет:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit more
 general in reality. This stood out for me:


 !…other languages like D and Go are too new to bet my work on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
 My problems with java:
    no unsigned ints
    primitives are passed by value; arrays and user-defined types are
 passed by reference only (killing memory usage)
    no operator overloading (looks at java.util.ArrayList)
    no templates
    no property syntax (getters and setters are used instead even if
 you know the field is never going to be dynamic)
    only and exactly one class per file (ALL THE IMPORTS)
    everything must be inside a class (globals and free functions are
 static fields in a class)
 This is all I can remember.
Solved in Scala:
- operator overloading
- properties - that + optional (); a library writer can still enforce
() to be used
- only and exactly one class - any number in any combination
- everything a class - sort of; it has an 'object' clause (just like
'class') that can be thought of as a kind of namespace, or a singleton
if you love OOP.

Not fixed:
- unsigned types - nothing here unless Java adds support
- passing by value - there are immutable and value types (e.g. Tuples)
but I think they are references behind the scenes
- no templates, but you may use AST macros, which is even more powerful

-- 
Dmitry Olshansky
Dec 06 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 6 December 2014 at 09:07:34 UTC, Dmitry Olshansky 
wrote:
 06-Dec-2014 01:28, Freddy пишет:
 On Thursday, 4 December 2014 at 13:48:04 UTC, Russel Winder via
 Digitalmars-d wrote:
 It's an argument for Java over Python specifically but a bit 
 more
 general in reality. This stood out for me:


 !…other languages like D and Go are too new to bet my work 
 on."


 http://www.teamten.com/lawrence/writings/java-for-everything.html
 My problems with java:
    no unsigned ints
    primitives are passed by value; arrays and user-defined types are
 passed by reference only (killing memory usage)
    no operator overloading (looks at java.util.ArrayList)
    no templates
    no property syntax (getters and setters are used instead even if
 you know the field is never going to be dynamic)
    only and exactly one class per file (ALL THE IMPORTS)
    everything must be inside a class (globals and free functions are
 static fields in a class)
 This is all I can remember.
 Solved in Scala:
 - operator overloading
 - properties - that + optional (); a library writer can still enforce
 () to be used
 - only and exactly one class - any number in any combination
 - everything a class - sort of; it has an 'object' clause (just like
 'class') that can be thought of as a kind of namespace, or a singleton
 if you love OOP.

 Not fixed:
  - unsigned types - nothing here unless Java adds support
  - passing by value - there are immutable and value types (e.g.
 Tuples) but I think they are references behind the scenes
  - no templates, but you may use AST macros, which is even more
 powerful
Some form of unsigned arithmetic has existed since Java 8. For example:

https://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html#toUnsignedInt-byte-
https://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html#remainderUnsigned-int-int-

There are many more methods available.
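For the record, these helpers reinterpret the existing signed bit
patterns rather than adding new unsigned types; a small sketch (class
name invented for the example):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        int x = -1; // bit pattern 0xFFFFFFFF

        // Reinterpret the same 32 bits as an unsigned value:
        System.out.println(Integer.toUnsignedLong(x));        // 4294967295
        System.out.println(Integer.toUnsignedString(x));      // 4294967295
        System.out.println(Integer.remainderUnsigned(x, 10)); // 5
        System.out.println(Byte.toUnsignedInt((byte) -1));    // 255
    }
}
```

Note that `int` itself stays signed; the unsigned view exists only at
the call sites of these static methods.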
Dec 06 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 6 December 2014 at 09:07:34 UTC, Dmitry Olshansky 
wrote:
 Solved in Scala:
 - operator overloading
 - properties - that + optional (), a library writer still can 
 enforce () to be used
 - only and exactly one class - any number in any combination
 - everything class - sort of, it has 'object' clause (just like 
 'class') that can be thought as a kind of namespace or a 
 singleton if you love OOP.

 Not fixed:
  - unsigend types - nothing here unless Java adds support
  - pasing by value - there are immutable and value types (e.g. 
 Tuples) but I think they are references behind the scenes
  - no templates, but you may use AST macros which is even more 
 powerful
Scala tries to make things nicer by providing higher-level abstractions,
but with a tiny bit more poking the JVM origins are still unpleasantly
notable. The whole Function1 .. Function22 trait thing made me laugh
very hard when reading the spec originally :)
Dec 07 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 7 December 2014 at 13:39:38 UTC, Dicebot wrote:
 On Saturday, 6 December 2014 at 09:07:34 UTC, Dmitry Olshansky 
 wrote:
 Solved in Scala:
 - operator overloading
 - properties - that + optional (), a library writer still can 
 enforce () to be used
 - only and exactly one class - any number in any combination
 - everything class - sort of, it has 'object' clause (just 
 like 'class') that can be thought as a kind of namespace or a 
 singleton if you love OOP.

 Not fixed:
 - unsigend types - nothing here unless Java adds support
 - pasing by value - there are immutable and value types (e.g. 
 Tuples) but I think they are references behind the scenes
 - no templates, but you may use AST macros which is even more 
 powerful
 Scala tries to make things nicer by providing higher level
 abstractions but with tiny bit more poking JVM origins still are
 unpleasantly notable.

 The whole Function1 .. Function22 trait thing has made me laugh very
 hard when reading the spec originally :)
.NET is no different:

http://msdn.microsoft.com/en-us/library/dd402872%28v=vs.110%29.aspx

This is what happens when generics don't support a variable number of
types.

-- 
Paulo
Dec 07 2014
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
07-Dec-2014 16:39, Dicebot пишет:
 On Saturday, 6 December 2014 at 09:07:34 UTC, Dmitry Olshansky wrote:
 Solved in Scala:
 - operator overloading
 - properties - that + optional (), a library writer still can enforce
 () to be used
 - only and exactly one class - any number in any combination
 - everything class - sort of, it has 'object' clause (just like
 'class') that can be thought as a kind of namespace or a singleton if
 you love OOP.

 Not fixed:
  - unsigend types - nothing here unless Java adds support
  - pasing by value - there are immutable and value types (e.g. Tuples)
 but I think they are references behind the scenes
  - no templates, but you may use AST macros which is even more powerful
 Scala tries to make things nicer by providing higher level
 abstractions but with tiny bit more poking JVM origins still are
 unpleasantly notable.
It's actually quite successful at making things more coherent and
extensible (something directly opposite to the original Java). There are
downsides; type erasure is the most unavoidable one.
 The whole Function1 .. Function22 trait thing has made me laugh
 very hard when reading the spec originally :)
Aye. The good thing is that while e.g. (Int, Int) has type
Tuple2[Int, Int], it's at least compiler-generated.

-- 
Dmitry Olshansky
Dec 07 2014
prev sibling next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2014-12-05 at 22:28 +0000, Freddy via Digitalmars-d wrote:
 
[…]
 My problems with java:
    no unsigned ints
Indeed, right pain in the .
    primitive are passed by value; arrays and user defined types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
    no operator overloading(looks at java.util.ArrayList)
Biggest mistake The Green Team made. Fixed by Groovy. Oh and Scala, Ceylon, Kotlin.
 
    no templates
Why would you want them for a JVM-based machine? There is no possibility
of reification of type parameters: type erasure and all that mess.
Scala, Kotlin, etc. have to create a vast infrastructure to deal with
this.
    no property syntax(getters and setters are used instead even if
 you know the field is never going to be dynamic)
Setters, getters and mutable properties are, or should be, anathema. They turn an object oriented programming language into an imperative procedural one without encapsulation.
    only and exactly one class per file(ALL THE IMPORTS)
You can have as many classes as you want per file, but only one of them can be public.
    every thing must be inside a class(globals and free functions
 are static fields in a class)
In Java. And Scala. Groovy, Kotlin, and Ceylon do things differently:
the programmer can treat the JVM as having top-level functions.
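As a concrete illustration of the Java side of this, a "free function"
has to be smuggled in as a static member of a class (class and method
names here are invented for the example):

```java
// In Java, a free function must live inside a class; a final class
// with a private constructor is the usual idiom for a pure namespace.
public final class MathUtil {
    private MathUtil() {} // prevent instantiation: the class only groups statics

    public static int square(int x) { // effectively a top-level function
        return x * x;
    }

    public static void main(String[] args) {
        System.out.println(MathUtil.square(7)); // prints 49
    }
}
```

Kotlin, for instance, compiles genuine top-level functions down to
static members of a synthetic per-file class, so the JVM still sees
exactly this shape underneath.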
 […]
-- 
Russel.
Dec 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/6/14 7:26 AM, Russel Winder via Digitalmars-d wrote:
 Primitive types are scheduled for removal, leaving only reference
 types.
Wow, that's a biggie. Link(s)? -- Andrei
Dec 20 2014
parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2014-12-20 at 15:16 -0800, Andrei Alexandrescu via Digitalmars-d wrote:
 On 12/6/14 7:26 AM, Russel Winder via Digitalmars-d wrote:
 Primitive types are scheduled for removal, leaving only reference 
 types.
Wow, that's a biggie. Link(s)? -- Andrei
Simon Ritter laid out the OpenJDK/JCP/Oracle thinking way back in 2011
in a number of conference presentations, cf.

http://www.slideshare.net/JAX_London/keynote-to-java-se-8-and-beyond-simon-ritter

Page 41 has the explicit statement of the goal for JDK10. OK, so this
was pre-JDK8 and reality has changed a bit from his predictions, but not
yet on this issue. There are changes to the JIT for JDK9 and JDK10 that
are precursors to removing primitive types, so as to get rid of the last
unnecessary boxing and unboxing during function evaluation. Expression
evaluation is already handled well, with no unnecessary (un)boxing. Many
see "value types", cf. JEP 169 http://openjdk.java.net/jeps/169, as a
necessary precursor, but it is not exactly clear that this is actually
the case. It's a question of which JIT is part of the standard reference
implementation (OpenJDK) and what suppliers (e.g. Oracle, IBM, Azul,
etc.) ship in their distributions.

Although the vast majority of Java is used in a basically I/O-bound
context, there is knowledge of, and desire to improve, Java in a
CPU-bound context. The goal here is to always be as fast as C and C++
for all CPU-bound codes. A lot of people are already seeing Java being
faster than C and C++, but they have to use primitive types to achieve
this. With the shift to internal iteration and new JITs, the aim is to
achieve even better, but using reference types in the code.

There are an increasing number of people from Oracle, IBM and Azul
actively working on this, so it is a well-funded activity. Targeting
JDK10 means they have 2 years left to get it right :-)

-- 
Russel.
Dec 21 2014
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 21 December 2014 at 10:00:36 UTC, Russel Winder via 
Digitalmars-d wrote:
 Although the vast majority of Java is used in a basically I/O 
 bound
 context, there is knowledge of and desire to improve Java in a 
 CPU-
 bound context. The goal here is to always be as fast as C and 
 C++ for
 all CPU-bound codes. A lot of people are already seeing Java 
 being
 faster than C and C++, but they have to use primitive types to 
 achieve
 this. With the shift to internal iteration and new JITS, the 
 aim is to
 achieve even better but using reference types in the code.
That is quite a claim. It may be true in some contexts, and I'd go as
far as to say that vanilla C/C++ code tends to be slower than the
vanilla Java version; ultimately, though, C and C++ offer more
flexibility, which means that if you are willing to spend the time to
optimize, Java won't be as fast.

Generally, the killer is memory layout, which allows you to fit more in
cache and so go faster. Java is addicted to indirections.
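The indirection addiction is visible even at the language level; a
minimal Java sketch (assuming the default autoboxing cache, which only
spans -128..127):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        int[] flat = {1000, 1000};      // values stored inline in the array
        Integer[] boxed = {1000, 1000}; // array of references to heap objects

        // Outside the default Integer cache (-128..127), each autobox
        // allocates a distinct heap object, so identity comparison fails:
        System.out.println(boxed[0] == boxed[1]);      // false (two objects)
        System.out.println(boxed[0].equals(boxed[1])); // true  (same value)
        System.out.println(flat[0] == flat[1]);        // true  (plain ints)
    }
}
```

Each element of `boxed` costs a reference plus an object header on top
of the 4-byte payload, and iterating it chases one pointer per element;
that extra hop is exactly what hurts cache behaviour.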
Dec 22 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 22 December 2014 at 17:25:48 UTC, deadalnix wrote:
 On Sunday, 21 December 2014 at 10:00:36 UTC, Russel Winder via 
 Digitalmars-d wrote:
 Although the vast majority of Java is used in a basically I/O 
 bound
 context, there is knowledge of and desire to improve Java in a 
 CPU-
 bound context. The goal here is to always be as fast as C and 
 C++ for
 all CPU-bound codes. A lot of people are already seeing Java 
 being
 faster than C and C++, but they have to use primitive types to 
 achieve
 this. With the shift to internal iteration and new JITS, the 
 aim is to
 achieve even better but using reference types in the code.
 That is quite a claim. If it is true in some context, and I'd go as
 far as to say that vanilla code in C/C++ tend to be slower than the
 vanilla version in java, ultimately, C and C++ offer more flexibility,
 which mean that if you are willing to spend the time to optimize, Java
 won't be as fast.

 Generally, the killer is memory layout, which allow to fit more in
 cache, and be faster. Java is addicted to indirections.
If one is willing to spend time (aka money) optimizing, there are also a
few tricks that are possible in Java and used in high-frequency trading
systems.

By Java 10 at the latest, indirections in Java will be a thing of the
past, assuming all the features being discussed make their way into the
language.

C and C++ are becoming niche languages in distributed computing systems.

-- 
Paulo
Dec 22 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 22 December 2014 at 21:05:22 UTC, Paulo Pinto wrote:
 C and C++ are becoming a niche languages in distributed 
 computing systems.
That is quite a claim.

Even with the new Java features, you'll certainly reduce Java's
indirection addiction to some extent, but that won't give you control of
data layout, which is one of the most important factors when it comes to
speed (because you fit more in cache).

Granted, when it comes to distributed computing you have many problems
to manage (network, nodes failing, scheduling, ...) and how much you can
feed to the CPU is one criterion amongst others. I also concede that
making things in Java and getting them fast enough is a much easier job
than it is in C++.
Dec 23 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 23 December 2014 at 13:56:51 UTC, deadalnix wrote:
 On Monday, 22 December 2014 at 21:05:22 UTC, Paulo Pinto wrote:
 C and C++ are becoming a niche languages in distributed 
 computing systems.
 That is quite a claim. Even with new java feature, you'll certainly
 reduce java's indirection addiction to some extent, but that won't
 give you control of data layout, which is one of the highest criteria
 when it comes to speed (because you fit more in cache).

 Granted, when it come to distributed computing, you have many problem
 to manage (network, node failing, sheduling, ...) and how much you can
 feed to the CPU is one criterion amongst others. I also concede that
 making thing in Java and getting them fast enough is a much easier job
 than it is in C++.
It is as you say: the problems to solve around the application have a
bigger impact than the language itself. In our use cases, a few hundred
ms is acceptable as a response time, so it isn't worth developer time to
squeeze every ms out of the CPU.

My claim is based on the fact that in my little enterprise world I see
C++ being stuffed inside legacy boxes in architecture diagrams. C++ code
is equated with CORBA and DCOM components that are still somehow kept
alive.

-- 
Paulo
Dec 23 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/21/14 1:59 AM, Russel Winder via Digitalmars-d wrote:
 On Sat, 2014-12-20 at 15:16 -0800, Andrei Alexandrescu via Digitalmars-d wrote:
 On 12/6/14 7:26 AM, Russel Winder via Digitalmars-d wrote:
 Primitive types are scheduled for removal, leaving only reference
 types.
Wow, that's a biggie. Link(s)? -- Andrei
 Simon Ritter laid out the OpenJDK/JCP/Oracle thinking way back in 2011
 in a number of conference presentations. cf.
 http://www.slideshare.net/JAX_London/keynote-to-java-se-8-and-beyond-simon-ritter
 page 41 has the explicit statement of goal for JDK10.

 [snip]

 Many see "value types" cf JEP 169 http://openjdk.java.net/jeps/169 as
 a necessary precursor, but it is not exactly clear that this is
 actually the case.

 [snip]

 There are an increasing number of people from Oracle, IBM and Azul
 actively working on this, so it is a well-funded activity. Targeting
 JDK10 means they have 2 years left to get it right :-)
Hmmm... On one hand there's "make everything objects" in the slides, and on the other hand we have the JEP that adds value types. Confusing. Andrei
Dec 23 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via Digitalmars-d wrote:
[...]
    primitive are passed by value; arrays and user defined types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...]

Whoa. So they're basically going to rely on the JIT to convert those
boxed Integers into hardware ints for performance? Sounds like I will
never consider Java for computation-heavy tasks then...

T

-- 
Marketing: the art of convincing people to pay for what they didn't need
before which you can't deliver after.
Dec 06 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 6 December 2014 at 15:35:57 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via 
 Digitalmars-d wrote:
 [...]
    primitive are passed by value; arrays and user defined 
 types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
 [...]

 Whoa. So they're basically going to rely on JIT to convert those boxed
 Integers into hardware ints for performance? Sounds like I will never
 consider Java for computation-heavy tasks then...

 T
It's the same approach taken by .NET, Eiffel and many other languages.
Just because it looks like an object to the eyes of the programmer
doesn't mean it is one.

So when Java finally gets value types (either in 9, or 10 if the effort
is too much), primitives might become aliases, just like in .NET.

-- 
Paulo
Dec 06 2014
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via Digitalmars-d
wrote:
 [...]
     primitive are passed by value; arrays and user defined types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
 [...]

 Whoa. So they're basically going to rely on JIT to convert those boxed
 Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only
problem I've found is the memory usage overhead of collections and
non-trivial objects. In my tests, the performance of simple numeric code
was actually better with Scala (not even plain Java) than with D (LDC),
for instance.
Dec 07 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky 
wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via 
 Digitalmars-d wrote:
 [...]
    primitive are passed by value; arrays and user defined 
 types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly working with JVM for the last 2 years the only problem I've found is memory usage overhead of collections and non-trivial objects. In my tests performance of simple numeric code was actually better with Scala (not even plain Java) then with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
Dec 07 2014
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via
 Digitalmars-d wrote:
 [...]
    primitive are passed by value; arrays and user defined types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly working with JVM for the last 2 years the only problem I've found is memory usage overhead of collections and non-trivial objects. In my tests performance of simple numeric code was actually better with Scala (not even plain Java) then with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration:

http://en.wikipedia.org/wiki/Gaussian_quadrature

I do not claim the code is optimal or anything, but it's line for line.

// D version
import std.algorithm, std.stdio, std.datetime;

auto integrate(double function(double) f, double a, double b, int n)
{
    auto step = (b - a)/n;
    auto sum = 0.0;
    auto x = a;
    while (x < b)
    {
        sum += (f(x) + f(x + step))*step/2;
        x += step;
    }
    return sum;
}

long timeIt()
{
    StopWatch sw;
    sw.start();
    auto r = integrate(x => x*x*x, 0.0, 1.0, 1000000);
    sw.stop();
    return sw.peek().usecs;
}

void main()
{
    auto estimate = timeIt;
    foreach (_; 0 .. 1000)
        estimate = min(estimate, timeIt);
    writef("%s sec\n", estimate/1e6);
}

// Scala version
def integrate(f: Double => Double, a: Double, b: Double, n: Int): Double = {
    val step = (b - a)/n
    var sum = 0.0
    var x = a
    while (x < b) {
        sum += (f(x) + f(x + step))*step/2
        x += step
    }
    sum
}

def timeIt() = {
    val start = System.nanoTime()
    val r = integrate(x => x*x*x, 0.0, 1.0, 1000000)
    val end = System.nanoTime()
    end - start
}

var estimate = timeIt
for (_ <- 1 to 1000)
    estimate = Math.min(estimate, timeIt)
printf("%s sec\n", estimate/1e9)

-- 
Dmitry Olshansky
Dec 07 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky 
 wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via
 Digitalmars-d wrote:
 [...]
   primitive are passed by value; arrays and user defined 
 types are
 passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly working with JVM for the last 2 years the only problem I've found is memory usage overhead of collections and non-trivial objects. In my tests performance of simple numeric code was actually better with Scala (not even plain Java) then with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
 It was trivial Gaussian integration.

 http://en.wikipedia.org/wiki/Gaussian_quadrature

 I do not claim code is optimal or anything, but it's line for line.

 [snip]
on my machine (Haswell i5) I get scala as taking 1.6x as long as the ldc version. I don't know scala though, I compiled using -optimise, are there other arguments I should be using?
Dec 07 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as long as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.

-- 
Dmitry Olshansky
Dec 07 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 7 December 2014 at 22:46:02 UTC, Dmitry Olshansky 
wrote:
 08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky 
 wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as long 
 as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are 
 there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.
hotspot. After changing the benchmark to more carefully measure the integration function (ldc was unfairly taking advantage of knowing a and b at compile-time), scala does indeed win by a small margin. I wonder what it's managing to achieve here? AFAICT there really isn't much scope for optimisation in that while loop without breaking IEEE-754 guarantees.
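The fix described above (making a bound unknown at compile time) can be sketched roughly like this in Java terms (hypothetical code; the actual benchmark was in D and Scala, and the `NoFold` name is made up):

```java
// Hypothetical sketch of the benchmark fix: derive an integration bound
// from a runtime value so the compiler cannot precompute the result.
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.DoubleUnaryOperator;

public class NoFold {
    static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double step = (b - a) / n;
        double sum = 0.0;
        double x = a;
        while (x < b) {
            sum += (f.applyAsDouble(x) + f.applyAsDouble(x + step)) * step / 2;
            x += step;
        }
        return sum;
    }

    public static void main(String[] args) {
        // `a` is only known at run time; the perturbation is tiny, so the
        // numerical result stays comparable across runs.
        double a = ThreadLocalRandom.current().nextDouble() * 1e-9;
        long start = System.nanoTime();
        double r = integrate(x -> x * x * x, a, 1.0, 1_000_000);
        long end = System.nanoTime();
        System.out.printf("r = %f in %f sec%n", r, (end - start) / 1e9);
    }
}
```

The same trick works in D: generate `a` at the top of `timeIt` so ldc cannot fold the whole integration into a constant.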
Dec 08 2014
next sibling parent reply "Rene Zwanenburg" <renezwanenburg gmail.com> writes:
On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 On Sunday, 7 December 2014 at 22:46:02 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry 
 Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder 
 via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as long 
 as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are 
 there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.
hotspot. After changing the benchmark to more carefully measure the integration function (ldc was unfairly taking advantage of knowing a and b at compile-time), scala does indeed win by a small margin. I wonder what it's managing to achieve here? AFAICT there really isn't much scope for optimisation in that while loop without breaking IEEE-754 guarantees.
I don't think 'f' will be inlined in the D version. What happens if you make it an alias instead?
Dec 08 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 8 December 2014 at 11:02:21 UTC, Rene Zwanenburg wrote:
 On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 On Sunday, 7 December 2014 at 22:46:02 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry 
 Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder 
 via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as 
 long as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are 
 there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.
hotspot. After changing the benchmark to more carefully measure the integration function (ldc was unfairly taking advantage of knowing a and b at compile-time), scala does indeed win by a small margin. I wonder what it's managing to achieve here? AFAICT there really isn't much scope for optimisation in that while loop without breaking IEEE-754 guarantees.
I don't think 'f' will be inlined in the D version. What happens if you make it an alias instead?
The delegate is inlined, after the whole integrate function is inlined into timeIt.
Dec 08 2014
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 8 December 2014 at 11:40:25 UTC, John Colvin wrote:
 On Monday, 8 December 2014 at 11:02:21 UTC, Rene Zwanenburg 
 wrote:
 On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 On Sunday, 7 December 2014 at 22:46:02 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry 
 Olshansky wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry 
 Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder 
 via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as 
 long as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are 
 there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.
hotspot. After changing the benchmark to more carefully measure the integration function (ldc was unfairly taking advantage of knowing a and b at compile-time), scala does indeed win by a small margin. I wonder what it's managing to achieve here? AFAICT there really isn't much scope for optimisation in that while loop without breaking IEEE-754 guarantees.
I don't think 'f' will be inlined in the D version. What happens if you make it an alias instead?
The delegate is inlined, after the whole integrate function is inlined into timeIt.
Sorry, f is a function, not a delegate.
Dec 08 2014
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 On Sunday, 7 December 2014 at 22:46:02 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 01:38, John Colvin пишет:
 On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
 wrote:
 08-Dec-2014 00:36, John Colvin пишет:
 On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry 
 Olshansky wrote:
 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d пишет:
 On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder 
 via
 Digitalmars-d wrote:
 [...]
 primitives are passed by value; arrays and user-defined types are passed by reference only (killing memory usage)
Primitive types are scheduled for removal, leaving only reference types.
[...] Whoa. So they're basically going to rely on JIT to convert those boxed Integers into hardware ints for performance?
With great success.
 Sounds like I will never
 consider Java for computation-heavy tasks then...
Interestingly, working with the JVM for the last 2 years, the only problem I've found is the memory usage overhead of collections and non-trivial objects. In my tests the performance of simple numeric code was actually better with Scala (not even plain Java) than with D (LDC), for instance.
Got an example? I'd be interested to see a numerical-code example where the JVM can beat the llvm/gcc backends on a real calculation (even if it's a small one).
It was trivial Gaussian integration. http://en.wikipedia.org/wiki/Gaussian_quadrature I do not claim code is optimal or anything, but it's line for line.
[snip]
 on my machine (Haswell i5) I get scala as taking 1.6x as long 
 as the ldc
 version.

 I don't know scala though, I compiled using -optimise, are 
 there other
 arguments I should be using?
There is no point in -optimise; at least I do not recall using it. What's your JVM? It should be Oracle's HotSpot, not OpenJDK.
hotspot. After changing the benchmark to more carefully measure the integration function (ldc was unfairly taking advantage of knowing a and b at compile-time), scala does indeed win by a small margin. I wonder what it's managing to achieve here? AFAICT there really isn't much scope for optimisation in that while loop without breaking IEEE-754 guarantees.
You can check it, if you wish to do so. With the Oracle JVM and OpenJDK you have these options:

- https://github.com/AdoptOpenJDK/jitwatch/
- Oracle Solaris Studio on Solaris, http://www.oracle.com/technetwork/articles/servers-storage-dev/profiling-java-studio-perf-2293553.html
- Plain text tools, https://wikis.oracle.com/display/HotSpotInternals/PrintAssembly

Other JVMs offer similar tooling.

-- 
Paulo
Dec 08 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 After changing the benchmark to more carefully measure the 
 integration function (ldc was unfairly taking advantage of 
 knowing a and b at compile-time), scala does indeed win by a 
 small margin.

 I wonder what it's managing to achieve here? AFAICT there 
 really isn't much scope for optimisation in that while loop 
 without breaking IEEE-754 guarantees.
Maybe scala takes the same advantage?
Dec 08 2014
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 8 December 2014 at 12:09:20 UTC, Kagamin wrote:
 On Monday, 8 December 2014 at 10:31:46 UTC, John Colvin wrote:
 After changing the benchmark to more carefully measure the 
 integration function (ldc was unfairly taking advantage of 
 knowing a and b at compile-time), scala does indeed win by a 
 small margin.

 I wonder what it's managing to achieve here? AFAICT there 
 really isn't much scope for optimisation in that while loop 
 without breaking IEEE-754 guarantees.
Maybe scala takes the same advantage?
Perhaps it did, but the technique used to force ldc not to (randomly generating `a` on entry to timeIt) should also apply to scala.
Dec 08 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 2014-12-07 at 21:36 +0000, John Colvin via Digitalmars-d wrote:
 […]
 Got an example? I'd be interested to see a numerical-code example 
 where the JVM can beat the llvm/gcc backends on a real
 calculation (even if it's a small one).
π by quadrature (it's just a single loop) can show the effect very well, though currently anecdotally, since I haven't set up proper benchmarking even after 7 years of tinkering.

https://github.com/russel/Pi_Quadrature

Of course the JVM suffers a JIT warm-up which native-code languages do not, so you have to be careful with single-data-point comparisons.

As with any of these situations, convoluted code hardcoded for a specific processor, especially assembly language, will always win. I don't care about that; I care about the fastest comprehensible code that is portable simply by compilation or execution. Based on this, Java does well, so does some Groovy perhaps surprisingly, also Scala. C++ does well, especially with TBB (though as an API it leaves a lot to be desired). D is OK but only using ldc2 or gdc; dmd sucks. Go has issues using gc but gccgo is fine. Rust does very well, but if using Cargo for build you have to be careful to use --release. A big winner here is Python, but only if you can get Numba working; Cython and Pythran for me are a bit icky. On the outside rails is Chapel, which if it could get some traction outside HPC would probably wipe the floor with all other languages, with X10 a good runner-up.

Of course this is just a trivial microbenchmark; you may be looking for more real-world actual codes.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
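The π-by-quadrature loop itself is tiny; a minimal Java sketch of the idea (an assumed shape; see the Pi_Quadrature repository for the real multi-language versions):

```java
// Minimal sketch of "pi by quadrature": midpoint rule on 4/(1+x^2) over
// [0, 1], whose exact integral is pi. Class and method names are made up.
public class PiQuadrature {
    static double pi(int n) {
        double delta = 1.0 / n;
        double sum = 0.0;
        for (int i = 1; i <= n; i++) {
            double x = (i - 0.5) * delta;   // midpoint of the i-th subinterval
            sum += 4.0 / (1.0 + x * x);
        }
        return delta * sum;
    }

    public static void main(String[] args) {
        System.out.println(pi(1_000_000)); // converges to Math.PI as n grows
    }
}
```

Parallel versions split the `for` loop across threads and sum the partial results, which is what makes it a handy microbenchmark for comparing languages and runtimes.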
Dec 08 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Dec 08, 2014 at 08:33:16AM +0000, Russel Winder via Digitalmars-d wrote:
[...]
 As with any of these situation the convoluted hardcoded for a specific
 processor code, especially assembly language will always win. I don't
 care about that, I care about the fastest comprehensible code that is
 portable simply by compilation or execution. Based on this, Java does
 well, so does some Groovy perhaps surprisingly, also Scala.  C++ does
 well especially with TBB (though as an API it leaves a lot to be
 desired). D is OK but only using ldc2 or gdc, dmd sucks.
[...] Yeah, I find in my own experience that gdc -O3 tends to produce code that's consistently ~20% faster than dmd -O, especially in compute-intensive code. The downside is that gdc usually lags behind dmd by one release, which, given the current rate of development in D, can be quite a big difference in the feature set available.

T

-- 
INTEL = Only half of "intelligence".
Dec 08 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
 On Mon, Dec 08, 2014 at 08:33:16AM +0000, Russel Winder via Digitalmars-d
wrote:
 [...]
 As with any of these situation the convoluted hardcoded for a specific
 processor code, especially assembly language will always win. I don't
 care about that, I care about the fastest comprehensible code that is
 portable simply by compilation or execution. Based on this, Java does
 well, so does some Groovy perhaps surprisingly, also Scala.  C++ does
 well especially with TBB (though as an API it leaves a lot to be
 desired). D is OK but only using ldc2 or gdc, dmd sucks.
[...] Yeah, I find in my own experience that gdc -O3 tends to produce code that's consistently ~20% faster than dmd -O, especially in compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
 The downside is that gdc usually lags behind dmd
 by one release, which, given the current rate of development in D, can
 be quite a big difference in feature set available.
-- Dmitry Olshansky
Dec 09 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
On Mon, Dec 08, 2014 at 08:33:16AM +0000, Russel Winder via Digitalmars-d wrote:
[...]
As with any of these situation the convoluted hardcoded for a
specific processor code, especially assembly language will always
win. I don't care about that, I care about the fastest
comprehensible code that is portable simply by compilation or
execution. Based on this, Java does well, so does some Groovy
perhaps surprisingly, also Scala.  C++ does well especially with TBB
(though as an API it leaves a lot to be desired). D is OK but only
using ldc2 or gdc, dmd sucks.
[...] Yeah, I find in my own experience that gdc -O3 tends to produce code that's consistently ~20% faster than dmd -O, especially in compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.

T

-- 
Fact is stranger than fiction.
Dec 09 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
09-Dec-2014 20:54, H. S. Teoh via Digitalmars-d пишет:
 On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
 On Mon, Dec 08, 2014 at 08:33:16AM +0000, Russel Winder via Digitalmars-d
wrote:
 [...]
 As with any of these situation the convoluted hardcoded for a
 specific processor code, especially assembly language will always
 win. I don't care about that, I care about the fastest
 comprehensible code that is portable simply by compilation or
 execution. Based on this, Java does well, so does some Groovy
 perhaps surprisingly, also Scala.  C++ does well especially with TBB
 (though as an API it leaves a lot to be desired). D is OK but only
 using ldc2 or gdc, dmd sucks.
[...] Yeah, I find in my own experience that gdc -O3 tends to produce code that's consistently ~20% faster than dmd -O, especially in compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.
std.ascii.isWhite ... and there are plenty of things our templates inevitably unfold to. I mean, come on, the Phobos library is a big pile of object code; it can't be all templates. Last time I checked, if you copy-paste isWhite into your source code it gets much faster than the std one because of inlining.

-- 
Dmitry Olshansky
Dec 09 2014
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Dec 09, 2014 at 10:08:35PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 09-Dec-2014 20:54, H. S. Teoh via Digitalmars-d пишет:
On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
[...]
Yeah, I find in my own experience that gdc -O3 tends to produce
code that's consistently ~20% faster than dmd -O, especially in
compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.
std.ascii.isWhite ... and there are plenty of things our templates inevitably unfold to. I mean, come on, the Phobos library is a big pile of object code; it can't be all templates. Last time I checked, if you copy-paste isWhite into your source code it gets much faster than the std one because of inlining.
[...] Hmm. Would it help to change isWhite into a template function? T -- It only takes one twig to burn down a forest.
Dec 09 2014
prev sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 9 December 2014 at 19:15, H. S. Teoh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tue, Dec 09, 2014 at 10:08:35PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 09-Dec-2014 20:54, H. S. Teoh via Digitalmars-d пишет:
On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
[...]
Yeah, I find in my own experience that gdc -O3 tends to produce
code that's consistently ~20% faster than dmd -O, especially in
compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.
std.ascii.isWhite ... and there are plenty of things our templates inevitably unfold to. I mean, come on, the Phobos library is a big pile of object code; it can't be all templates. Last time I checked, if you copy-paste isWhite into your source code it gets much faster than the std one because of inlining.
[...] Hmm. Would it help to change isWhite into a template function?
That can't be the answer for everything. Iain
Dec 09 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
09-Dec-2014 22:18, Iain Buclaw via Digitalmars-d пишет:
 On 9 December 2014 at 19:15, H. S. Teoh via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Tue, Dec 09, 2014 at 10:08:35PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 09-Dec-2014 20:54, H. S. Teoh via Digitalmars-d пишет:
 On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 08-Dec-2014 18:18, H. S. Teoh via Digitalmars-d пишет:
[...]
 Yeah, I find in my own experience that gdc -O3 tends to produce
 code that's consistently ~20% faster than dmd -O, especially in
 compute-intensive code.
And that's not nearly enough. Also both LDC & GDC often can't inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.
std.ascii.isWhite ... and there are plenty of things our templates inevitably unfold to. I mean, come on, the Phobos library is a big pile of object code; it can't be all templates. Last time I checked, if you copy-paste isWhite into your source code it gets much faster than the std one because of inlining.
[...] Hmm. Would it help to change isWhite into a template function?
That can't be the answer for everything.
As someone (ab)using empty template "idiom", I agree, we need a better solution. -- Dmitry Olshansky
Dec 09 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Dec 09, 2014 at 10:22:13PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
 09-Dec-2014 22:18, Iain Buclaw via Digitalmars-d пишет:
On 9 December 2014 at 19:15, H. S. Teoh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
On Tue, Dec 09, 2014 at 10:08:35PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
09-Dec-2014 20:54, H. S. Teoh via Digitalmars-d пишет:
On Tue, Dec 09, 2014 at 07:16:56PM +0300, Dmitry Olshansky via Digitalmars-d
wrote:
[...]
And that's not nearly enough. Also both LDC & GDC often can't
inline many functions from phobos due to separate compilation.
[...] Really? Most of the Phobos functions I use are templates, so inlining shouldn't be a problem, should it? Besides, gdc is far better at inlining than dmd ever was, though of course there are some constructs that the front-end doesn't inline, and the backend doesn't have enough info to do so. This is an area that should be improved.
std.ascii.isWhite ... and there are plenty of things our templates inevitably unfold to. I mean, come on, the Phobos library is a big pile of object code; it can't be all templates. Last time I checked, if you copy-paste isWhite into your source code it gets much faster than the std one because of inlining.
[...] Hmm. Would it help to change isWhite into a template function?
That can't be the answer for everything.
As someone (ab)using empty template "idiom", I agree, we need a better solution.
[...] I don't see what the problem is with making it an "empty" template. It eliminates dead code in your executable if you never call that function, it enables attribute inference, and it allows inlining. The only major incompatibility I can see is the ability to ship closed-source libraries, but in that case inlining is already out of the question anyway, so it's a non-issue. Or am I missing something obvious?

T

-- 
IBM = I Blame Microsoft
Dec 09 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 9 December 2014 at 20:19:59 UTC, H. S. Teoh via
Digitalmars-d wrote:
 As someone (ab)using empty template "idiom", I agree, we need 
 a better
 solution.
[...] I don't see what the problem is with making it an "empty" template. It eliminates dead code in your executable if you never call that function, it enables attribute inference, and it allows inlining. The only major incompatibility I can see is the ability to ship closed-source libraries, but in that case inlining is already out of the question anyway, so it's a non-issue. Or am I missing something obvious?
Considering the optimizer doesn't know what a template is, and does the inlining, I'm not sure why everybody thinks the two are that linked.
Dec 09 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Tuesday, 9 December 2014 at 20:19:59 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I don't see what's the problem with making it an "empty" 
 template. It
 eliminates dead code in your executable if you never call that 
 function,
 it enables attribute inference, and it allows inlining. The 
 only major
 incompatibility I can see is the ability to ship closed-source
 libraries, but in that case, inlining is already out of the 
 question
 anyway, so it's a non-issue.

 Or am I missing something obvious?
Because you don't really create a template that way but work around broken function behavior. It is not the use of empty templates that is bad but the fact that plain functions remain broken => not really a solution.
Dec 09 2014
parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround broken function behavior. It is not the usage of 
 empty templates that is bad but the fact that plain functions 
 remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
Dec 10 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround broken function behavior. It is not the usage of 
 empty templates that is bad but the fact that plain functions 
 remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
This cannot be the solution if D aspires to be used in contexts where binary libraries are used. C++ is excused for having template code in headers, given its primitive tooling, but languages like Ada and Modula-3 support proper information hiding for generic code. -- Paulo
Dec 10 2014
next sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 10 December 2014 at 10:24:53 UTC, Paulo  Pinto 
wrote:
 This cannot be the solution if D aspires to be used in contexts 
 where binary libraries are used.
For completely opaque libraries one can compile against interface files.
Dec 10 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/10/2014 2:24 AM, Paulo Pinto wrote:
 This cannot be the solution if D aspires to be used in contexts where binary
 libraries are used.

 C++ is excused to have template code in headers given the primitive tooling,
but
 languages like Ada and Modula-3 support proper information hiding for generic
code.
There's no way you can hide the implementation of a function from the user if it is available to the compiler. Quite a few people thought C++ "exported templates" would make this work, but there is no known way to implement it and keep it hidden from the user, not even if it is encrypted since the compiler must decrypt it.
Dec 10 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 10:48:12 UTC, Walter Bright 
wrote:
 On 12/10/2014 2:24 AM, Paulo Pinto wrote:
 This cannot be the solution if D aspires to be used in 
 contexts where binary
 libraries are used.

 C++ is excused to have template code in headers given the 
 primitive tooling, but
 languages like Ada and Modula-3 support proper information 
 hiding for generic code.
There's no way you can hide the implementation of a function from the user if it is available to the compiler. Quite a few people thought C++ "exported templates" would make this work, but there is no known way to implement it and keep it hidden from the user, not even if it is encrypted since the compiler must decrypt it.
My remark had nothing to do with IP. I prefer the model used by the referred languages, where binary libraries and metadata are used, instead of the C toolchain model. For example, just shipping the .TPU/.DCU libraries in the Object Pascal world. -- Paulo
Dec 10 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/10/2014 4:15 AM, Paulo Pinto wrote:
 I prefer the model used by the referred languages, where binary libraries and
 metadata is used, instead of the C toolchain model.

 For example, just shipping the .TPU/.DCU libraries in the Object Pascal world.
If the metadata had enough info in it to do inlining, it might as well be the source code.
Dec 10 2014
parent reply "Araq" <rumpf_a web.de> writes:
On Wednesday, 10 December 2014 at 23:23:50 UTC, Walter Bright
wrote:
 On 12/10/2014 4:15 AM, Paulo Pinto wrote:
 I prefer the model used by the referred languages, where 
 binary libraries and
 metadata is used, instead of the C toolchain model.

 For example, just shipping the .TPU/.DCU libraries in the 
 Object Pascal world.
If the metadata had enough info in it to do inlining, it might as well be the source code.
Er ... but you're the guy who stresses that lexers cannot be fast enough because they need to look at every input char. (And you're right.) You can at least cache the lexing step. Not that D's compiler is not fast enough anyway, I'm just saying that your statement is really weird.
Dec 11 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/11/2014 2:05 AM, Araq wrote:
 On Wednesday, 10 December 2014 at 23:23:50 UTC, Walter Bright
 wrote:
 On 12/10/2014 4:15 AM, Paulo Pinto wrote:
 I prefer the model used by the referred languages, where binary libraries and
 metadata is used, instead of the C toolchain model.

 For example, just shipping the .TPU/.DCU libraries in the Object Pascal world.
If the metadata had enough info in it to do inlining, it might as well be the source code.
Er ... but you're the guy who stresses that lexers cannot be fast enough because they need to look at every input char. (And you're right.) You can at least cache the lexing step. Not that D's compiler is not fast enough anyway, I'm just saying that your statement is really weird.
You're right that a binary version of the token stream would be faster. But not enough to justify the implementation complexity and other issues with a binary format. A text format also has the major advantage of being able to look at it and edit it as necessary without special tools.
Dec 17 2014
prev sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Wednesday, 10 December 2014 at 10:24:53 UTC, Paulo  Pinto 
wrote:
 On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround broken function behavior. It is not the usage of 
 empty templates that is bad but the fact that plain functions 
 remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
This cannot be the solution if D aspires to be used in contexts where binary libraries are used. C++ is excused to have template code in headers given the primitive tooling, but languages like Ada and Modula-3 support proper information hiding for generic code. -- Paulo
A binary blob requirement makes no sense for a standard library. Would you like to explain how proper information hiding works for generic code in Ada? I'm really curious how that could work in D.
Dec 10 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 12:24:56 UTC, Tobias Pankrath 
wrote:
 On Wednesday, 10 December 2014 at 10:24:53 UTC, Paulo  Pinto 
 wrote:
 On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround broken function behavior. It is not the usage of 
 empty templates that is bad but the fact that plain 
 functions remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
This cannot be the solution if D aspires to be used in contexts where binary libraries are used. C++ is excused to have template code in headers given the primitive tooling, but languages like Ada and Modula-3 support proper information hiding for generic code. -- Paulo
A binary blob requirement makes no sense for a standard library.
And yet that has been the way it always worked in the Mesa lineage of languages. Mesa, Modula-2, Modula-3, Ada, Oberon, Object Pascal ....
 Would you like to explain how the proper information hiding 
 support works for generic code in Ada? I'm really curious how 
 that could work in D.
The libraries contain the required metadata for symbol tables and code locations that need to be extracted into the executable/library. Package definition files contain the minimum information the compiler needs to know to search for the remaining information. Example:

-- Package header
generic
   type Element_T is private;
package functions is
   procedure Swap (X, Y : in out Element_T);
end functions;

-- Package body
package body functions is
   procedure Swap (X, Y : in out Element_T) is
   begin
      -- implementation
   end Swap;
end functions;

-- Importing it
declare
   package functions_Int is new functions (Int);
   use functions_Int;
   x, y : Int;
begin
   x := 1;
   y := 2;
   Swap(x, y);
end;

Lots of options are possible when the C compiler and linker model aren't being used.

-- Paulo
Dec 10 2014
next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 December 2014 at 14:16, Paulo  Pinto via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Wednesday, 10 December 2014 at 12:24:56 UTC, Tobias Pankrath wrote:
 On Wednesday, 10 December 2014 at 10:24:53 UTC, Paulo  Pinto wrote:
 On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but workaround
 broken function behavior. It is not the usage of empty templates that is bad
 but the fact that plain functions remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
This cannot be the solution if D aspires to be used in contexts where binary libraries are used. C++ is excused to have template code in headers given the primitive tooling, but languages like Ada and Modula-3 support proper information hiding for generic code. -- Paulo
A binary blob requirement makes no sense for a standard library.
And yet that has been the way it always worked in the Mesa linage of languages. Mesa, Modula-2, Modula-3, Ada, Oberon, Object Pascal ....
 Would you like to explain how the proper information hiding support works
 for generic code in Ada? I'm really curious how that could work in D.
The libraries contain the required metadata for symbol tables and code locations that need to be extracted into the executable/library. Package definition files contain the minimum information the compiler needs to know to search for the remaining information. Example, -- Package header generic type Element_T is private; package functions is procedure Swap (X, Y : in out Element_T); end functions; -- Package body package body functions is procedure Swap (X, Y : in out Element_T) is begin -- implementation end Swap; end functions; -- importing it declare package functions_Int is new functions (Int); use functions_Int; x, y : Int; begin x := 1; y := 2; Swap(x, y); end; Lots of options are possible when the C compiler and linker model aren't being used.
In D, this should be akin to:

// Package header
module functions;
void Swap(T)(out T x, out T y);

// Package body
module functions;
void Swap(T)(out T x, out T y)
{
    // Implementation
}

// Importing it
import functions : Swap;
void main()
{
    int x = 1;
    int y = 2;
    Swap(x, y);
}

Iain
Dec 10 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 16:56:24 UTC, Iain Buclaw via 
Digitalmars-d wrote:
 On 10 December 2014 at 14:16, Paulo  Pinto via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Wednesday, 10 December 2014 at 12:24:56 UTC, Tobias 
 Pankrath wrote:
 On Wednesday, 10 December 2014 at 10:24:53 UTC, Paulo  Pinto 
 wrote:
 On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin 
 wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround
 broken function behavior. It is not the usage of empty 
 templates that is bad
 but the fact that plain functions remain broken => not 
 really a solution.
You can compile against phobos sources instead of interface files.
This cannot be the solution if D aspires to be used in contexts where binary libraries are used. C++ is excused to have template code in headers given the primitive tooling, but languages like Ada and Modula-3 support proper information hiding for generic code. -- Paulo
A binary blob requirement makes no sense for a standard library.
And yet that has been the way it always worked in the Mesa linage of languages. Mesa, Modula-2, Modula-3, Ada, Oberon, Object Pascal ....
 Would you like to explain how the proper information hiding 
 support works
 for generic code in Ada? I'm really curious how that could 
 work in D.
The libraries contain the required metadata for symbol tables and code locations that need to be extracted into the executable/library. Package definition files contain the minimum information the compiler needs to know to search for the remaining information. Example, -- Package header generic type Element_T is private; package functions is procedure Swap (X, Y : in out Element_T); end functions; -- Package body package body functions is procedure Swap (X, Y : in out Element_T) is begin -- implementation end Swap; end functions; -- importing it declare package functions_Int is new functions (Int); use functions_Int; x, y : Int; begin x := 1; y := 2; Swap(x, y); end; Lots of options are possible when the C compiler and linker model aren't being used.
In D, this should be akin to:

// Package header
module functions;
void Swap(T)(out T x, out T y);

// Package body
module functions;
void Swap(T)(out T x, out T y)
{
    // Implementation
}

// Importing it
import functions : Swap;
void main()
{
    int x = 1;
    int y = 2;
    Swap(x, y);
}

Iain
But the current object model doesn't support it, right? At least my understanding is that you need to have the full body visible. -- Paulo
Dec 10 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Dec 10, 2014 at 06:15:48PM +0000, Paulo Pinto via Digitalmars-d wrote:
 On Wednesday, 10 December 2014 at 16:56:24 UTC, Iain Buclaw via
 Digitalmars-d wrote:
[...]
In D, this should be akin to:

// Package header
module functions;
void Swap(T)(out T x, out T y);

// Package body
module functions;
void Swap(T)(out T x, out T y)
{
  // Implementation
}

// Importing it
import functions : Swap;
void main()
{
  int x = 1;
  int y = 2;
  Swap(x, y);
}

Iain
But the current object model doesn't support it, right? At least my understanding is that you need to have the full body visible.
[...] Yeah, the compiler cannot instantiate the template without access to the full body. It *could*, though, if we were to store template body IR in object files, perhaps under specially-dedicated object file sections. It wouldn't prevent reverse-engineering (which is moot anyway when templates are involved), but it *would* work as an "opaque" library interface file. T -- Food and laptops don't mix.
Dec 10 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/10/2014 10:28 AM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, the compiler cannot instantiate the template without access to the
 full body. It *could*, though, if we were to store template body IR in
 object files, perhaps under specially-dedicated object file sections. It
 wouldn't prevent reverse-engineering (which is moot anyway when
 templates are involved), but it *would* work as an "opaque" library
 interface file.
Storing it as body IR accomplishes nothing practical over storing it as source file, i.e. .di files.
Dec 10 2014
next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 10 Dec 2014 17:17:11 -0800
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On 12/10/2014 10:28 AM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, the compiler cannot instantiate the template without access to the
 full body. It *could*, though, if we were to store template body IR in
 object files, perhaps under specially-dedicated object file sections. It
 wouldn't prevent reverse-engineering (which is moot anyway when
 templates are involved), but it *would* work as an "opaque" library
 interface file.
 Storing it as body IR accomplishes nothing practical over storing it as source
 file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggestions, intelligent code browsing and so on).
Dec 11 2014
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 11 December 2014 at 08:05:13 UTC, ketmar via 
Digitalmars-d wrote:
 On Wed, 10 Dec 2014 17:17:11 -0800
 Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 On 12/10/2014 10:28 AM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, the compiler cannot instantiate the template without 
 access to the
 full body. It *could*, though, if we were to store template 
 body IR in
 object files, perhaps under specially-dedicated object file 
 sections. It
 wouldn't prevent reverse-engineering (which is moot anyway 
 when
 templates are involved), but it *would* work as an "opaque" 
 library
 interface file.
Storing it as body IR accomplishes nothing practical over storing it as source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
Yes tooling is a big part of it. -- Paulo
Dec 11 2014
prev sibling next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 
 Storing it as body IR accomplishes nothing practical over 
 storing it as source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
Which usually hold an AST in memory anyway. We have a fast parser, parsing even a big codebase once is really not a problem, see DCD for example. If the only advantage is to skip a parsing stage here or there, it does not justify the work that would be needed.
Dec 11 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 08:57:56 +0000
Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Storing it as body IR accomplishes nothing practical over
 storing it as source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
Which usually hold an AST in memory anyway. We have a fast parser, parsing even a big codebase once is really not a problem, see DCD for example. If the only advantage is to skip a parsing stage here or there, it does not justify the work that would be needed.
as we have a fast compiler too, i can't see any sense in producing machine code files at all. the only advantage is to skip parsing and compiling stages here or there, it does not justify the work that would be needed.
Dec 11 2014
next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 
 Which usually hold an AST in memory anyway. We have a fast 
 parser, parsing even a big codebase once is really not a 
 problem, see DCD for example.
 
 If the only advantage is to skip a parsing stage here or 
 there, it does not justify the work that would be needed.
as we have a fast compiler too, i can't see any sense in producing machine code files at all. the only advantage is to skip a parsing and compiling stages here or there, it does not justify the work that would be needed.
Come on, that is not even a half decent analogy.
Dec 11 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 09:18:05 +0000
Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Which usually hold an AST in memory anyway. We have a fast
 parser, parsing even a big codebase once is really not a
 problem, see DCD for example.
 If the only advantage is to skip a parsing stage here or
 there, it does not justify the work that would be needed.
as we have a fast compiler too, i can't see any sense in producing machine code files at all. the only advantage is to skip parsing and compiling stages here or there, it does not justify the work that would be needed.
Come on, that is not even a half decent analogy.
it is. you can't see any uses of (semi)compiled module files (and i can; it's essential for a component framework, for example). i can't see any uses of compiled binaries (i don't need that in a component framework).
Dec 11 2014
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 Come on, that is not even a half decent analogy.
it is. you can't see any uses of (semi)compiled module files (and i can; it's essential for component framework, for example). i can't see any uses of compiled binaries (i don't need that in component framework).
Actually I asked in this thread what the benefits are, and the only one that came up was improved compilation speed due to caching of the lexing/parsing stage. If you think it is a good idea for a component framework, would you please explain how? Honest question.
Dec 11 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 10:51:21 +0000
Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Come on, that is not even a half decent analogy.
 it is. you can't see any uses of (semi)compiled module files
 (and i can; it's essential for component framework, for example). i
 can't see any uses of compiled binaries (i don't need that in
 component framework).
 Actually I asked in this thread what the benefits are and the
 only one that come up was improved compilation speed due to
 caching of the lexing/parsing stage.
 If you think it is a good idea for a component framework, would
 you please explain how? Honest question.
the core of a component framework a-la BlackBox Component Builder is a dynamic module system. this requires a dynamic linker, and the linker must know a lot about framework internals to be fast and usable. with precompiled modules which keep symbolic information and ASTs for templates, such a linker can be written as an independent module. you don't need to add hacks to the runtime, to care about correct .so building and loading order and so on. it's too long to explain in a NG post. if you're really interested you can take a look at BlackBox Component Builder itself, it's open-source. having ".sym" and ".cod" files is necessary to make such a system usable. D has a great foundation for building a component framework a-la BCB. there are *no* competitors for D here, and having such a system can boost D popularity to the skies. BCB failed due to two strategic errors: choosing Component Pascal as the system language (CP is a great language, but the reality is that you cannot win with "pascal") and resisting open-sourcing the system until it was too late. with "AST-companions" D is in position to occupy that niche. D is c-like, D has great metaprogramming abilities, D is open-source. it's doomed to win.
Dec 11 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 11 December 2014 at 12:00:25 UTC, ketmar via 
Digitalmars-d wrote:
 On Thu, 11 Dec 2014 10:51:21 +0000
 Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 Come on, that is not even a half decent analogy.
it is. you can't see any uses of (semi)compiled module files (and i can; it's essential for component framework, for example). i can't see any uses of compiled binaries (i don't need that in component framework).
Actually I asked in this thread what the benefits are and the only one that come up was improved compilation speed due to caching of the lexing/parsing stage. If you think it is a good idea for a component framework, would you please explain how? Honest question.
the core of the component framework a-la BlackBox Component Builder is dynamic module system. this requires dynamic linker, and the linker must know alot about framework internals to be fast and usable. with precompiled modules which keeps symbolic information and ASTs for templates such linker can be written as independend module. you don't need to add hacks to runtime, to care about correct .so building and loading order and so on. it's too long to explain in NG post. if you really interested you can take a look at BlackBox Component Builder itself, it's open-source. having ".sym" and ".cod" files are necessary to make such system usable. D has a great foundation to build component framework a-la BCB. there are *no* competitors for D here, and having such system can boost D popularity to the skies. BCB failed due to two strategic errors: choosing Component Pascal as the system language (CP is great language, but the reality is that you cannot with with "pascal") and resisting to open-source the system until it was too late. with "AST-companions" D is in position to occupy that niche. D is c-like, D has great metaprogramming abilities, D is open-source. it's doomed to win.
To be honest, with .NET Native and OpenJDK getting an AOT compiler around the corner (Java 9 or 10, not yet decided), this opportunity is already lost. -- Paulo
Dec 11 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 12:06:39 +0000
Paulo  Pinto via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 with "AST-companions" D is in position to occupy that niche. D is
 c-like, D has great metaprogramming abilities, D is open-source. it's
 doomed to win.
 To be honest, with .NET Native and OpenJDK getting an AOT
 compiler around the corner (Java 9 or 10 not yet decided) this
 opportunity is already lost.
nope, it's not lost. java was in a position to do such a thing for more than ten years, yet there is no real component system in java. the same for .net. they're just not working in that direction, they still have IDEs, debuggers, compilers, libraries and all that separated crap. nothing will change in the following twenty years, i can bet on it.
Dec 11 2014
prev sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 the core of the component framework a-la BlackBox Component 
 Builder is
 dynamic module system. this requires dynamic linker, and the 
 linker
 must know alot about framework internals to be fast and usable. 
 with
 precompiled modules which keeps symbolic information and ASTs 
 for
 templates such linker can be written as independend module.
You'll still need to compile everything that is a template, so you'd have to provide a compiler as an independent module. That would be quite neat, but I don't see how this couldn't work with current object files. There is a REPL using compilation to .so files and dynamic linking, after all. If what you have in mind is indeed impossible with current object files, it may be worthwhile to create our own. But as I see it, the only benefit of storing an AST is compilation speed, which currently is not dominated by parsing. How would your precompiled modules differ from ELF except that they'd contain an AST for things whose machine code hasn't been emitted yet?
Dec 11 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 12:11:35 +0000
Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 If what you have in mind is indeed impossible with current object
 files, it may be worthwhile to create our own. But as I see it, the
 only benefit of storing an AST is compilation speed, which currently
 is not dominated by parsing.
it is possible, but it's like today's JIT compilers: first they compile the source to bytecode, losing a lot of the info along the way, and then they "decompile" the bytecode to restore the info they threw away in the first step. i was laughing at java since i saw the "juice" project (don't try to google that, you'll find nothing).
 How would your precompiled modules differ from ELF except that
 they'd contain an AST for things that didn't emit the machine
 code yet?
how does one object file format differ from another object file format? they're just targeted at different applications. i can emulate one with the other, but it's a kludge.
Dec 11 2014
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 11 December 2014 at 09:07:18 UTC, ketmar via 
Digitalmars-d wrote:
 On Thu, 11 Dec 2014 08:57:56 +0000
 Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 
 Storing it as body IR accomplishes nothing practical over 
 storing it as source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
Which usually hold an AST in memory anyway. We have a fast parser, parsing even a big codebase once is really not a problem, see DCD for example. If the only advantage is to skip a parsing stage here or there, it does not justify the work that would be needed.
as we have a fast compiler too, i can't see any sense in producing machine code files at all. the only advantage is to skip a parsing and compiling stages here or there, it does not justify the work that would be needed.
Parsing is so fast it's not worth spending huge numbers of man-hours building an effective caching system for it. The rest of compilation is comparatively much slower and is therefore more important to cache. You're being sarcastic at a straw man.
Dec 11 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 09:44:49 +0000
John Colvin via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Parsing is so fast it's not worth spending huge numbers of
 man-hours building an effective caching system for it.
and generating machine code is useless at all, it's enough to simply improve CTFE.
 The rest of compilation is comparatively much slower and is
 therefore more important to cache.
what does "the rest of compilation" mean? there are a lot of things you can do with the AST before writing it to disk. ah, just writing a compressed AST to disk is good enough, as reading it back is *way* *faster* than parsing the source. and any other tool -- like a lint, a completion tool, or a documentation generator -- can use that compressed AST without reparsing the sources. you can't see how this can help 'cause we don't have such AST-companions yet. i can see how this will help 'cause i have a lot of experience with turbo/borland pascal and BlackBox Component Builder. i think something a-la BCB can be a killer app for D, but it's very hard to build it without good AST-companions.
Dec 11 2014
next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Thursday, 11 December 2014 at 11:46:50 UTC, ketmar via 
Digitalmars-d wrote:
 you can't see how this can help 'cause we don't have such
 AST-companions yet. i can see how this will help 'cause i have 
 alot of
 expirience with turbo/borland pascal and BlackBox Component 
 Builder.
 think a-la BCB can be a killer app for D, but it's very hard to 
 build
 it without good AST-companions.
And we're back to parsing speed. Starting DCD takes 1 (one) second on my pc, and it parses the whole of Phobos and druntime at startup. And it only starts once a day. The code I'm constantly changing needs to be reparsed anyway. I never have to wait for the tooltip, though. So, using a preparsed AST yields me nothing today, but might break all the tools that work with object files. Given the amount of manpower available, I'd say we have better things to do. But I guess no one would mind if you just make it happen.
Dec 11 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 12:02:43 +0000
Tobias Pankrath via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 11 December 2014 at 11:46:50 UTC, ketmar via
 Digitalmars-d wrote:
 you can't see how this can help 'cause we don't have such
 AST-companions yet. i can see how this will help 'cause i have
 alot of expirience with turbo/borland pascal and BlackBox
 Component Builder. think a-la BCB can be a killer app for D,
 but it's very hard to build it without good AST-companions.
 And we're back to parsing speed.
nope. at least not more than in my "we don't need to generate native binaries" example. it's not about "code suggestions" at all.
 So, using an preparsed ast yields me nothing
 today, but might brake all the tools that work with object files.
how can it break any tool? and what tool is going to break? it's incredibly easy to convert a "precompiled module file" to the ".o" file which ld wants, but not vice versa.
 Giving the amount of manpower available, I'd say we have better
 things to do. But I guess no one would mind if you just make it
 happen.
i'm slowly working on it. as i really want my component system to work, i *have* to write all the things i need for it. yet i'm a lone developer with a lot of other things to do, and any slight change in the D frontend means that i have to change my code too. with "AST-companions" integrated into the frontend it's much easier to keep such a system alive, 'cause the one who changes the frontend will fix the "companion generator" too. it's not that hard (we have to fix .di generators now, for example), but almost impossible for a lone independent developer who has to do paid work for a living. as i'm pretty sure that such work will never be integrated into the mainline code, i'm just hacking on it sometimes, but never tried to keep it up with DMD development.
Dec 11 2014
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 11 December 2014 at 11:46:50 UTC, ketmar via 
Digitalmars-d wrote:
 On Thu, 11 Dec 2014 09:44:49 +0000
 John Colvin via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 Parsing is so fast it's not worth spending huge numbers of 
 man-hours building an effective caching system for it.
and generating machine code is useless at all, it's enough to simply improve CTFE.
 The rest of compilation is comparatively much slower and is 
 therefore more important to cache.
what does "the rest of compilation" mean? there are a lot of things you can do with the AST before writing it to disk. ah, just writing a compressed AST to disk is good enough, as reading it back is *way* *faster* than parsing the source. and any other tool -- like a lint, a completion tool, or documentation generators -- can use that compressed AST without reparsing the sources. you can't see how this can help 'cause we don't have such AST-companions yet. i can see how this will help 'cause i have a lot of experience with turbo/borland pascal and BlackBox Component Builder. i think something a-la BCB can be a killer app for D, but it's very hard to build it without good AST-companions.
BlackBox! A fellow user. :) Another example, the Oberon operating system, especially the System 3 Gadgets framework. -- Paulo
Dec 11 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 11 Dec 2014 12:04:28 +0000
Paulo  Pinto via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 BlackBox! A fellow user. :)
yeah! i miss BCB almost every day i'm doing any coding.
 Another example, the Oberon operating system, especially the
 System 3 Gadgets framework.
yep, they have the same roots. i didn't mention Oberon as it's not a "commercial programming environment", yet i'm still dreaming about Oberon as my primary OS... ;-)
Dec 11 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-12-11 09:05, ketmar via Digitalmars-d wrote:

 except that there's no need to parse source code over and over again,
 which is good for other tools (like completion suggesting, intelligent
 code browsing and so on).
Wouldn't you need to parse the IR? -- /Jacob Carlborg
Dec 11 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 12 Dec 2014 08:32:39 +0100
Jacob Carlborg via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On 2014-12-11 09:05, ketmar via Digitalmars-d wrote:

 except that there's no need to parse source code over and over again,
 which is good for other tools (like completion suggesting, intelligent
 code browsing and so on).
 Wouldn't you need to parse the IR?
with a good IR it's way easier than parsing the source. to the extent that it is possible to just mmap() a compiled module and use it as a ready data structure.
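In D itself, the "mmap it and use it directly" idea can be sketched with Phobos's std.mmfile; the file format, field names, and magic value below are invented for illustration only, not an actual DMD format:

```d
import std.mmfile;

// Invented fixed-layout header for a hypothetical "precompiled module"
// file; all fields are plain data, so the mapped bytes are usable as-is.
struct ModuleHeader
{
    char[4] magic;         // e.g. "DAST"
    uint    formatVersion;
    uint    symbolCount;
    uint    astRootOffset; // byte offset of the root AST node
}

void main()
{
    // Map the file: no parsing, no copying; the OS pages it in on demand.
    auto mm = new MmFile("example.dmod");
    auto hdr = cast(const(ModuleHeader)*) mm[].ptr;
    assert(hdr.magic == "DAST", "not a precompiled module");
    // AST nodes would then be addressed by offsets relative to the
    // mapping, so the whole tree is reachable without a deserialization
    // step.
}
```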
Dec 11 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/11/2014 12:05 AM, ketmar via Digitalmars-d wrote:
 On Wed, 10 Dec 2014 17:17:11 -0800
 Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On 12/10/2014 10:28 AM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, the compiler cannot instantiate the template without access to the
 full body. It *could*, though, if we were to store template body IR in
 object files, perhaps under specially-dedicated object file sections. It
 wouldn't prevent reverse-engineering (which is moot anyway when
 templates are involved), but it *would* work as an "opaque" library
 interface file.
Storing it as body IR accomplishes nothing practical over storing it as a source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
Yeah, you just need to write another parser for the binary format, rather than reuse a canned D parser. :-)
Dec 17 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 17 December 2014 at 19:48:00 UTC, Walter Bright 
wrote:
 Yeah, you just need to write another parser for the binary 
 format, rather than reuse a canned D parser. :-)
A binary format for IR should just be mmap'ed and work without any parsing.
Dec 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/17/2014 11:50 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Wednesday, 17 December 2014 at 19:48:00 UTC, Walter Bright wrote:
 Yeah, you just need to write another parser for the binary format, rather than
 reuse a canned D parser. :-)
A binary format for IR should just be mmap'ed and work without any parsing.
I know how to do binary formats - the Digital Mars C++ compiler does precompiled headers using memory mapped files. It isn't worth it.
Dec 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 17 December 2014 at 20:48:39 UTC, Walter Bright 
wrote:
 I know how to do binary formats - the Digital Mars C++ compiler 
 does precompiled headers using memory mapped files.

 It isn't worth it.
Maybe not for header files, but if you have an indexed database representing a compiled framework (partial evaluation/incomplete type support in templates), it should pay off if you preload the pages you need to access. If you run low on memory, it also means you don't have to save pages to disk; the OS can just evict them.
Dec 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/17/2014 1:29 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Wednesday, 17 December 2014 at 20:48:39 UTC, Walter Bright wrote:
 I know how to do binary formats - the Digital Mars C++ compiler does
 precompiled headers using memory mapped files.

 It isn't worth it.
Maybe not for header files, but if you have an indexed database representing a compiled framework (partial evaluation/incomplete type support in templates), it should pay off if you preload the pages you need to access. If you run low on memory, it also means you don't have to save pages to disk; the OS can just evict them.
You're welcome to try it. I've spent a great deal of time on it, and it doesn't pay off.
Dec 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 18 December 2014 at 00:27:49 UTC, Walter Bright 
wrote:
 You're welcome to try it. I've spent a great deal of time on 
 it, and it doesn't pay off.
Regular HD I/O is quite slow, but with fast SSD on PCIe and a good database-like index locked to memory…
Dec 17 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Dec 18, 2014 at 12:37:43AM +0000, via Digitalmars-d wrote:
 On Thursday, 18 December 2014 at 00:27:49 UTC, Walter Bright wrote:
You're welcome to try it. I've spent a great deal of time on it, and
it doesn't pay off.
Regular HD I/O is quite slow, but with fast SSD on PCIe and a good database-like index locked to memory…
That's hardly a solution that will work for the general D user, many of whom may not have this specific setup. T -- "I speak better English than this villain Bush" -- Mohammed Saeed al-Sahaf, Iraqi Minister of Information
Dec 17 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 18 December 2014 at 01:16:38 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Thu, Dec 18, 2014 at 12:37:43AM +0000, via Digitalmars-d 
 wrote:
 Regular HD I/O is quite slow, but with fast SSD on PCIe and a 
 good
 database-like index locked to memory…
That's hardly a solution that will work for the general D user, many of whom may not have this specific setup.
By the time this would be ready, most programmers will have PCIe-interfaced SSDs. At 100,000 IOPS it is pretty ok.
Dec 18 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 18 Dec 2014 08:09:08 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 18 December 2014 at 01:16:38 UTC, H. S. Teoh via
 Digitalmars-d wrote:
 On Thu, Dec 18, 2014 at 12:37:43AM +0000, via Digitalmars-d
 wrote:
 Regular HD I/O is quite slow, but with fast SSD on PCIe and a
 good
 database-like index locked to memory…
 That's hardly a solution that will work for the general D user,
 many of whom may not have this specific setup.
 By the time this would be ready, most programmers will have
 PCIe-interfaced SSDs. At 100,000 IOPS it is pretty ok.
didn't i say that the whole "64-bit" hype sux? ;-) that's about "memory as database".
Dec 18 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 18 December 2014 at 08:56:29 UTC, ketmar via 
Digitalmars-d wrote:
 didn't i say that the whole "64-bit" hype sux? ;-) that's about 
 "memory as database".
Did you? :-) A regular HDD is at 100 IOPS, so I think reading 100K random pages per second would work out fine. PCIe is going mainstream next year or so according to AnandTech, if I got that right. (There are solutions that do 1000K+ IOPS.)
Dec 18 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 18 Dec 2014 09:37:35 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 18 December 2014 at 08:56:29 UTC, ketmar via
 Digitalmars-d wrote:
 didn't i say that the whole "64-bit" hype sux? ;-) that's about
 "memory as database".
 Did you? :-) A regular HDD is at 100 IOPS, so I think reading 100K
 random pages per second would work out fine. PCIe is going
 mainstream next year or so according to AnandTech, if I got that
 right. (There are solutions that do 1000K+ IOPS.)
i'm about "hey, we are out of address space!" issues. ok, ok, *i'm* out of address space. ;-)
Dec 18 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 18 December 2014 at 08:56:29 UTC, ketmar via 
Digitalmars-d wrote:
 On Thu, 18 Dec 2014 08:09:08 +0000
 via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Thursday, 18 December 2014 at 01:16:38 UTC, H. S. Teoh via 
 Digitalmars-d wrote:
 On Thu, Dec 18, 2014 at 12:37:43AM +0000, via Digitalmars-d 
 wrote:
 Regular HD I/O is quite slow, but with fast SSD on PCIe and 
 a good
 database-like index locked to memory…
That's hardly a solution that will work for the general D user, many of whom may not have this specific setup.
 By the time this would be ready, most programmers will have PCIe-interfaced SSDs. At 100,000 IOPS it is pretty ok.
didn't i say that the whole "64-bit" hype sux? ;-) that's about "memory as database".
Heh, btw, I just read on osnews.com that HP is going to create a new hardware platform, The Machine, and a new operating system for it, based on resistor-based non-volatile memory called memristors that is comparable to DRAM in speed. Pretty interesting actually: http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/
Dec 22 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 22 Dec 2014 15:36:27 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Heh, btw, I just read on osnews.com that HP is going to create a
 new hardware platform, The Machine, and a new operating system for
 it, based on resistor-based non-volatile memory called memristors
 that is comparable to DRAM in speed. Pretty interesting actually:

 http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/
yes, i read about that some time ago. it's a fun concept, yet they will be forced to emulate files anyway. i.e. such a machine can be either "only one user, only one task" (a-la Oberon, for example), or maddening to program. everyone uses files today, and nobody will rewrite everything for some OS without files. so it will be a toy, or will try hard to emulate the current arch. either way nothing new from the programmer's POV.
Dec 22 2014
prev sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 17 Dec 2014 11:47:24 -0800
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Storing it as body IR accomplishes nothing practical over storing it as a
 source file, i.e. .di files.
except that there's no need to parse source code over and over again, which is good for other tools (like completion suggesting, intelligent code browsing and so on).
 Yeah, you just need to write another parser for the binary format, rather than
 reuse a canned D parser. :-)
yes. but with good design the mmapped binary file can be used as a data structure. and we can avoid the lookaheads that the current textual parser does, as we've already got the AST built for us. just drop the idea that the binary file must be small (in the sense of "squeezing bytes by clever integer encoding" and such) and portable, and think about "how can we use this mmapped?" with very good design we can also generate readers and writers almost automatically from AST definitions.
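Generating readers and writers from the node definitions themselves is something D's compile-time introspection makes fairly natural; a rough sketch (the node type here is invented, and real AST nodes containing pointers would additionally need offset fixups):

```d
import std.stdio : File;

// Invented example "AST node" type -- plain data only.
struct IntLiteral
{
    int  line;
    long value;
}

// Writer derived from the struct definition itself: iterate the fields
// at compile time and dump their raw bytes in declaration order.
void writeNode(T)(File f, in T node)
{
    foreach (field; node.tupleof)
        f.rawWrite((&field)[0 .. 1]);
}

// Matching reader, generated the same way.
void readNode(T)(File f, ref T node)
{
    foreach (ref field; node.tupleof)
        f.rawRead((&field)[0 .. 1]);
}
```

Adding a new node kind then costs nothing: the reader and writer fall out of the struct declaration automatically.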
Dec 17 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/17/2014 12:03 PM, ketmar via Digitalmars-d wrote:
 yes. but with good design the mmapped binary file can be used as a data
 structure. and we can avoid the lookaheads that the current textual parser
 does, as we've already got the AST built for us. just drop the idea that the
 binary file must be small (in the sense of "squeezing bytes by clever integer
 encoding" and such) and portable, and think about "how can we use this
 mmapped?" with very good design we can also generate readers and writers
 almost automatically from AST definitions.
As I replied to Ola, I've been there and done that. I know the ground. It isn't worth it.
Dec 17 2014
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
11-Dec-2014 04:17, Walter Bright wrote:
 On 12/10/2014 10:28 AM, H. S. Teoh via Digitalmars-d wrote:
 Yeah, the compiler cannot instantiate the template without access to the
 full body. It *could*, though, if we were to store template body IR in
 object files, perhaps under specially-dedicated object file sections. It
 wouldn't prevent reverse-engineering (which is moot anyway when
 templates are involved), but it *would* work as an "opaque" library
 interface file.
Storing it as body IR accomplishes nothing practical over storing it as a source file, i.e. .di files.
Even if we just bundle .di with .obj in one file, or better the whole library, there are operational advantages. Consider that a compiled library then becomes trivially redistributable as a single file. Importantly, the "headers" are _always_ up to date; a curse that follows C/C++ is out-of-sync header files, or using the wrong header files. Further options may include pre-tokenized .di files, potentially with generated ddocs in one bundle. All in all it looks like Java JAR files ;) -- Dmitry Olshansky
Dec 11 2014
prev sibling next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto 
wrote:

 Lots of options are possible when the C compiler and linker 
 model aren't being used.

 ..
 Paulo
I don't see how symbol table information and relocation metadata are sufficient to produce the correct object code if the template parameters are unknown.

// library
void foo(T, U)(T t, U u)
{
    t.tee();
    u.uuuh();
}

// my code
foo!(ArcaneType1, DubiousType2)(a, d);
Dec 10 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Dec 10, 2014 at 05:19:53PM +0000, Tobias Pankrath via Digitalmars-d
wrote:
 On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto wrote:
 
Lots of options are possible when the C compiler and linker model
aren't being used.

..
Paulo
I don't see how symbol table information and relocation metadata are
sufficient to produce the correct object code if the template
parameters are unknown.

// library
void foo(T, U)(T t, U u)
{
    t.tee();
    u.uuuh();
}

// my code
foo!(ArcaneType1, DubiousType2)(a, d);
That's why the current object file model doesn't work very well. You'd have to extend the object file format to include compiler IR for templates, then the compiler can instantiate templates from that IR without needing access to the source. Which is a feature I've brought up several times, but nobody seems to be interested in doing anything about it. T -- Real Programmers use "cat > a.out".
Dec 10 2014
next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 // my code
 foo!(ArcaneType1, DubiousType2)(a, d);
That's why the current object file model doesn't work very well. You'd have to extend the object file format to include compiler IR for templates, then the compiler can instantiate templates from that IR without needing access to the source. Which is a feature I've brought up several times, but nobody seems to be interested in doing anything about it. T
What information would / could that IR contain besides an AST?
Dec 10 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Dec 10, 2014 at 07:00:24PM +0000, Tobias Pankrath via Digitalmars-d
wrote:
// my code
foo!(ArcaneType1, DubiousType2)(a, d);
That's why the current object file model doesn't work very well. You'd have to extend the object file format to include compiler IR for templates, then the compiler can instantiate templates from that IR without needing access to the source. Which is a feature I've brought up several times, but nobody seems to be interested in doing anything about it. T
What information would / could that IR contain besides an AST?
It could include additional attributes computed by the compiler that could help in later optimization, such as whether the function escapes references, any inferred function attributes, etc.. The compiler could recompute all this from the IR, of course, but if it's already computed before, why not just reuse the previous results. Also, storing a full AST is probably overkill -- lexing and parsing the source generally doesn't take up too much of the compiler's time, so we might as well just use the source code instead. What makes it more worthwhile is if the AST has already been somewhat processed, e.g., constants have been folded, etc.. Probably after semantic1 and semantic2 have been run (though I'm not sure how far one can get if the template hasn't been instantiated yet). This way, work that has already been done doesn't need to be repeated again. T -- In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Dec 10 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.3042.1418240846.9932.digitalmars-d puremagic.com...

 Also, storing a full AST is probably overkill -- lexing and parsing the
 source generally doesn't take up too much of the compiler's time, so we
 might as well just use the source code instead.
Exactly
 What makes it more
 worthwhile is if the AST has already been somewhat processed, e.g.,
 constants have been folded, etc.. Probably after semantic1 and semantic2
 have been run (though I'm not sure how far one can get if the template
 hasn't been instantiated yet).
Not very far at all.
 This way, work that has already been done
 doesn't need to be repeated again.
When it's templates that haven't been instantiated, you haven't done any of the work yet.
Dec 10 2014
next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 Dec 2014 18:30, "H. S. Teoh via Digitalmars-d"
<digitalmars-d puremagic.com> wrote:
 On Wed, Dec 10, 2014 at 06:15:48PM +0000, Paulo Pinto via Digitalmars-d wrote:
 On Wednesday, 10 December 2014 at 16:56:24 UTC, Iain Buclaw via
 Digitalmars-d wrote:
[...]
In D, this should be akin to:

// Package header
module functions;
void Swap(T)(out T x, out T y);

// Package body
module functions;
void Swap(T)(out T x, out T y)
{
  // Implementation
}

// Importing it
import functions : Swap;
void main()
{
  int x = 1;
  int y = 2;
  Swap(x, y);
}

Iain
But the current object model doesn't support it, right? At least my understanding is that you need to have the full body visible.
[...] Yeah, the compiler cannot instantiate the template without access to the full body. It *could*, though, if we were to store template body IR in object files, perhaps under specially-dedicated object file sections. It wouldn't prevent reverse-engineering (which is moot anyway when templates are involved), but it *would* work as an "opaque" library interface file.
So long as it's instantiated somewhere in the object provided by the library, shipped with the module interface, then all symbols will resolve at link time. I can't imagine Ada being much different at the object level. To quote from a book on Ada that covers information hiding in a section (changing the function names to be relevant for this discussion). """ In the above example, the full definition of Swap can indeed be deferred until the package body. The reason, of course, is that nearly all current machines have a uniform addressing structure, so that an access value always looks the same regardless of what it is designating. To summarise, the logical interface corresponds to the visible part; the physical interface corresponds to the complete package specification, that is, to both the visible part and the private part. As long as a package specification is not changed, the package body that implements it can be defined and redefined without affecting other units that use this specification as an interface to the package. Hence it is possible to compile a package body separately from its package specification. """ Now if you swap 'package specification' for 'function/template signature', you've got yourself more or less describing how D modules/packages work. Iain.
Dec 10 2014
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Dec 11, 2014 at 10:17:29AM +1100, Daniel Murphy via Digitalmars-d wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.3042.1418240846.9932.digitalmars-d puremagic.com...
 
Also, storing a full AST is probably overkill -- lexing and parsing
the source generally doesn't take up too much of the compiler's time,
so we might as well just use the source code instead.
Exactly
What makes it more worthwhile is if the AST has already been somewhat
processed, e.g., constants have been folded, etc.. Probably after
semantic1 and semantic2 have been run (though I'm not sure how far
one can get if the template hasn't been instantiated yet).
Not very far at all.
This way, work that has already been done doesn't need to be repeated
again.
When it's templates that haven't been instantiated, you haven't done any of the work yet.
Well, I was thinking more of how far we *could* go, rather than how far dmd *actually* goes currently (which is nowhere at all, since all it does right now is to parse the template, until you instantiate it). But I suspect you can't go too far -- at least, not if the code actually depends on the template arguments in any meaningful way. As an extreme case, Dicebot's example is one where you can't go anywhere at all: auto myFunc(string code)() { return mixin(code); } On the other extreme, you have templates that really shouldn't be templates because they don't actually depend on their template arguments: auto myFunc(A...)() { return 1+2*3/4-5; } You could basically already compile the entire function without caring for the template arguments. Real-life use cases, of course, generally fall somewhere in between these two extremes. So I'd expect they would have some parts that cannot be processed any further than parsing, and other parts that can ostensibly go quite far, depending on how independent they are of the template arguments. Hmm, on second thoughts, this seems to be an interesting direction to explore, because template code that you *can* get quite far on, also represents code that is quite independent of template arguments, which means a large part of them should be identical across different template arguments. That makes them candidates for being merged, which would help reduce template bloat. By attempting analysis of template bodies, the compiler might be able to automatically identify these "mostly-independent" pieces of code, and perhaps apply some strategies for automatic code merging. T -- Windows 95 was a joke, and Windows 98 was the punchline.
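The merging described above can already be simulated by hand today; a sketch of the transformation on the extreme example (the helper name is invented for illustration):

```d
// Before: every instantiation myFunc!(A) carries its own copy of a body
// that never touches A at all.
auto myFunc(A...)()
{
    return 1 + 2 * 3 / 4 - 5;
}

// After: the argument-independent part is hoisted into a plain function,
// so all instantiations share one copy of the generated code.
int independentPart()
{
    return 1 + 2 * 3 / 4 - 5;
}

auto myFuncMerged(A...)()
{
    return independentPart();
}

void main()
{
    // Different template arguments, same shared body.
    assert(myFunc!(int)() == myFuncMerged!(int, string)());
}
```

The compiler doing this automatically would amount to identifying the `independentPart` slices of a template body itself.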
Dec 10 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-12-10 18:43, H. S. Teoh via Digitalmars-d wrote:

 That's why the current object file model doesn't work very well.

 You'd have to extend the object file format to include compiler IR for
 templates, then the compiler can instantiate templates from that IR
 without needing access to the source. Which is a feature I've brought up
 several times, but nobody seems to be interested in doing anything about
 it.
Can't you just put it in a custom section? Or perhaps that's what you're saying. Although, I'm not sure if OMF supports custom sections. -- /Jacob Carlborg
Dec 10 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Dec 10, 2014 at 08:33:22PM +0100, Jacob Carlborg via Digitalmars-d
wrote:
 On 2014-12-10 18:43, H. S. Teoh via Digitalmars-d wrote:
 
That's why the current object file model doesn't work very well.

You'd have to extend the object file format to include compiler IR
for templates, then the compiler can instantiate templates from that
IR without needing access to the source. Which is a feature I've
brought up several times, but nobody seems to be interested in doing
anything about it.
Can't you just put it in a custom section? Or perhaps that's what you're saying. Although, I'm not sure if OMF supports custom sections.
[...] That *is* what I'm saying. But so far, it seems people aren't that interested in doing that. *shrug* T -- The volume of a pizza of thickness a and radius z can be described by the following formula: pi zz a. -- Wouter Verhelst
Dec 10 2014
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 17:19:53 UTC, Tobias Pankrath 
wrote:
 On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto 
 wrote:

 Lots of options are possible when the C compiler and linker 
 model aren't being used.

 ..
 Paulo
I don't see how symbol table information and relocation meta data is sufficient to produce the correct object code if the template parameters are unknown. // library void foo(T, U)(T t, U u) { t.tee(); u.uuuh(); } // my code foo!(ArcaneType1, DubiousType2)(a, d);
Simple: by dropping the C-based linker model, as I stated in my comment. -- Paulo
Dec 10 2014
next sibling parent "Tobias Pankrath" <tobias pankrath.net> writes:
On Wednesday, 10 December 2014 at 18:16:54 UTC, Paulo Pinto wrote:
 On Wednesday, 10 December 2014 at 17:19:53 UTC, Tobias Pankrath 
 wrote:
 On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto 
 wrote:

 Lots of options are possible when the C compiler and linker 
 model aren't being used.

 ..
 Paulo
I don't see how symbol table information and relocation meta data is sufficient to produce the correct object code if the template parameters are unknown. // library void foo(T, U)(T t, U u) { t.tee(); u.uuuh(); } // my code foo!(ArcaneType1, DubiousType2)(a, d);
Simple: by dropping the C-based linker model, as I stated in my comment. -- Paulo
I don't care for the C-based linker model. You'll have to recompile the template; symbol table information and relocation data are just not enough, in any linker model. So you'll need the body of foo, and you'll need to compile it at "link time". What advantages of a hypothetical Pascal-inspired D linker model are left now? If we just want to have binary for binary's sake, we could share zipped library source and teach dmd how to unzip it.
Dec 10 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 10 December 2014 at 18:16:54 UTC, Paulo Pinto wrote:
 Simple, by dropping C based linker model as I state on my 
 comment.
Oh please, that is a salesman's answer, not an engineer's.
Dec 10 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 21:59:57 UTC, deadalnix wrote:
 On Wednesday, 10 December 2014 at 18:16:54 UTC, Paulo Pinto 
 wrote:
 Simple: by dropping the C-based linker model, as I stated in my
 comment.
Oh please, that is a salesman's answer, not an engineer's.
I was talking about how the toolchains for other programming languages work. It is completely clear to me that this wouldn't work for D; besides, there are lots of more important areas to improve on. As a language geek who just lurks and works in other languages, I don't have anything to sell. -- Paulo
Dec 10 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto 
wrote:
 The libraries contain the required metadata for symbol tables 
 and code locations that need to be extracted into the 
 executable/library.

 Package definition files contain the minimum information the 
 compiler needs to know to search for the remaining information.

 Example,

 ...
Example shows generics, not templates. Full-blown template/mixin support is impossible without full source access in one form or another.
Dec 10 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 December 2014 at 19:24:17 UTC, Dicebot wrote:
 On Wednesday, 10 December 2014 at 14:16:47 UTC, Paulo  Pinto 
 wrote:
 The libraries contain the required metadata for symbol tables 
 and code locations that need to be extracted into the 
 executable/library.

 Package definition files contain the minimum information the 
 compiler needs to know to search for the remaining information.

 Example,

 ...
Example shows generics, not templates. Full-blown template/mixin support is impossible without full source access in one form or another.
That was just an example, I could have written lots of other stuff. - Paulo
Dec 10 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 10 December 2014 at 21:39:42 UTC, Paulo Pinto wrote:
 That was just an example, I could have written lots of other 
 stuff.
Then please show something that actually helps and is applicable to the D template system. There is not much value in vague references with an "imagine the rest yourself" flavor.

To be specific, I am interested in how it would handle a pattern like this (very common in D code and not present in Ada at all, AFAIK):

void foo(T)(T t)
{
    mixin(generator!T());
}

What exactly would one put in the IR for a plain uninstantiated foo?
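For illustration only, here is one hypothetical `generator` (invented, not from the original post) that makes the problem concrete: the mixed-in source string itself depends on T, so before instantiation there is no token stream to lower into IR at all:

```d
import std.traits : FieldNameTuple;

// Hypothetical generator: the string it returns -- and therefore the
// code foo's body compiles to -- varies with the fields of T.
string generator(T)()
{
    string code = "import std.stdio;\n";
    foreach (name; FieldNameTuple!T)
        code ~= "writeln(t." ~ name ~ ");\n";
    return code;
}

void foo(T)(T t)
{
    // Until T is known, this body has no fixed token stream.
    mixin(generator!T());
}

struct Point { int x; int y; }

void main()
{
    foo(Point(1, 2)); // prints 1, then 2
}
```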
Dec 10 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 10 December 2014 at 22:34:50 UTC, Dicebot wrote:
 Then please show something that actually helps and is 
 applicable to D template system. There is not much value in 
 vague references with "imagine rest yourself" flavor.

 To be specific I am interested how would it handle pattern like 
 this (very common for D code and not present in Ada at all 
 AFAIK):

 void foo(T)(T t)
 {
     mixin(generator!T());
 }

 What exactly would one put in IR for plain uninstantiated foo?
Maybe you should agree on what a high-level IR is first? Here are some alternatives:

1. Regular IR: A high-level IR basically is a flattened AST where you do substitutions, perform partial evaluation and strip out symbols. In a high-level IR that supports templating you might preserve several partial instances using incomplete types. If it is non-transformable to begin with… then the compiler won't transform it.

2. Search-optimizing IR: What the compiler does could be based on heuristics and statistical information. Think term-rewriting systems working on "possible programs" with heuristics as a guide, so you build up a database of precompiled segments for "possible programs" and produce guiding data structures that speed up the optimization search. The basic idea being that you spend a lot of time precompiling libraries, and cut down on compiling concrete instances of programs.

3. Generating IR: The compiler could build JITable compiler code for templates, e.g. the templates are stored as a VM program that is executed by the compiler and is allowed to make calls into the compiler code. Basically "compile-time functions" with inside-compiler know-how.
Dec 10 2014
prev sibling parent "Dicebot" <public dicebot.lv> writes:
On Wednesday, 10 December 2014 at 08:43:49 UTC, Kagamin wrote:
 On Tuesday, 9 December 2014 at 20:55:51 UTC, Dicebot wrote:
 Because you don't really create a template that way but 
 workaround broken function behavior. It is not the usage of 
 empty templates that is bad but the fact that plain functions 
 remain broken => not really a solution.
You can compile against phobos sources instead of interface files.
As far as I understand the gdc/ldc problem, the way it currently works there is no difference between .di and .d files - if a function is not a template, its binary is expected to be found in the matching object file - and if that object file belongs to a prebuilt static lib it is completely out of the question, even if the sources are fully available. I have never understood exactly what in the frontend makes it that much of a problem, though.
Dec 10 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 2014-12-08 at 07:18 -0800, H. S. Teoh via Digitalmars-d wrote:
[…]
 Yeah, I find in my own experience that gdc -O3 tends to produce code
 that's consistently ~20% faster than dmd -O, especially in
 compute-intensive code. The downside is that gdc usually lags behind dmd
 by one release, which, given the current rate of development in D, can
 be quite a big difference in feature set available.
GDC is tied to the GCC release program I guess, so gdc can only be updated when there is a new GCC release. I am not up to compiling gdc from source, but compiling ldc2 is very straightforward, so I tend to use that by default to get something fast that is more or less up-to-date with DMD.

-- 
Russel.
Dec 09 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 09 Dec 2014 11:09:44 +0000
Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Mon, 2014-12-08 at 07:18 -0800, H. S. Teoh via Digitalmars-d wrote:
 […]
 Yeah, I find in my own experience that gdc -O3 tends to produce code
 that's consistently ~20% faster than dmd -O, especially in
 compute-intensive code. The downside is that gdc usually lags behind dmd
 by one release, which, given the current rate of development in D, can
 be quite a big difference in feature set available.
 GDC is tied to the GCC release program I guess
nope. it's just lack of developers.
 I am not up to compiling gdc from source, but compiling ldc2 is very
 straightforward,
to the extent that i can't build git head. ;-)
Dec 09 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 2014-12-09 at 13:22 +0200, ketmar via Digitalmars-d wrote:
 On Tue, 09 Dec 2014 11:09:44 +0000
 Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[…]
 GDC is tied to the GCC release program I guess
nope. it's just lack of developers.
Too much effort expended on DMD I guess ;-)
 I am not up to compiling gdc from source, but compiling ldc2 is very
 straightforward,
to the extent that i can't build git head. ;-)
Works fine for me, I just built it 15 mins ago.

-- 
Russel.
Dec 09 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 09 Dec 2014 11:34:34 +0000
Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I am not up to compiling gdc from source, but compiling ldc2 is very
 straightforward,
to the extent that i can't build git head. ;-)
 Works fine for me, I just built it 15 mins ago.
to be honest i tried that a month or two ago. it failed somewhere in the middle with some error message and i deleted it.
Dec 09 2014
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Dec 09, 2014 at 11:09:44AM +0000, Russel Winder via Digitalmars-d wrote:
 On Mon, 2014-12-08 at 07:18 -0800, H. S. Teoh via Digitalmars-d wrote:
 […]
 Yeah, I find in my own experience that gdc -O3 tends to produce code
 that's consistently ~20% faster than dmd -O, especially in
 compute-intensive code. The downside is that gdc usually lags behind
 dmd by one release, which, given the current rate of development in
 D, can be quite a big difference in feature set available.
GDC is tied to the GCC release program I guess, so gdc can only be updated when there is a new GCC release. I am not up to compiling gdc from source, but compiling ldc2 is very straightforward, so I tend to use that by default to get something fast that is more or less up-to-date with DMD.
[...]

I used to compile gdc from source, but unfortunately, the gcc build scripts are so very temperamental and sensitive... the slightest environment variable set wrong in your system, and you're in for unending hours of hair-pulling frustration trying to figure out what went wrong, given only an error message that almost always has nothing whatsoever to do with the real problem, which had already happened half an hour earlier.

This is especially so if you attempt to build with a gcc version that isn't the latest development version, which is inevitably incompatible with my current system's gcc version. That means I have to install it in a custom path, which is often a source of trouble, because any time you change the default settings you have to be prepared for lots of things exploding in your face unless you know exactly what you're doing (and I don't).


T

-- 
You have to expect the unexpected. -- RL
Dec 09 2014
prev sibling next sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2014-12-06 at 07:33 -0800, H. S. Teoh via Digitalmars-d wrote:
 
[…]
 Whoa. So they're basically going to rely on JIT to convert those 
 boxed Integers into hardware ints for performance? Sounds like I 
 will never consider Java for computation-heavy tasks then...
 
Exactly the opposite: the JVM and JIT technology is getting to the stage where boxing, and hence unboxing, happens less and less. For most computationally intensive tasks there will be no boxing and unboxing at all. Currently I still have to use primitive types to get Java to be faster than C and C++, but there are hints that the next round of JIT technology and JVM improvements will make that unnecessary.

Of course there are elements in the JVM-verse who think that primitive types are the only way of doing this and that relying on JVM/JIT technology is anathema. This is still a moot point, no decisions as yet.

-- 
Russel.
Dec 06 2014
prev sibling next sibling parent Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, Dec 6, 2014 at 7:26 AM, Russel Winder via Digitalmars-d <
digitalmars-d puremagic.com> wrote:
 Primitive types are scheduled for removal, leaving only reference
 types.
Are you referring to: http://openjdk.java.net/jeps/169 ?
Dec 07 2014
prev sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 2014-12-07 at 13:57 -0800, Ziad Hatahet via Digitalmars-d wrote:
 
 Are you referring to: http://openjdk.java.net/jeps/169 ?
That is one part of it, but it alone will not achieve the goal.

-- 
Russel.
Dec 08 2014