
digitalmars.D - OT: Nature on the 'end' of Moore's Law

Laeeth Isharc <laeethnospam nospam.laeeth.com> writes:
http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
Feb 16 2016
Ola Fosheim Grøstad writes:
On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
 http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
It is more a sign of Intel not having competition from AMD, and of the cheap, low-energy chip market taking over the consumer market. Many Apple devices have only 2 cores, and most programs don't benefit much from more than 2 cores. Other vendors are considering switching away from silicon to more expensive materials, integrating computing with memory, layering/stacking, etc. Intel may opt for making chips slower (less heat) while researching the next technology shift.
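To put a rough number on why extra cores stop paying off, here is a minimal Amdahl's-law sketch in D (my own illustration, with an assumed 60% parallelizable fraction, not a figure from the article):

import std.stdio;

// Amdahl's law: with parallelizable fraction p and n cores,
// the overall speedup is 1 / ((1 - p) + p / n).
double speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

void main()
{
    // With 60% of the work parallelizable, 2 cores give ~1.43x,
    // while 8 cores still only give ~2.11x.
    foreach (n; [1, 2, 4, 8])
        writefln("%d cores: %.2fx", n, speedup(0.6, n));
}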
Feb 16 2016
Joakim <dlang joakim.fea.st> writes:
On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
 http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
Good news for D and other AoT-compiled languages, as software will have to take up the slack. Software has been able to get much more inefficient over the years because faster hardware from Moore's law would come along and make it all run just as fast. Now devs will actually have to start worrying about efficiency in their code again. We've already seen Intel and x86 hit hard by the mobile shift, because they cannot hit the performance-to-battery-power ratio that Qualcomm and other ARM vendors routinely hit, which is why Intel has AMD-like share on mobile devices. :) I'm guessing it's a similar situation for Microsoft with Windows: they just couldn't get it turned around fast enough for mobile. This is going to affect inefficient programming languages in the same way.
Feb 16 2016
Ola Fosheim Grøstad writes:
On Tuesday, 16 February 2016 at 11:43:05 UTC, Joakim wrote:
 On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc 
 wrote:
 http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
Good news for D and other AoT-compiled languages, as software will have to take up the slack. Software has been able to get [...]
Just because improvement in density is slowing down does not mean that hardware won't change. D may need to focus more on SIMD and GPU/coprocessor processing to keep up. What good is there in running fast non-SIMD CPU code if "slower" high-level languages autogenerate SIMD/GPU code on the fly using JIT compilation?
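As a minimal sketch of what that looks like in D today, core.simd exposes vector types directly (x86_64 with DMD/LDC; the values here are just illustrative):

import core.simd;
import std.stdio;

void main()
{
    // float4 maps to a 128-bit SSE register holding four floats.
    float4 a = [1.0f, 2.0f, 3.0f, 4.0f];
    float4 b = [5.0f, 6.0f, 7.0f, 8.0f];
    float4 c = a + b;  // one vector add instead of four scalar adds
    writeln(c.array);  // prints [6, 8, 10, 12]
}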
Feb 16 2016
Laeeth Isharc <laeethnospam nospam.laeeth.com> writes:
On Tuesday, 16 February 2016 at 11:43:05 UTC, Joakim wrote:
 On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc 
 wrote:
 http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
Good news for D and other AoT-compiled languages, as software will have to take up the slack. Software has been able to get much more inefficient over the years because faster hardware from Moore's law would come along and make it all run just as fast. Now devs will actually have to start worrying about efficiency in their code again. We've already seen Intel and x86 hit hard by the mobile shift, because they cannot hit the performance-to-battery-power ratio that Qualcomm and other ARM vendors routinely hit, which is why Intel has AMD-like share on mobile devices. :) I'm guessing it's a similar situation for Microsoft with Windows: they just couldn't get it turned around fast enough for mobile. This is going to affect inefficient programming languages in the same way.
Seems likely. If one treats a cheap resource as if it were free, then eventually it won't be cheap anymore.

I doubt very much that the phase transition is a consequence of market structure. It is more like the natural way that technology unfolds (see Theodore Modis). No doubt at some point something else will come along, but there may be an energy gap in the meantime. A pity, given that data sets keep getting bigger.

GPU programming in D is just a matter of time. People are ready to pay to sponsor it, and on the other hand I am familiar with people who have written such libraries and done GPU work in D. But there are many things to work on, and it's a matter of priorities. If people who spend time moaning about D's perceived weaknesses would spend just a little time actually trying to make a contribution to improve things, we'd all be better off.

It's amazing how important the work you have done turned out to be, Joakim, when in the beginning I guess you were just trying to solve your own problem. I wonder why others don't have this kind of constructive spirit.

I am reminded a bit of the very funny (though, it turns out, slightly unfair) comments by a close associate of Chuck Moore's on a particular colorforth enthusiast (and thereby on language geeks in general):

http://yosefk.com/blog/my-history-with-forth-stack-machines.html

 Forth seems to mean programming applications to some and porting 
 Forth or dissecting Forth to others. And these groups don't seem 
 to have much in common.

 …One learns one set of things about frogs from studying them in 
 their natural environment or by getting a doctorate in zoology 
 and specializing in frogs. And people who spend an hour 
 dissecting a dead frog in a pan of formaldehyde in a biology 
 class learn something else about frogs.

 …One of my favorite examples was that one notable colorforth [a 
 Forth dialect] enthusiast who had spent years studying it, 
 disassembling it, reassembling it and modifying it, and made a 
 lot of public comments about it, but had never bothered running 
 it and in two years of 'study' had not been able to figure out 
 how to do something in colorforth as simple as: 1 dup +

 …[such Forth users] seem to have little interest in what it 
 does, how it is used, or what people using it do with it. But 
 some spend years doing an autopsy on dead code that they don't 
 even run. Live frogs are just very different than dead frogs.

 I guess I feel that I could say that if it isn't solving a 
 significant real problem in the real world it isn't really Forth.
Feb 16 2016
Ola Fosheim Grøstad writes:
On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
 I guess I feel that I could say that if it isn't solving a 
 significant real problem in the real world it isn't really 
 Forth.
Completely wrong: it was called PostScript, it ran on laser printers, and it solved a significant real problem for decades, namely describing graphics. Nobody in their right mind would willingly choose to implement an application in Forth beyond a trivial micro-controller. It was a fun little toy language for the 80s, and yes, I used it; but I would frankly much rather program in assembly. Is there a point in this?
Feb 16 2016
Kagamin <spam here.lot> writes:
On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
 A pity given data sets keep getting bigger.
Can't it be parallelized on the server, so that the client only receives presentable data? Then your only concern would be energy consumption.
Feb 17 2016
Ola Fosheim Grøstad writes:
On Wednesday, 17 February 2016 at 17:10:37 UTC, Kagamin wrote:
 On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc 
 wrote:
 A pity given data sets keep getting bigger.
Can't it be parallelized on the server, so that the client only receives presentable data? Then your only concern would be energy consumption.
Yes, it looks like server/HPC is increasingly becoming a separate CPU market again. I read somewhere that the next-gen high-end APU from AMD might have a TDP of 200-300 W (pretty hot) and basically integrate a full-blown GPU and use HBM memory (>128 GB/s?). Intel is going from 14nm to 10nm in 2017. IBM has succeeded with 7nm silicon-germanium, and it is projected for 2018? So yes, sure, density is reaching a limit, but that does not mean that you won't get more effective CPUs, larger dies, stacked layers, higher yields (cheaper chips), more CPUs per server, integrated cooling solutions, cheaper FPGAs, faster memory, etc...
Feb 17 2016
Ola Fosheim Grøstad writes:
And well, with new materials there is the potential for higher 
speeds. These researchers managed to get a silicon-germanium 
transistor up to 800 GHz (cryogenically cooled), and the article 
speaks of the possibility of running at THz speeds.

http://www.news.gatech.edu/2014/02/17/silicon-germanium-chip-sets-new-speed-record

Moore's law deals with the number of transistors on a single 
chip. But who cares, if you can have faster transistors and more 
chips?
Feb 17 2016
Chris Wright <dhasenan gmail.com> writes:
On Wed, 17 Feb 2016 18:06:00 +0000, Ola Fosheim Grøstad wrote:

 And well, with new materials there is the potential for higher
 speeds. These researchers managed to get a silicon-germanium
 transistor up to 800 GHz (cryogenically cooled), and the article
 speaks of the possibility of running at THz speeds.
 
 http://www.news.gatech.edu/2014/02/17/silicon-germanium-chip-sets-new-speed-record
 
 Moore's law deals with the number of transistors on a single chip.
 But who cares, if you can have faster transistors and more chips?
Distance penalties.

First there's the design issue of routing electrical impulses to different parts of the chip without interfering with other paths. You can solve that by making the chip even bigger, and you can partially address it with heavy-duty constraint solvers. Then there's that pesky speed of electrical signal transmission: a bigger chip incurs that penalty more often.

One thing you can do is simply replicate your CPU multiple times. We currently have multicore CPUs to do this in a convenient way, but this involves some caution with cache invalidation and shared memory. Muck about with scheduling and shared memory stuff and you could get more isolated parallelism, allowing cheaper manycore CPUs. Not sure if that would be much of a benefit.
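For the convenient multicore route, D already ships std.parallelism; here is a minimal sketch (my own illustration, not tied to any specific CPU above) where each iteration writes only its own element, so there is no shared state to invalidate between cores:

import std.parallelism;
import std.stdio;

void main()
{
    // parallel() splits the loop across a thread pool sized to
    // the machine's core count; each iteration writes only its
    // own element, so no locking is required.
    auto results = new double[1_000_000];
    foreach (i, ref r; parallel(results))
        r = cast(double) i * i;
    writefln("last element: %s", results[$ - 1]);
}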
Feb 17 2016
Kagamin <spam here.lot> writes:
On Wednesday, 17 February 2016 at 17:57:11 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 17 February 2016 at 17:10:37 UTC, Kagamin wrote:
 On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc 
 wrote:
 A pity given data sets keep getting bigger.
Can't it be parallelized on the server, so that the client only receives presentable data? Then your only concern would be energy consumption.
Yes, it looks like server/HPC is increasingly becoming a separate CPU market again. I read somewhere that the next-gen high-end APU from AMD might have a TDP of 200-300 W (pretty hot) and basically integrate a full-blown GPU and use HBM memory (>128 GB/s?).
I'm thinking more about distributed platforms. We made our server support a farm configuration, and the customer was happy to buy 6 farm nodes and plans to add 3 more. For some reason, a farm is cheaper than one big-iron server?
Feb 17 2016
Ola Fosheim Grøstad writes:
On Wednesday, 17 February 2016 at 18:24:36 UTC, Kagamin wrote:
 I'm thinking more about distributed platforms. We made our 
 server support a farm configuration, and the customer was happy 
 to buy 6 farm nodes and plans to add 3 more. For some reason, 
 a farm is cheaper than one big-iron server?
It probably has to do with yield (the number of faulty chips), market size, and competition. The volume was too low for IBM, so IBM recently "sold" its chip manufacturing plant to GlobalFoundries by paying them $1 billion to take it (a negative price of $1 billion).

High-end Xeon CPU (22nm): E7-8893 v3 (45M cache, 3.20 GHz), 4 cores
  tray price: $6841
  price in Norway: $11000

Desktop (14nm): i7-6700K (8M cache, up to 4.20 GHz), 4 cores
  street price in Norway: $344

The beefy Xeon has a very big cache and is more reliable, but it is slower and eeeexpensive...
Feb 17 2016