OT: Nature on the 'end' of Moore's Law
February 16, 2016
http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
February 16, 2016
On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
> http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

It is more a sign of Intel not having competition from AMD, and of the cheap, low-energy chip market taking over the consumer market. Many Apple devices are only dual-core, and most programs don't benefit much from more than two cores.

Other vendors are considering switching away from silicon to more expensive materials, integrating computing with memory, layering/stacking, etc.

Intel may go for making chips slower (less heat) while researching the next technology shift.

February 16, 2016
On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
> http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

Good news for D and other AoT-compiled languages, as software will have to take up the slack.  Software has been able to get much more inefficient over the years because the faster hardware from Moore's law would come in and make it all run just as fast.  Now, devs will have to actually start worrying about efficiency in their code again.

We've already seen Intel and x86 hit hard by the mobile shift, because they cannot hit the performance to battery power ratio that Qualcomm and other ARM vendors routinely hit, which is why Intel has AMD-like share on mobile devices. :) I'm guessing it's a similar situation for Microsoft with Windows, they just couldn't get it turned around fast enough for mobile.

This is going to affect inefficient programming languages in the same way.
February 16, 2016
On Tuesday, 16 February 2016 at 11:43:05 UTC, Joakim wrote:
> On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
>> http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
>
> Good news for D and other AoT-compiled languages, as software will have to take up the slack.  Software has been able to get

Just because improvement in density is slowing down does not mean that hardware won't change.

D may need to focus more on SIMD and GPU/coprocessor processing to keep up.

What good is there in running fast non-SIMD CPU code if "slower" high-level languages autogenerate SIMD/GPU code on the fly using JIT compilation?
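
For what it's worth, D already exposes vector types through core.simd. A minimal sketch, assuming an x86_64 target where float4 is available:

import core.simd;
import std.stdio;

void main()
{
    // Four single-precision lanes per operation instead of one.
    float4 a = [1.0f, 2.0f, 3.0f, 4.0f];
    float4 b = [5.0f, 6.0f, 7.0f, 8.0f];
    float4 c = a + b;   // compiles down to a single vector add

    writeln(c.array);   // [6, 8, 10, 12]
}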

February 16, 2016
On Tuesday, 16 February 2016 at 11:43:05 UTC, Joakim wrote:
> On Tuesday, 16 February 2016 at 10:20:57 UTC, Laeeth Isharc wrote:
>> http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338
>
> Good news for D and other AoT-compiled languages, as software will have to take up the slack.  Software has been able to get much more inefficient over the years because the faster hardware from Moore's law would come in and make it all run just as fast.  Now, devs will have to actually start worrying about efficiency in their code again.
>
> We've already seen Intel and x86 hit hard by the mobile shift, because they cannot hit the performance to battery power ratio that Qualcomm and other ARM vendors routinely hit, which is why Intel has AMD-like share on mobile devices. :) I'm guessing it's a similar situation for Microsoft with Windows, they just couldn't get it turned around fast enough for mobile.
>
> This is going to affect inefficient programming languages in the same way.

Seems likely.

If one treats a cheap resource as if it were free then eventually it won't be cheap anymore.

I doubt very much the phase transition is a consequence of market structure.  More like the natural way that technology unfolds (see Theodore Modis).  No doubt at some point something else will come along, but there may be an energy gap in the meantime.  A pity given data sets keep getting bigger.

GPU programming in D is just a matter of time.  People are ready to pay to sponsor it, and on the other hand I am familiar with people who have written such libraries and done GPU work in D.  But there are many things to work on, and it's a matter of priorities.

If people who spend time moaning about D's perceived weaknesses would spend just a little time actually trying to make a contribution to improve things, we'd all be better off.  It's amazing how important the work you have done turned out to be, Joakim, when in the beginning I guess you were just trying to solve your own problem.  I wonder why others don't have this kind of constructive spirit.

I am reminded a bit of the very funny (although, it turns out, slightly unfair) comments by a close associate of Chuck Moore's on a particular colorforth enthusiast (and thereby language geeks in general):

http://yosefk.com/blog/my-history-with-forth-stack-machines.html

    Forth seems to mean programming applications to some and porting Forth or dissecting Forth to others. And these groups don't seem to have much in common.

    …One learns one set of things about frogs from studying them in their natural environment or by getting a doctorate in zoology and specializing in frogs. And people who spend an hour dissecting a dead frog in a pan of formaldehyde in a biology class learn something else about frogs.

    …One of my favorite examples was that one notable colorforth [a Forth dialect] enthusiast who had spent years studying it, disassembling it, reassembling it and modifying it, and made a lot of public comments about it, but had never bothered running it and in two years of 'study' had not been able to figure out how to do something in colorforth as simple as:

    1 dup +

    …[such Forth users] seem to have little interest in what it does, how it is used, or what people using it do with it. But some spend years doing an autopsy on dead code that they don't even run.

    Live frogs are just very different than dead frogs.

I guess I feel that I could say that if it isn't solving a significant real problem in the real world it isn't really Forth.
February 16, 2016
On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
> I guess I feel that I could say that if it isn't solving a significant real problem in the real world it isn't really Forth.

Completely wrong: it was called PostScript, ran on laser printers, and solved a significant real problem for decades. As in: describing graphics.

Nobody in their right mind would willingly choose to implement an application in Forth beyond a trivial microcontroller. It was a fun little toy language for the 80s, and yes, I used it; but I would frankly much rather program in assembly.

Is there a point in this?

February 17, 2016
On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
> A pity given data sets keep getting bigger.

Can't it be parallelized on the server, with the client only receiving presentable data? Then your only concern will be energy consumption.
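
Something like this std.parallelism sketch is what I have in mind (the data set is made up for illustration); only the small final result would ever cross the wire:

import std.parallelism : taskPool;
import std.range : iota;
import std.stdio;

void main()
{
    // Stand-in for a big server-side data set.
    auto data = iota(1L, 10_000_001L);

    // Spread the reduction across all cores on the server;
    // the client only ever receives the final scalar.
    auto total = taskPool.reduce!"a + b"(data);
    writeln(total); // 50000005000000
}
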
February 17, 2016
On Wednesday, 17 February 2016 at 17:10:37 UTC, Kagamin wrote:
> On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
>> A pity given data sets keep getting bigger.
>
> Can't it be parallelized on the server, with the client only receiving presentable data? Then your only concern will be energy consumption.

Yes, it looks like server/HPC is increasingly becoming a separate CPU market again.
I read somewhere that the next-gen high-end APU from AMD might have a TDP of 200-300 W (pretty hot), basically integrate a full-blown GPU, and use HBM memory (>128 GB/s?).

Intel is going from 14 nm to 10 nm in 2017. IBM has succeeded with 7 nm silicon-germanium, projected for 2018 or so.

So yes, sure, density is reaching a limit, but that does not mean that you won't get more effective CPUs, larger dies, stacked layers, higher yields (cheaper chips), more CPUs per server, integrated cooling solutions, cheaper FPGAs, faster memory, etc.


February 17, 2016
And well, with new materials, there is the potential for higher speeds. These researchers managed to get a silicon-germanium transistor up to 800 GHz (cryogenically cooled), and the article speaks of the possibility of running at THz speeds.

http://www.news.gatech.edu/2014/02/17/silicon-germanium-chip-sets-new-speed-record

Moore's law deals with the number of transistors on a single chip. But who cares, if you can have both faster and more?

February 17, 2016
On Wednesday, 17 February 2016 at 17:57:11 UTC, Ola Fosheim Grøstad wrote:
> On Wednesday, 17 February 2016 at 17:10:37 UTC, Kagamin wrote:
>> On Tuesday, 16 February 2016 at 18:28:19 UTC, Laeeth Isharc wrote:
>>> A pity given data sets keep getting bigger.
>>
>> Can't it be parallelized on the server, with the client only receiving presentable data? Then your only concern will be energy consumption.
>
> Yes, it looks like server/HPC is increasingly becoming a separate CPU market again.
> I read somewhere that the next-gen high-end APU from AMD might have a TDP of 200-300 W (pretty hot), basically integrate a full-blown GPU, and use HBM memory (>128 GB/s?).

I'm thinking more about distributed platforms. We made our server support a farm configuration, and the customer was happy to buy 6 farm nodes and plans to add 3 more. For some reason, a farm is cheaper than one big-iron server?