Will Parallel Code Ever Be Embraced?

by noidi on 7/19/2012, 7:08 AM with 8 comments

by danieldk on 7/19/2012, 9:42 AM

"But the shiny object that has my attention at present consists of low-voltage ARM-type chips running on tiny, inexpensive systems that can be stacked together to do all kinds of interesting things for a fraction of the power my Intel Xeon uses."

This sentiment comes up very often, but has anyone actually done the math (for the case where you do temporarily need a lot of processing power)? E.g., the following post estimates the power use of a Raspberry Pi at around 2 W:

http://www.raspberrypi.org/phpBB3/viewtopic.php?f=2&t=60...

A recent Xeon or Core i7 is many times faster and has the advantage of providing shared-memory parallelism (as opposed to a cluster of Pis, where you have to distribute work over a 100 Mbit network).

Also, if he wants to save power, he shouldn't use a Xeon. Intel's mobile Core CPUs draw relatively little power as well. E.g., the last time I measured my Mac Mini, it used 12 W during normal use. And it's actually a usable desktop machine, in contrast to the Raspberry Pi.
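For what it's worth, here is a rough sketch of that math in Python. The 2 W and 12 W figures are the ones from this thread; the i7 power draw and the speedup factor are assumptions I'm inventing for illustration, so treat the result as order-of-magnitude at best:

    # Back-of-envelope only. 2 W (Pi) is from the post above; the i7
    # power and the speedup factor are assumed, not measured.
    pi_watts = 2.0
    i7_watts = 50.0   # assumed desktop i7 package power under load
    speedup = 30.0    # assumed: one i7 ~ 30x one 700 MHz Pi

    # Pis needed to match one i7 on a perfectly parallel workload:
    cluster_watts = speedup * pi_watts
    print(f"{speedup:.0f} Pis draw ~{cluster_watts:.0f} W vs. one i7 at {i7_watts:.0f} W")

    # And that ignores distribution cost: at 100 Mbit/s (~12.5 MB/s
    # usable at best), shipping 1 GB of work takes over a minute
    # before any computation even starts.
    seconds = (1024 ** 3) / (100e6 / 8)
    print(f"1 GB over 100 Mbit Ethernet: ~{seconds:.0f} s")

Even under those assumptions (which are, if anything, generous to the Pi), the cluster loses on power alone, before you count power supplies, switches, and the network overhead.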

by iso-8859-1 on 7/19/2012, 9:07 AM

I see this as two separate prophecies:

* One is the Intel MIC, which is due to arrive THIS YEAR (http://blogs.intel.com/technology/2012/06/intel-xeon-phi-cop...)

* One is a total revamp of Operating Systems, so that everything is virtualized.

The first is hardly a prophecy, since it's so close to being reality; the article was written on the day of the Intel announcement.

The second is maybe, what, 20 years into the future? I'm not even sure there's a need. Security problems are no longer technical; they are caused by breaches of trust. Integration between all those different VMs will still be needed, and badware will use that interface too. People like integration; separating everything into its own VM will hinder it, and customers won't like that.

by unwind on 7/19/2012, 10:52 AM

"Except for games and some build cycles, I'm almost never waiting because the CPU has maxed out."

That's just... weird. What would "maxing out" even mean for a CPU? Going "fast enough", somehow? Maybe it's because build cycles are exactly what I spend a lot of my time in front of a computer on, but I really don't think CPUs are ever going to be "fast enough". Even for "ordinary computing", I often find e.g. browsers and office applications rather slow.
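One concrete reading of "maxed out" is that the CPU is the bottleneck, i.e. wall-clock time roughly equals CPU time. A minimal sketch of how you might check that in Python (the helper function is mine, not anything from the thread):

    import time

    def utilization(fn, *args):
        # Compare wall-clock time to CPU time. Near 100% means the
        # CPU really is maxed out; a large gap means the task is
        # waiting on disk, network, or other processes instead.
        wall0, cpu0 = time.perf_counter(), time.process_time()
        fn(*args)
        wall = time.perf_counter() - wall0
        cpu = time.process_time() - cpu0
        print(f"wall {wall:.2f}s, cpu {cpu:.2f}s, ~{cpu / wall:.0%} CPU-bound")

    # A pure-computation task should report close to 100%:
    utilization(lambda: sum(i * i for i in range(10_000_000)))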

by URSpider94 on 7/19/2012, 9:31 AM

Well put.

There will always be some applications (games, compilers, Photoshop, 3D rendering) that will benefit from fine-grained parallelism. For the rest, being able to run your web browser, DVD ripper, music streamer and IDE on four separate cores is good enough.
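To make the distinction concrete, here's a minimal Python sketch of the fine-grained case: one job split across a pool of worker processes. The brightness filter and all the numbers are hypothetical, just stand-ins for the per-pixel work a Photoshop-style tool would do:

    # Fine-grained parallelism: split ONE task across all cores,
    # rather than giving each application its own core.
    from multiprocessing import Pool

    def brighten(chunk):
        # Hypothetical per-pixel work on one slice of the image.
        return [min(255, px + 40) for px in chunk]

    if __name__ == "__main__":
        pixels = list(range(256)) * 10_000      # stand-in image data
        workers = 4
        chunks = [pixels[i::workers] for i in range(workers)]
        with Pool(processes=workers) as pool:
            brightened = pool.map(brighten, chunks)  # one chunk per core

The coarse-grained case needs no code at all, which is rather the point: the OS scheduler already spreads independent processes across cores for free.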