Thursday, October 11, 2007

Re: [BLUG] NOV meeting topic

On Wed, Oct 10, 2007 at 08:52:01PM GMT, Simón Ruiz [simon.a.ruiz@gmail.com] said the following:
> At the OLF, maddog gave a presentation about computing and power
> consumption. Interesting numbers game: At 350 watts per computer, in
> order to double the number of personal computers on the planet
> (1,000,000,000), we would need to build 25 power plants with output
> equal to the single largest power plant we have today.

That's not right. Sorry Simon, nothing personal, but he's off on
those figures. He's using big numbers to wow people, and I think that's
a bad thing to do. He should use real figures. People take numbers
like that, go off, and treat them as fact, and then we have an entire
population that thinks the Rand Corporation in 1954 predicted what
a computer would look like in 2004, when in reality that was just an
entry in a Photoshop contest on Fark.

First of all, a lot of people at home turn off their computers at
night and while they're away from the house. Plus, many people have a
computer at home and one at work, so it's not 24/7 usage like servers
or geeks who leave all their computers on. Secondly, 350 watts is only
when you're running full tilt. Playing solitaire is not full tilt,
although the new Vistaness(TM) OpenGL(TM) Solitaire(TM) stuff might be
close. ;-)

As Ben mentioned in his email, the power consumption of desktops is
well documented to be much lower. A power supply's wattage rating is
the maximum load it can support (including any extra hardware), not
what the machine typically draws.

So I would *guess* that overall we're only using 20% of the power he
says we use. Still, I know what he's getting at, and conservation is a
good thing, so it's worth thinking about. Sorry to be down on him, but
I would have thought maddog would be more careful with his statistics
than that. I mean, people quote him.

> I think the ideal would probably be ultra-power-efficient massive
> back-end servers and solid-state, fanless, maybe even PoE-fed
> thin-client front ends.

Servers won't ever be as power efficient as desktops because you can't
(or shouldn't) turn them off. Virtualization is something that will
help, because you avoid the waste of running a power supply 24/7 just
so a DNS server can have its own box. I'm able to buy a somewhat
beefier server and run 16 isolated virtual machines on a single
physical machine.

For example, let's say I have a DNS server on its own box with a 350
watt power supply, and under normal operation the box draws an average
of 20% of that. So it's using 70 watts.

Now let's say I have a Xen server with enough RAM and CPU power to
host 64 machines (16 GB of RAM and 8 cores would do nicely). This
machine has a 700 watt power supply. With all the virtual machines
running it would draw at most 700 watts, probably less. There would be
a little more latency overall, but the average usage per virtual
machine is only about 11 watts. That's a pretty significant savings:
you're doing the work of 64 machines using the power of 10. And as for
cost, I just priced it out and you could buy a server like that for
$4,000, whereas 64 physical servers would cost $32,000 even if you
paid only $500 apiece.
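To show my arithmetic, here's a quick back-of-the-envelope script.
These are my assumed numbers from above (20% average draw, 700 watts
worst case for the Xen host), not measurements:

```python
# Standalone boxes: 350 W supply at ~20% average draw.
standalone_watts = 0.20 * 350          # 70 W per box
n_vms = 64
xen_host_watts = 700                   # worst case for the consolidated host

per_vm_watts = xen_host_watts / n_vms              # average draw per VM
machines_worth = xen_host_watts / standalone_watts # host draw in "box" units

print(f"standalone box:  {standalone_watts:.0f} W")       # 70 W
print(f"per VM:          {per_vm_watts:.1f} W")           # ~10.9 W
print(f"host draw equals {machines_worth:.0f} standalone boxes")  # 10

# Cost side: one ~$4,000 host vs. 64 boxes at $500 each.
print(f"hardware savings: ${64 * 500 - 4000}")            # $28,000
```

So even at the host's worst case, each VM averages about 11 watts and
the whole thing draws what only 10 of the standalone boxes would.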

For the curious, this is what I had in mind for hardware off of
Newegg.com:

1 x SUPERMICRO SYS-6015P-TRB 1U Barebone Server chassis - $1,289.99
4 x WD 500GB 7200 RPM SATA2 Hard drives - $519.96 ($129.99 each)
2 x Intel Xeon E5335 Clovertown 2.0GHz Processors - $709.98 ($354.99 each)
1 x 3ware 9650SE-4LPML 4 lane SATA II Raid controller - $344.99
4 x Crucial 4GB (2 x 2GB) 240-Pin DDR2-667 FB-DIMM memory - $1,079.96 ($269.99 each)

Total: $3,944.88
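If you want to double-check the math on that parts list (prices as
quoted above), here's a one-liner's worth of Python:

```python
# Sum the Newegg line items and confirm they match the quoted total.
parts = {
    "SUPERMICRO SYS-6015P-TRB chassis":       1289.99,
    "4 x WD 500GB SATA2 drives":              4 * 129.99,
    "2 x Intel Xeon E5335 2.0GHz":            2 * 354.99,
    "3ware 9650SE-4LPML RAID controller":     344.99,
    "4 x Crucial 4GB FB-DIMM kits":           4 * 269.99,
}
total = round(sum(parts.values()), 2)
print(f"Total: ${total:.2f}")   # Total: $3944.88
```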

Pretty good, and it would have 1.5 TB of RAID-5 storage, power supply
redundancy, and hot-swap hard drives. For another $2,000 you could
bump those processors to 3GHz. Get two of those plus a shared disk and
you could fail over quickly after a serious hardware failure.

> The other thing I see as a big possibility would be to have beefier
> thin-client front-ends that contribute to the processor requirements
> of the whole system through an OpenMosix-style back-end pool of
> processor time.

Interesting idea. A lot of people are starting to do that with things
like Citrix, Linux Terminal Server Project, MS Terminal Server, etc.

> So in a building of 300 connected workstations, the whole building
> would have 300 workstations worth of processor power. If you're only
> using a word processor or a browser or an e-mail client or something,
> your idle processor time would be up for someone else's use. But say
> you're trying to render 3-d graphics or fold proteins or something
> that can use as much processor time as it can get, you could then
> start enlisting the idle processor time of all your neighbors for your
> task.
>
> What do you think? Silly or workable?

Doable and done. Matt Liggett, a former sysadmin at Kiva and the guy
who wrote the original knowledge base at IU, was working on a program
for IU that would harness the idle time of desktops into one giant
supercomputer. I'm not sure how far he got with it, but I've heard of
other places doing this before.

I thought it would be awesome if someplace like Dreamworks or Pixar
would let people donate spare cycles to help render frames for their
next movie. Of course, that would encourage people to leave their
computers on, which is what you were hoping to avoid.

--
Mark Krenz
Bloomington Linux Users Group
http://www.bloomingtonlinux.org/

_______________________________________________
BLUG mailing list
BLUG@linuxfan.com
http://mailman.cs.indiana.edu/mailman/listinfo/blug
