Thursday, October 11, 2007

Re: [BLUG] NOV meeting topic

> > Doable and done. Matt Liggett, a former sysadmin at Kiva and the guy
> > who wrote the original knowledge base at IU, was working on a program
> > for IU that would slave the idle time of desktops into one giant
> > supercomputer. I'm not sure how far he got with it, but I've heard of other
> > places doing this before.
>
> Is that the OpenMosix project?
>
> As I understood it, it only worked with specific software. You couldn't,
> say, offload any old video encoding or (place random mundane
> high-processor-time task here) job yet.

No, it's not OpenMosix. It may be Condor (someone from my group will
chime in if I'm wrong (at 1:30 AM I'm sure I am :) )), and it can
run video rendering. Sorta like SETI@home for clustering... I believe
it has been rewritten since Liggett was involved, but it serves the
same purpose.
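
For the curious, Condor jobs are described in small plain-text submit
files; a sketch (the render script and file names here are made up)
looks something like:

    # Hypothetical submit file: farm 100 render jobs out to idle desktops
    universe   = vanilla
    executable = render_frame.sh
    arguments  = $(Process)
    output     = frame_$(Process).out
    error      = frame_$(Process).err
    log        = render.log
    queue 100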

Re: [BLUG] NOV meeting topic

Dare I say it?

25 power plants to double the number of computers in the world seems
like a bargain to me. Looking at:

http://www.industcards.com/ppworld.htm

I roughly counted how many power plants there are in the world right
now, and easily got over 2,500. So a 1% increase in the world's power
plants would enable a 100% increase in computers? That's quite a return
on investment, and that's using numbers that by Mark's reckoning (which
makes sense to me) are too high.

Looking at it another way, apparently Indiana and Ohio together
produce roughly enough electricity to power all of the world's
computers. Probably a little bit short, but... these two states are a
sliver of the world's land area, and the air here is still clean even
though we're burning a lot of dirty coal in old plants.

Suppose we wanted to double the number of automobiles in the world.
We'd likely be looking at something like an 80% increase in power
consumption. That's a rough estimate based on zero research, but
whatever the correct number is, it's going to be way way way over 1%.
And a car takes WAY more energy when it's in use!

http://auto.howstuffworks.com/horsepower2.htm

It says that a Ford Escort uses 110 horsepower = 82,026 watts. So
driving a Ford Escort (my old one used to get about 33 mpg; need I
point out that the Escort is not among the highest-performance
vehicles ever designed?) for ONE MINUTE uses roughly the same energy
as leaving a desktop computer idle (at 60 watts) for ONE DAY. The
twelve hours of driving I'm planning this weekend to visit my parents
will use roughly the energy of two years of leaving my desktop turned
on 24/7. And cars are mobile things with difficult emissions-control
problems. The computers' new power plants could be anything from wind
farms to fuel cells.
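
If you want to check my arithmetic, here it is as a quick Python
sketch (same figures as above; 1 horsepower = 745.7 watts):

    # Car vs. idle desktop, using the figures cited above
    HP_TO_WATTS = 745.7
    escort_watts = 110 * HP_TO_WATTS            # ~82,000 W while driving
    desktop_watts = 60                          # idle desktop

    minute_of_driving_wh = escort_watts / 60.0  # ~1,370 Wh
    day_of_idling_wh = desktop_watts * 24       # 1,440 Wh -- about the same

    trip_wh = escort_watts * 12                 # twelve hours of driving
    years_of_idling = trip_wh / (desktop_watts * 24 * 365)
    print(minute_of_driving_wh, day_of_idling_wh, years_of_idling)  # ~1.9 yrs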

If we want to conserve energy, we're much better off focusing on
vehicles than computers. Meanwhile, we're probably all drawing more
power for lighting than we are for computing (60W?!? I've got
individual light bulbs that draw that much! (though fewer and fewer of
them)).

Having said all of that, I love the idea of thin clients, and I see it
as a minor tragedy that they aren't more popular. But, I would focus
on deploying them in business environments. Gone would be the days of
people wondering whether to save to their computer or to the
server... and if their computer broke, just give 'em a new one, have
them log in, and they're right back in business... Save power? All
the better!

David


Re: [BLUG] Processor speed.

On 11/10/2007, Steven Black <blacks@indiana.edu> wrote:
> I expect in a time to come there will be full clustering solutions on
> a single chip. It will be interesting to see how these are marketted
> to the home consumer, and how much bloat the software manufacturers
> are willing to add to make the hardware interesting.
>
Intel's behind the curve on this -- IBM (the Cell processor), Sun (8
cores x 4 threads per core per die in the 1st-gen Niagara, 8 threads
per core in the 2nd), and AMD (the crossbar architecture used by
Opteron motherboards, in which each CPU controls separate memory
banks, so the memory layout is no longer uniform) all show that
"commodity" hardware is already encroaching on what is traditionally
the domain of computer clusters.

> My guess: Microsoft will create a new proprietary platform that only
> runs on the new hardware. All of their software will then be rewritten
> for this new platform. They'll stop supporting the old software and

I am not sure how successful Microsoft will be in this new market. The
last success they had with a new hardware platform was WinCE/Windows
Mobile, and even that took many iterations. The Xbox 360, which also
has an interesting CPU architecture, is a loss-leader (MS's gaming
division lost, what, a billion dollars a year?). And Vista doesn't even
run well on dual-core laptops!

> Technology may change focus once we get to a point where rolling
> blackouts are widespread throughout the US. I suspect this is
> just a matter of time. It wasn't that long ago that we thought
> the idea of rolling blackouts occurring anywhere in the U.S. was
> absurd. Now there's a (relatively) big market for home
> generators tied directly into the wiring, kicking in
> automatically.

IBM's putting the Cell processor in workstations, I think. And blade
clusters are becoming more common. It's quite interesting how even the
server marketplace is becoming more interested in efficiency (due in
part to growing cooling requirements). I recall a recent study that,
interestingly, showed Opteron systems being more efficient than their
Xeon equivalents when power usage is measured at the wall socket, but
less efficient at the CPU level or when the system load is high. The
culprit? FB-DIMM memory controllers behave very poorly when idling --
their power draw is mostly load-invariant.

--
Michel

Re: [BLUG] NOV meeting topic

Wow, lots of thoughts. Awesome.

ben lipkowitz wrote:
> A typical desktop will draw about 60W when idle, and a typical laptop
> will draw 25W at idle. Let's face it, most computers are idle most of
> the time. Both types of computers draw near zero when in sleep or
> standby mode. Many computers are not configured by default to use sleep
> mode, and people are not going to figure out how to enable it. Then
> there is the short delay in starting up which leads many people to turn
> off this feature. A computer could "learn" its user's habits so that it
> will already be started up by the time they want to use it. So, this
> is a software problem really.

I disagree. There is definitely a software variable in this equation,
but do we need to be drawing 60W for an idle computer, when we could be
drawing 5W for an active computer? No amount of software hacking will
make that change.

In the move to alternative energy sources, there are many variables in
the equation. Software is one. Hardware is one. Power supply efficiency
is one. So are the efficiency of solar cells, wind turbines, and
hydroelectric dams in creating the energy, and the inefficiency of
transmitting that energy over long distances, which drives the need to
decentralize power generation.

Human habit is the biggest.

We need to examine every single Watt in and every single Watt out,
because spread across 5 billion humans, every Watt matters tremendously.

The world will not step into the light (to borrow from a famous science
fiction writer) until it is not only economically feasible to do so, but
economically stupid not to.

Set aside the environmental impact of every single Watt we use: what
about getting computing up and working in places off the grid? Places
where you might have a solar panel or a guy pedaling a bicycle (it's
happening) to generate all the power for miles around. Do you use one
computer that draws 60W when it isn't doing anything? Or twelve
computers that draw 60W collectively while twelve people are using them
to learn, to connect, to communicate, to drive business, to create
art... to get lost in the Wikipedia?

Chris Colvard wrote:
> I was thinking about this a while ago. Now think of these thin-client
> front ends being provided by an ISP with the ISP providing online
> storage space and the typical apps people use (Word Processor,
> Spreadsheet, etc.) as AJAX applications (or something else hosted). If
> the ISP owns the thin-client then a subscriber doesn't have to manage
> software or buy new computers and the ISP probably gets a much easier
> support environment since the thin-client's only function is connecting
> to the ISP's servers. Thoughts?

One thought: http://www.koolu.com/

Incidentally, less than 10 Watts, running full force.

Mark Krenz wrote:
> That's not right. Sorry Simon, nothing personal, but he's off on
> those figures. He's using big numbers to wow people and I think that's
> a bad thing to do. He should use real figures.

Ack! It would lose the power of simplicity, get boring, and still make
the same point (only to a less engaged audience) if he tried to present
a scientifically accurate estimate up on stage. It was a numbers game,
not an objective statistic, and I felt that was made clear.

We could go through the intellectual exercise of coming up with a more
accurate figure, but you have to admit, we'd be off on that as well.

I'm sure the figure we're tossing around here of 60W for an idle
computer could be analyzed further, but that would detract from the
basic point of the conversation, which is not an intellectual exercise
in computer power usage; at least it isn't for me.

The basic point of this conversation, at least insofar as what I said
regarding computer power consumption goes, is that computers as we know
them today are inefficient and impractical for use outside of a
first-world infrastructure, as we look towards inviting previously
unrepresented folks into the global digital world.

Which also happened to be the basic point of maddog's talk.

> Still, I know what he's getting at, and conservation
> is a good thing, so it's good to think about.

Well, what he's getting at is that it's more than just "a good thing",
it's unavoidable as we look towards the future of technology.

In my opinion, beyond the future of technology, it's a matter of
survival for me and my family, you all.

> Sorry for being down on him
> but I would have thought Maddog would be more careful with his
> statistics than that. I mean, people quote him.

Yeah, you're probably right, though I'd be surprised if anyone quotes
his number game as an authoritative fact.

It was certainly useful in bringing this list to life. ;-)

> Now let's say I have a Xen server that has enough RAM and CPU power to
> host 64 machines (16 GB of RAM and 8 cores would do nicely). This
> machine has a 700 Watt power supply. With all the virtual machines
> running it would use a maximum of 700 watts, maybe even less. There
> would be a little more latency overall, but the average watt usage per
> virtual machine is only 11 watts.

Precisely my point.

However, drop the Xen server. Add an LTSP server, and you can probably
serve somewhere upward of 200 thin clients at less than 5 watts of
server power per seat; add less than 10 watts for a Koolu thin-client
front end, and you've got less than 15 watts per workstation. (Leaving
out the monitor, of course...)

That's a dramatic increase in efficiency. And my bet is that the end
users won't notice the fact that they don't have a full workstation's
processor-time at their command. Firefox and OpenOffice would come up
like lightning, since they've already been loaded into memory by one of
the other 200 people sharing that terminal server.

How long would it take for that setup to pay for itself in power
savings? A couple of years?
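
Here's one way to ballpark it in Python; the electricity rate and the
hardware prices are pure guesses on my part, so the answer swings a lot
with them:

    # Rough payback estimate -- rate and prices are made-up assumptions
    seats = 200
    watts_saved = 60 - 15                 # idle desktop vs. thin-client seat
    kwh_per_year = seats * watts_saved * 24 * 365 / 1000.0  # ~79,000 kWh
    dollars_per_kwh = 0.08                # assumed utility rate
    yearly_savings = kwh_per_year * dollars_per_kwh         # ~$6,300
    setup_cost = seats * 200 + 4000       # assumed $200/client + one server
    print(setup_cost / yearly_savings)    # ~7 years on power alone

On those guesses it's more like seven years from power savings alone;
fold in the desktop hardware you'd have bought anyway and it gets a
lot shorter.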

Say 5 years later, you decide you want to upgrade the system to handle
Ubuntu Pretty Parrot's amazing new on-by-default holographic desktop
environment (humor me). What does it cost to upgrade all 200 of those
workstations? The price of a new server.

And the migration can be done with no disruption: just point the DHCP
server at the new thin-client server, and have everyone turn off their
thin client at the end of the day.
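
With ISC dhcpd, for instance, the switch could be as small as editing
one line (the addresses here are made up, and the boot filename varies
by LTSP setup):

    # dhcpd.conf fragment: point PXE-booting thin clients at the new server
    subnet 192.168.0.0 netmask 255.255.255.0 {
        range 192.168.0.100 192.168.0.250;
        next-server 192.168.0.2;           # was .1, the old LTSP server
        filename "/ltsp/i386/pxelinux.0";
    }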

I think workstations like the ones we have will only be used by geeks
like us in the not-too-distant future. People who don't just use
computers as a portal to the Internet are the only people who wouldn't
be 100% satisfied with a thin/thick-client solution.

> Doable and done. Matt Liggett, a former sysadmin at Kiva and the guy
> who wrote the original knowledge base at IU, was working on a program
> for IU that would slave the idle time of desktops into one giant
> supercomputer. I'm not sure how far he got with it, but I've heard of other
> places doing this before.

Is that the OpenMosix project?

As I understood it, it only worked with specific software. You couldn't,
say, offload any old video encoding or (place random mundane
high-processor-time task here) job yet.

But, yeah, talking to that guy at the IU LinuxFest the year before last
got me thinking about merging a cluster of physical computers into one
processor pool, basically making each of the computers a thin client to
that collective "machine".

I think that this type of thing has the potential to become big.

Steven Black wrote:
> That sounds like the definition of "Internet Appliance".

*SnIp!* Nod. Well, yeah, what do most people use a computer for?

> In an environment where people expect Desktop-like computers, it is
> likely to be a hard sell. However, if you do what Simón was talking
> about, and go overseas and sell it to an entire building at a time...
> It may be doable, especially if you're dealing with an environment
> in which most people are not expected to have computers at all.

Again, http://www.koolu.com

Check it out.

Joe Auty wrote:
> Perhaps all of this will come to fruition regardless with Google Docs
> and Spreadsheets?

No, seriously ;-) check out http://www.koolu.com; that's their whole shtick.

It's a $200 no-moving-parts box that draws less than 10W of power and
can run full-fledged Ubuntu, except that, in order to keep the
processor load low, they suggest you use Google Apps in Firefox rather
than load both Firefox and OpenOffice at the same time. This is also a
solution to the problem of not wanting to have a hard disk locally. You
CAN buy a $300 version with a hard drive, but that removes a few of the
advantages of getting a no-moving-parts, low-wattage box.

They also work really well as thin-clients, in which case (with a decent
back-end), they'd function almost indistinguishably, on less than 10W,
from a computer that idles at 60W...completely indistinguishably when
it's idle. ;-)

Their main markets are abroad, of course, because we Americans like
driving Hummers.

Anyhow, yeah, I'd like one or two to play with.

Thanks for all the responses, I appreciate this dialog.

Have a good night!

Simón

Re: [BLUG] Processor speed.

On Thu, Oct 11, 2007 at 03:58:24PM -0400, Paul Purdom wrote:
> For single processor speed, I think that you will find that processor speed
> slightly more than doubled each year until about 2003, but that since then
> there has been very little increase in processor speed. At present, the
> number of processors on a chip has been going up, but that is unlikely to
> last as long as the rapid increase in processor speed did.

Intel has done research on how long they expect to be able to keep
increasing total CPU power. IIRC, their current plans have them
increasing total speed for a surprisingly long period of time --
something like another 20 years.

I say total CPU power because they're very close to the theoretical
limits for a single core; that is the plateau they hit.

I expect that in time there will be full clustering solutions on
a single chip. It will be interesting to see how these are marketed
to the home consumer, and how much bloat the software manufacturers
are willing to add to make the hardware interesting.

My guess: Microsoft will create a new proprietary platform that only
runs on the new hardware. All of their software will then be rewritten
for this new platform. They'll stop supporting the old software and
force business users to upgrade. The new software will be so different
from the old that business users will want to upgrade their home
computers to the same software/OS. Additionally, all new computers will
only come with the new, slightly incompatible software. There will be
new file formats to take advantage of the new features, and these will
be incompatible with the previous versions, forcing the people who need
to interact with those users to upgrade as well. Simply stated: I
expect business as usual.

Technology may change focus once we get to a point where rolling
blackouts are widespread throughout the US. I suspect this is
just a matter of time. It wasn't that long ago that we thought
the idea of rolling blackouts occurring anywhere in the U.S. was
absurd. Now there's a (relatively) big market for home
generators tied directly into the wiring, kicking in
automatically.

Cheers,
Steven Black


[BLUG] Processor speed.

For single processor speed, I think that you will find that processor speed
slightly more than doubled each year until about 2003, but that since then
there has been very little increase in processor speed. At present, the
number of processors on a chip has been going up, but that is unlikely to
last as long as the rapid increase in processor speed did.



Re: [BLUG] NOV meeting topic

On Thu, Oct 11, 2007 at 12:48:02PM -0400, Joe Auty wrote:
> Perhaps all of this will come to fruition regardless with Google Docs
> and Spreadsheets?
>
> We are making hardware that continues to operate faster and faster, but
> surely there will be a time when the demand for blazing-fast processors
> levels out when it comes to your average home PC and the non-gaming
> crowd? After all, your average user doesn't need an eight-core rig to
> type up stuff, get their email, instant message, and access the web...

The processor people work with the OS and application people to try
to find neat ways to require more processor power. This has been going
on for several years now.

Only when people step outside of the commercial world do you get the
option to slow the upgrades. Otherwise they're forced to buy newer
hardware to maintain compatibility with their friends whose new
hardware came with the new version of X that doesn't play quite nicely
with older versions.

Free software has a tendency to support old hardware for a long, long
time. Even when it isn't supported in all applications, you find things
like text-based applications that use the same data formats as their
GUI cousins. The open nature allows for competition, and the competition
finds a way to work on the available hardware -- old and new alike.

Truthfully, processor speed has been quite nice for some time now.
It is really the other components of the system that need more work. We
may even see a more radical rethinking of human/computer interaction
once there's nowhere else to go.

> I'm hoping that 3 year product upgrade cycles will slowly become 4, 5,
> or even longer product upgrade cycles for your average computer user in
> the coming years. There is planned obsolescence, but nobody is forcing
> upgrades either. How much crap can you cram into a word processor or
> spreadsheet app anyway before people start to realize that they don't
> really need to upgrade?

This is really about marketing. Marketing is all about convincing
people they need a product that they didn't need yesterday.

It's been 15 years or more since word processors actually added any new
features that 90% of the population really needed. Hopefully the growing
trend towards office documents with open formats will also help slow
this down.

> I'm also thinking that we may have to gut the way we think of
> programming web apps and start over. AJAX is a pretty sloppy hack that
> attempts to work around the fundamental idea that HTTP retrieves entire
> pages. Maybe we need a new version of the HTTP standard that will handle
> partial fetches and sending of data? If this were handled at the protocol
> level, it would certainly take pressure off of browser and web
> developers in developing cross-browser JavaScript?

The HTTP protocol does, actually, handle partial fetches. Not all
webservers honor them, and they don't work with generated pages due
to the lack of state. (Check 'man wget' and search for "--continue";
it's a real-world example of a product attempting to use partial
page fetches, and of some of the problems doing so runs into.)

Ultimately you have to have a solution that accepts the unreliability
of the Internet, and the totally unknown number of people who may
connect at once. This would appear to be one of the primary reasons the
HTTP protocol is totally stateless. The stateless nature is also why it
is so hard to program for.
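
To make that concrete, a partial fetch is nothing more than a "Range"
header on an ordinary GET. A short Python sketch (the URL is just a
placeholder):

    # Ask an HTTP server for only the first kilobyte of a resource
    import urllib.request

    req = urllib.request.Request("http://example.com/big-file.iso",
                                 headers={"Range": "bytes=0-1023"})
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the range;
        # a plain 200 means it ignored it and sent the whole thing
        print(resp.status, len(resp.read()))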

Cheers,
Steven Black


Re: [BLUG] NOV meeting topic

Perhaps all of this will come to fruition regardless with Google Docs
and Spreadsheets?

We are making hardware that continues to operate faster and faster, but
surely there will be a time when the demand for blazing-fast processors
levels out when it comes to your average home PC and the non-gaming
crowd? After all, your average user doesn't need an eight-core rig to
type up stuff, get their email, instant message, and access the web...

It seems that the 3D desktop thing may have pushed for more demanding
hardware specs to run modern operating systems in recent years, but I
don't think it is overly naive to predict that this will level out too.

I'm hoping that 3 year product upgrade cycles will slowly become 4, 5,
or even longer product upgrade cycles for your average computer user in
the coming years. There is planned obsolescence, but nobody is forcing
upgrades either. How much crap can you cram into a word processor or
spreadsheet app anyway before people start to realize that they don't
really need to upgrade?


I'm also thinking that we may have to gut the way we think of
programming web apps and start over. AJAX is a pretty sloppy hack that
attempts to work around the fundamental idea that HTTP retrieves entire
pages. Maybe we need a new version of the HTTP standard that will handle
partial fetches and sending of data? If this were handled at the protocol
level, it would certainly take pressure off of browser and web
developers in developing cross-browser JavaScript?




--
Joe Auty
NetMusician: web publishing software for musicians
http://www.netmusician.org
joe@netmusician.org

Re: [BLUG] NOV meeting topic

On Thu, Oct 11, 2007 at 09:19:02AM -0400, Chris Colvard wrote:
> I was thinking about this a while ago. Now think of these thin-client
> front ends being provided by an ISP with the ISP providing online
> storage space and the typical apps people use (Word Processor,
> Spreadsheet, etc.) as AJAX applications (or something else hosted). If
> the ISP owns the thin-client then a subscriber doesn't have to manage
> software or buy new computers and the ISP probably gets a much easier
> support environment since the thin-client's only function is connecting
> to the ISP's servers. Thoughts?

That sounds like the definition of "Internet Appliance".

In the commercial sector this has been tried a number of times, and in
each case it has failed. (I worked on one in the late 90's and saw it,
and that market as a whole die off.)

Was the market not ready for the Internet Appliance in homes? Perhaps.
It also hasn't been ready for the previous several attempts at creating
light-weight computers for home use.

In an environment where people expect Desktop-like computers, it is
likely to be a hard sell. However, if you do what Simón was talking
about, and go overseas and sell it to an entire building at a time...
It may be doable, especially if you're dealing with an environment
in which most people are not expected to have computers at all.

Cheers,
Steven Black


Re: [BLUG] NOV meeting topic

On Thu, Oct 11, 2007 at 03:48:16PM GMT, Brian Wheeler [bdwheele@indiana.edu] said the following:
>
> Yeah, but you have to admit that having a submarine steering wheel on
> your computer would rule.
>

Speaking of wasting energy, I always thought it would be cool to have
gas-powered hard drives with a pedal. You know, you're cruising along
through menus and then you start copying files. FLOOR IT!!

Just imagine, in Windows, that piece of paper flying between those
folders really fast.


--
Mark Krenz
Bloomington Linux Users Group
http://www.bloomingtonlinux.org/

Re: [BLUG] NOV meeting topic

On Thu, 2007-10-11 at 14:32 +0000, Mark Krenz wrote:
> People take numbers
> like that, go off, and think it's fact, and then we have an entire
> population that thinks the Rand Corporation in 1954 predicted what
> a computer would look like in 2004, when in reality it was just part
> of a Photoshop contest for Fark.

Yeah, but you have to admit that having a submarine steering wheel on
your computer would rule.

Brian


Re: [BLUG] NOV meeting topic

On Wed, Oct 10, 2007 at 08:52:01PM GMT, Simón Ruiz [simon.a.ruiz@gmail.com] said the following:
> At the OLF, maddog gave a presentation about computing and power
> consumption. Interesting numbers game: At 350 watts per computer, in
> order to double the number of personal computers on the planet
> (1,000,000,000), we would need to build 25 power plants with output
> equal to the single largest power plant we have today.

That's not right. Sorry Simon, nothing personal, but he's off on
those figures. He's using big numbers to wow people and I think that's
a bad thing to do. He should use real figures. People take numbers
like that, go off, and think it's fact, and then we have an entire
population that thinks the Rand Corporation in 1954 predicted what
a computer would look like in 2004, when in reality it was just part
of a Photoshop contest for Fark.

First of all, a lot of people at home turn off their computers at
night and while they're away from the house. Plus, many people have a
computer at home and at work. So it's not 24/7 usage like servers, or
geeks who leave all their computers on. Secondly, 350 watts is only
when you're running full tilt. Playing solitaire is not full tilt,
although the new Vistaness(TM) OpenGL(TM) Solitaire(TM) stuff might be
close. ;-)

As Ben mentioned in his email, the power consumption of desktops is
well documented to be much lower. A power supply's wattage rating is
the maximum it can deliver, to leave headroom for extra hardware.

So I would *guess* overall that we're only using 20% of the power he
is stating we use. Still, I know what he's getting at, and conservation
is a good thing, so it's good to think about. Sorry for being down on
him, but I would have thought Maddog would be more careful with his
statistics than that. I mean, people quote him.

> I think the ideal would probably be ultra-power-efficient massive
> back-end servers and solid-state, fanless, maybe even PoE-fed
> thin-client front ends.

Servers won't ever be as power efficient as desktops because you can't
(or shouldn't) turn them off. Virtualization is something that will
help, because you avoid the waste of running a power supply 24/7 just
so a DNS server can have its own box. I'm able to buy a somewhat
beefier server and run 16 isolated virtual machines on a single
physical machine.

For example, let's say I have a DNS server that has its own box, its
power supply is rated at 350 watts, and the box under normal operations
uses an average of 20% of that. So it's drawing about 70 watts.

Now let's say I have a Xen server that has enough RAM and CPU power to
host 64 machines (16 GB of RAM and 8 cores would do nicely). This
machine has a 700 Watt power supply. With all the virtual machines
running it would use a maximum of 700 watts, maybe even less. There
would be a little more latency overall, but the average watt usage per
virtual machine is only 11 watts. That's a pretty significant savings.
You're doing the work of 64 machines using the power of 10. And as for
cost, I just priced it out and you could buy a server like that for
$4000, whereas 64 physical servers would cost $32,000 if you paid only
$500 for them.
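
Spelled out in Python, using the numbers above:

    # Consolidation math, using the figures in this message
    standalone_watts = 70              # one box: 350W supply at ~20% load
    vms = 64
    xen_watts = 700                    # Xen host's worst-case draw
    print(xen_watts / vms)                 # ~10.9 W per virtual machine
    print(xen_watts / standalone_watts)    # host draws what ~10 old boxes did
    print(vms * 500, "vs", 4000)           # $32,000 of cheap boxes vs. $4,000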

For the curious, this is what I had in mind for hardware off of
Newegg.com:

1 x SUPERMICRO SYS-6015P-TRB 1U Barebone Server chassis - $1,289.99
4 x WD 500GB 7200 RPM SATA2 Hard drives - $519.96 ($129.99 each)
2 x Intel Xeon E5335 Clovertown 2.0GHz Processors - $709.98 ($354.99 each)
1 x 3ware 9650SE-4LPML 4 lane SATA II Raid controller - $344.99
4 x Crucial 4GB(2 x 2GB) 240-Pin DDR2 FB-DIMM DDR2 667 memory - $1,079.96 ($269.99 each)

Total: $3,944.88

Pretty good and would have 1.5 TB of RAID-5 storage and power supply
redundancy and hot swap hard drives. For another $2000 you could make
those processors 3GHz. Get two of those and a shared disk and you could
have quick changeover if you have a serious hardware failure.
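
(Sanity check in Python: the parts total and the RAID-5 capacity both
work out.)

    parts = [1289.99, 519.96, 709.98, 344.99, 1079.96]
    print(round(sum(parts), 2))   # 3944.88
    disks, disk_tb = 4, 0.5
    print((disks - 1) * disk_tb)  # RAID-5 usable capacity: 1.5 TB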

> The other thing I see as a big possibility would be to have beefier
> thin-client front-ends that contribute to the processor requirements
> of the whole system through an OpenMosix-style back-end pool of
> processor time.

Interesting idea. A lot of people are starting to do that with things
like Citrix, Linux Terminal Server Project, MS Terminal Server, etc.

> So in a building of 300 connected workstations, the whole building
> would have 300 workstations worth of processor power. If you're only
> using a word processor or a browser or an e-mail client or something,
> your idle processor time would be up for someone else's use. But say
> you're trying to render 3-d graphics or fold proteins or something
> that can use as much processor time as it can get, you could then
> start enlisting the idle processor time of all your neighbors for your
> task.
>
> What do you think? Silly or workable?

Doable and done. Matt Liggett, a former sysadmin at Kiva and the guy
who wrote the original knowledge base at IU, was working on a program
for IU that would slave the idle time of desktops into one giant
supercomputer. I'm not sure how far he got with it, but I've heard of other
places doing this before.

I thought it would be awesome if someplace like Dreamworks or Pixar
would let people donate spare time to help render frames for their next
movie. Of course that would encourage people to leave their computers
on, which is what you were hoping to avoid.

--
Mark Krenz
Bloomington Linux Users Group
http://www.bloomingtonlinux.org/


Re: [BLUG] NOV meeting topic

Simón Ruiz wrote:
> I think the ideal would probably be ultra-power-efficient massive
> back-end servers and solid-state, fanless, maybe even PoE-fed
> thin-client front ends.
>

I was thinking about this a while ago. Now think of these thin-client
front ends being provided by an ISP with the ISP providing online
storage space and the typical apps people use (Word Processor,
Spreadsheet, etc.) as AJAX applications (or something else hosted). If
the ISP owns the thin-client then a subscriber doesn't have to manage
software or buy new computers and the ISP probably gets a much easier
support environment since the thin-client's only function is connecting
to the ISP's servers. Thoughts?

-Chris Colvard