Hi all,
program quality should be rated in a way analogous
to refrigerators and freezers, i.e. following
a categorization like the EU one for energy saving:

http://en.wikipedia.org/wiki/European_Union_energy_label

That a program is open source doesn't mean
it is a good tool, nor is being open source enough
to tell how good it is.

I am looking for a way to attribute a "class" to a program,
working on an empirical formula; there are many different ideas,
related to code density, "activity",
semantics of the source language, etc.
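To make that concrete, here is a minimal sketch of what such an empirical formula could look like. Everything in it is hypothetical: the metric names, the weights and the thresholds are invented for illustration only, not a calibrated formula.

#include <stdio.h>

/* hypothetical metrics; a real formula would have to be calibrated */
typedef struct {
    double cycles_per_task;  /* measured on the destination machine */
    double code_density;     /* useful work per byte of output code */
    double activity;         /* fraction of time the CPU is kept busy */
} metrics;

static char energy_class(const metrics *m)
{
    /* higher density is better; more cycles and more load are worse */
    double score = 0.5 * m->code_density
                 - 0.3 * (m->cycles_per_task / 1e6)
                 - 0.2 * m->activity;
    if (score > 0.25) return 'A';
    if (score > 0.0)  return 'B';
    return 'C';
}

int main(void)
{
    metrics m = { 2.0e6, 0.8, 0.3 };  /* invented sample values */
    printf("class %c\n", energy_class(&m));
    return 0;
}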

This will indirectly keep people from inflating their "facts"
about HLLs or, on the contrary, from telling the old story that
coding the raw machine implies too much time and effort.

Once you have a class, A, B or C for example, as a goal for your app,
time and effort will be measured from the output,
and from how the output code behaves on the destination machine,
i.e. not only from the source point of view.

Your opinions?
Cheers,

--
.:mrk
  .:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-07 10:13:04 by hopcode
And if the program does exactly what you need it to do, how do you gauge it based on your criteria?  And what about user feedback?  For example: some people like Windows, some people don't.

Unlike the traditional benchmarks for hardware, it can be very difficult to provide unbiased ratings for software.  Market share is "usually" a good indicator of a program's worth beyond statistics such as lines of code.

From a supplier point of view, is it easier to make revisions and maintain code in language X over language Y?  Will you have enough personnel with the necessary expertise to do the job?

Crap, I'm starting to sound like a corporate manager!  :shock:
Posted on 2011-12-07 18:48:26 by p1ranha
Hi p1,
thanks for answering.

Crap, I'm starting to sound like a corporate manager!  :shock:

If you like it, it sounds OK; the OSI is a corporation too,
the FSF a foundation. "Foundation", "corporation": these are just labels.
You will agree that quality code has
requirements just like free code does. For my part, I am a
free-and-open-source developer. Sharing knowledge, as far as I can,
is the only line of quality development I know.


Will you have enough personnel with the necessary expertise to do the job?

It depends on which tools one uses to test the output code.

From a supplier point of view is it easier to make revisions and maintain code in language X over language Y?

The following lines represent an assignment after
a cast from/to classes in Qt:
c_labitem *labitem =
    (c_labitem *)(item->data(column, Qt::UserRole).toLongLong(0));
  ...

and to me it all sounds this crude way:


  000000013FDC10FB  488B02        mov rax,[rdx]
  000000013FDC10FE  41B920000000  mov r9d,00000020
  000000013FDC1104  488D542428    lea rdx,[rsp+28]
  000000013FDC1109  498BCA        mov rcx,r10
  000000013FDC110C  FF5010        call qword [rax+10]
  000000013FDC110F  90            nop
  000000013FDC1110  33D2          xor edx,edx
  000000013FDC1112  488BC8        mov rcx,rax
  000000013FDC1115  FF1545710000  call qword [000000013FDC8260] ;QtCore4.?toLongLong@QVariant
  000000013FDC111B  488BD8        mov rbx,rax
  000000013FDC111E  488D4C2428    lea rcx,[rsp+28]
  000000013FDC1123  FF153F710000  call qword [000000013FDC8268] ;QtCore4.??1QVariant
  ...

Qt is a wonderful framework. I like it, but
I don't use it. The compiler does a fast and good job on it.
One is free to use the language/framework he likes, IMHO.

But the output code will be benchmarked on:
- cycles (see the sketch below)
- density
- activity/load
- goal
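For the cycles item, a minimal sketch of how the output code could be timed on x86, assuming GCC or Clang and the __rdtsc() intrinsic from <x86intrin.h>; serializing instructions and averaging over many runs are left out for brevity:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>            /* __rdtsc() on GCC/Clang */

int main(void)
{
    volatile uint64_t sink = 0;   /* keeps the loop from being optimized away */
    uint64_t start = __rdtsc();
    for (uint32_t i = 0; i < 1000000; i++)
        sink += i;                /* stand-in for the code under test */
    uint64_t cycles = __rdtsc() - start;
    printf("elapsed: %llu cycles\n", (unsigned long long)cycles);
    return 0;
}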

Programs by code, i.e. output code running on a system;
for languages, please assume the following:

When one wants to start learning assembly, I suggest
"Art of Assembly", because of R. Hyde's exhaustiveness
in examples and visual explanations; and NASM as the tool.
What? :shock: NASM, spelled NASM,
a real assembly tool. Personally I like it only at 40%,
but it is as common among coders as speaking English.

HLA isn't assembly, by definition; it is HL,
whereas assembly is not a language, nor HL.
Assembly means giving a machine functional opcodes
to let it work toward a goal. People believe HLA is assembly;
that belief makes "semantics".

Programs by code, as I said, and languages by semantics.


Unlike the traditional benchmarks for hardware, it can be very difficult to provide unbiased ratings for software.


Yes, but a multidimensional tool for multidimensional
criteria is required... and not yet invented.
Anyway, tuning software is a bit like classifying it
as good/bad automatically.


And if the program does exactly what you need it to do, how do you gauge it based on your criteria?

One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).


And what about user feedback?  For example: some people like Windows, some people don't.

Aren't hobbyists already at work out there, comparing browser vs. browser, or
firewall vs. firewall? A scientific way to benchmark the output code
allows you to repeat the experiment under the same conditions and get back
the same results. Hobbyists will have a super-tool to get lucky with.


Market share is "usually" a good indicator of a program's worth beyond statistics such as lines of code.


Isn't the idea exciting, to be at every moment on the point
of breaking that funny roaring capitalism lying in front of
one's own eyes?

VV peace VV

Cheers,
hopcode

Posted on 2011-12-08 12:06:57 by hopcode

One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).


Not sure what point you're trying to make here, but I'd like to point out that a 386 CPU used FAR less power than any modern CPU. 386 processors did not even require any kind of cooling whatsoever. Not even a heatsink.
A 386 at 25 MHz was 2.89W:
http://www.alternatewars.com/BBOW/Computing/Computing_Power.htm

It wasn't until the mid-90s that active cooling was introduced, and the MHz race ensued, making CPUs go from < 10W to 100W+ in just a few years' time. They've never gone down since... Those 'superhot' Athlons and Pentium 4/D processors we used to ridicule were no more than the 130W that today's high-end CPUs require.
Posted on 2011-12-08 12:37:46 by Scali
Hi Scali,
granted that I am not an overclocker, correct me if I am wrong.


I'd like to point out that a 386 CPU used FAR less power than any modern CPU. 386 processors did not even require any kind of cooling whatsoever. Not even a heatsink. A 386 at 25 MHz was 2.89W


OK, I did some fast calculations: Tc at 2.89 W is ~90 °C, from this paper:
http://datasheets.chipdb.org/Intel/x86/386/datashts/27242006.pdf (chap. 5.2, table 6 and fig. 4),
where max Tc should be 110 °C and power dissipation ~3.5 W.
now,


...making CPUs go from < 10W to 100W+ in just a few years time...


One should decide how to pay the penalties. In fact, while 30 x 386 machines match
my 95-watt-profile quad core, 100 of them cannot equal the speed and the Tc of just one of my 4 cores
running at ~43 °C (watt = 0), with Norton and Vista paid for, installed and running.

I can conclude the following:
technology is not simply technology, but the consequences of technology.
Rich lands run paid software at a relatively low cost; on the contrary, poor lands,
which can afford only 100 x 386 machines practically for free, need a supplementary technology
like cloud or parallel computing to solve the problems due to their extreme circumstances.
But this turns out to be a double penalty, in my opinion; I would say, the trap
hidden under the leaves in the forest, with bushes of colorful wild strawberries all
around.

Whenever people suspect that paid software is not as good as it seems,
or blog telling us their sharp opinions about paid software (open source too),
they have no way to "benchmark" it and prove how/whether
that software is good or bad.
This lack of proof makes the market, as described
above by p1ranha, a "good indicator".

Cheers,

.:mrk
.:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-08 19:41:12 by hopcode

One should decide how to pay the penalties. In fact, while 30 x 386 machines match
my 95-watt-profile quad core, 100 of them cannot equal the speed and the Tc of just one of my 4 cores
running at ~43 °C (watt = 0), with Norton and Vista paid for, installed and running.


I think you just phrased it wrong, or I don't get what you meant.
One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).


The way I read it, you have it exactly the wrong way around. If you were to emulate the speed of a 386 with a quad-core you'd be wasting a lot of energy, since even at idle speeds a modern CPU is more powerhungry than a 386 was at full speed.

Obviously if you DON'T emulate the speed of a 386, but use the full power of a modern CPU, it will be more power-efficient, because although it may use more than 40 times as much energy, it is far more than 40 times as fast.
But that is not what you said.

The temperature figures you quote are completely irrelevant: ~90C is more or less the maximum safe temperature at which silicon can operate. This maximum temperature has been more or less constant over the years, despite all advances in manufacturing process. You will generally find that this is the maximum temperature for any chip, and therefore it is the recommended maximum case temperature for case designs.

However, you should look at HOW this temperature is reached. As I said, a 386 does not have any cooling whatsoever, whereas a modern CPU needs a big heatsink and fan in order to stay below 90 degrees. Without it, a modern CPU would overheat instantly.
So a 386 generates a LOT less heat (which is obvious, since it is ~3W, not 100W+).

Therefore, if you only need the processing power of a 386, it is better to use that 386. A 386 running 100% all the time is more power-efficient than a modern CPU idling.
Posted on 2011-12-09 01:49:31 by Scali

If you were to emulate the speed of a 386 with a quad-core you'd be wasting a lot of energy, since even at idle speeds a modern CPU is more powerhungry than a 386 was at full speed.
Obviously if you DON'T emulate the speed of a 386, but use the full power of a modern CPU, it will be more power-efficient, because although it may use more than 40 times as much energy, it is far more than 40 times as fast.
But that is not what you said.

Perhaps ;) but this is what I was speaking about:
dT = Q x R
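Reading that as the usual thermal-resistance relation, a worked example; the 22 °C/W case-to-ambient resistance below is only an illustrative guess, not a datasheet figure, just to show how ~90 °C can come out of ~2.89 W:

  T_case = T_ambient + Q x R_ca
         = 25 °C + 2.89 W x 22 °C/W
         ≈ 89 °C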
Also, I didn't mean the max presumed safe temperature, as in your following quote:

The temperature figures you quote are completely irrelevant: ~90C is more or less the maximum safe temperature...


The real missing point now is the general vision of things; given the following:


Therefore, if you only need the processing power of a 386, it is better to use that 386. A 386 running 100% all the time is more power-efficient than a modern CPU idling.


The 386 doesn't exist, and wherever it does exist, it in no way withstands
comparison with the course of technology.

Cheers,

Posted on 2011-12-09 04:07:52 by hopcode

Perhaps ;) but this is what I was speaking about:
dT = Q x R
Also, I didn't mean the max presumed safe temperature, as in your following quote:


What does thermal resistance have to do with anything?
Temperature in chips/computers is a direct result of the current passing through them.
There is less current passing through a 386 than through a modern CPU, ergo it generates less heat.
Just because modern CPUs have larger cooling solutions that dissipate heat better than a 386 doesn't take away the fact that more heat is generated.
In other words: larger currents pass through the system.
In other words: more energy is consumed.


The 386 doesn't exist, and wherever it does exist, it in no way withstands
comparison with the course of technology.


Again, I don't see your point.
You really need to try and explain yourself better, and support your statements with some arguments, because you present some conclusions, with no indication whatsoever of how you reached them.
It makes it hard to follow what exactly you are trying to say.
386 doesn't exist? Sure it does! I have one myself.
Aside from that, you were the one who brought up the 386 in particular.
The issue however is not related to 386 alone.
There are plenty of CPUs that are below the idle power consumption of a modern CPU, going all the way up to Pentium III, which may use up to ~30W at 100% load, but due to early power saving features, the average will be much lower.
See here for example:
http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/21
Even at idle, a Phenom II X6 still uses 20W. Many Pentium II/III CPUs will use less power on average workloads than that CPU at idle.
It's just ridiculous, really.
Posted on 2011-12-09 04:35:58 by Scali

http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/21
Even at idle, a Phenom II X6 still uses 20W. Many Pentium II/III CPUs will use less power on average workloads than that CPU at idle.
It's just ridiculous, really.

I don't see why and how it's ridiculous. In the example above, all the listed CPUs are
relatively modern. The people who invented them didn't set out to win a competition
with the wonderful but very, very old 386's heat dissipation and low watt consumption.
One should think in relative terms, not absolute ones. That is the reason I inserted a
386 into our discussion. I quote it again:

One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).

You cannot have both. Technology tries to solve the problem
in different ways, but one should decide at every moment
the penalty according to the case.
Not agreeing with the fact doesn't mean that the fact is ridiculous;
not agreeing with the fact means ignoring how technology
evolves.
I hope it is plain now.
Cheers,


Posted on 2011-12-09 05:01:33 by hopcode

I don't see why and how it's ridiculous.


Well, I think it's quite obvious:
In the old days, all my computers were cooled passively. They didn't generate a lot of heat, and more importantly: they did not generate any noise. Okay, so 386'es don't quite fit in there, since most of them had a fan in the PSU, but most other home/personal computers of the time did not... from a C64 to a NES, Amiga, Atari ST, etc... At any rate, it wasn't the CPU that caused the need for cooling. With a different/external PSU, a 386 system could be fanless, like the others.
Then we went to heatsinks on our CPUs... not too bad, at least they don't make noise...
But then we got fans too... and we didn't stop at the CPU... noooo, the GPU would get big fans as well.
Especially in the days of early Athlon/Pentium 4, PCs were horribly noisy.
These days it's reasonably under control, as long as your system is idle, that is... If your system runs at 100%, it's still incredibly noisy.

And that's just one of the problems. Another one is how useless laptops are with these powerhungry CPUs. It's hard to find a laptop that can last for more than 2-3 hours on a single battery... which doesn't weigh a ton... and doesn't try to burn a hole in your lap.


One should think in relative terms, not absolute ones. That is the reason I inserted a
386 into our discussion. I quote it again:

One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).



That is still broken logic, as the quadcore will generate more heat, consume more energy, and as such will cause more CO2 emissions than a 386 (yes, even when the quadcore is completely idle, as demonstrated by the Anandtech article I linked earlier: modern quadcores range from 3W-20W when idle).


You cannot have both.


You can, actually. Just not from the ridiculousness that is the x86 architecture.
ARM is a fine example of how CPUs can be both power-efficient, and deliver modern levels of performance.
You can get a quadcore ARM processor in a tablet or smartphone, with no special cooling required, and with reasonable battery life.
No doubt ARM-based laptops will be quite popular once Windows 8 arrives.


Not agreeing with the fact doesn't mean that the fact is ridiculous;
not agreeing with the fact means ignoring how technology
evolves.
I hope it is plain now.
Cheers,


Technology evolves in more ways than just the x86 world.
Because Intel and AMD failed to deliver power-efficient solutions, many other hardware vendors jumped into that market. Are you ignoring that?
Posted on 2011-12-09 06:39:36 by Scali

And that's just one of the problems. Another one is how useless laptops are with these powerhungry CPUs. It's hard to find a laptop that can last for more than 2-3 hours on a single battery... which doesn't weigh a ton... and doesn't try to burn a hole in your lap.


I too find it a sorry state of affairs for those exact reasons.


Technology evolves in more ways than just the x86 world.
Because Intel and AMD failed to deliver power-efficient solutions, many other hardware vendors jumped into that market.


As you mentioned earlier, obviously Intel and AMD aren't the only companies trying to provide users with powerful solutions.  However, that power comes at a price.

I recently purchased an nVidia GTX 560 Ti graphics card for my system.  The card itself takes up two slots and is nearly as thick as a CDROM player.  It has a heatsink, pipes, and two fans to cool it.  Wicked fast card that never breaks a sweat playing anything I've thrown at it.

The computer case that houses all my components contains four 120 mm fans to keep air moving through.  Fortunately, I still don't need to water-cool my system just yet.

Until science discovers some other method besides silicon that can be economically manufactured and reasonably priced, I believe we'll be dealing with cooling/power/efficiency issues for a long time...

Posted on 2011-12-09 07:51:25 by p1ranha

As you mentioned earlier, obviously Intel and AMD aren't the only companies trying to provide users with powerful solutions.  However, that power comes at a price.


Luckily you still have a choice with videocards.
You can either use onboard graphics, which requires no additional cooling at all...
Or you could choose a card with a passive cooling solution.
The more low-end cards are generally passively cooled anyway. And specialist brands offer passively cooled versions of high-end cards as well.
With CPUs, you're not that lucky: it's very hard to find x86 CPUs that can be passively cooled at all, outside the horribly underperforming Atoms of this world.
And non-x86 CPUs aren't an option for desktop systems at this moment, since most software won't work on them.


Fortunately, I still don't need to water-cool my system just yet.


I actually DO have watercooling on my CPU.
These days there are inexpensive kits from brands such as Corsair. Intel and AMD are even starting to offer them as 'stock' cooling solutions.
I don't use it because a fan solution can't keep my system cool. I use it because the watercooling moves the heat to a radiator which can be cooled with a 120mm case fan, which is a nice and silent solution.
Posted on 2011-12-09 08:01:00 by Scali

...Another one is how useless laptops are with these powerhungry CPUs. It's hard to find a laptop that can last for more than 2-3 hours on a single battery... which doesn't weigh a ton... and doesn't try to burn a hole in your lap.

My laptop is a Siemens, 32-bit XP OS. Running ZoneAlarm and Kaspersky while disconnected from the mains,
the battery dies after 1 hour and 10 minutes. Disabling them and running 20 or more of my tools in the background,
the battery lasts almost 3 hours and 45 minutes. Programs by code.


That is still broken logic, as the quadcore will generate more heat, consume more energy, and as such will cause more CO2 emissions than a 386 (yes, even when the quadcore is completely idle, as demonstrated by the Anandtech article I linked earlier: modern quadcores range from 3W-20W when idle).


No, because the point is marking code as "good" when it keeps the CPU in an almost idle state
even while it executes its task.


You cannot have both.


You can, actually. Just not from the ridiculousness that is the x86 architecture.
ARM is a fine example of how CPUs can be both power-efficient, and deliver modern levels of performance.

I wouldn't say "ridiculousness" about the 386 anyway, for the reasons involving what technology is,
as I explained above.
But ARM is another namespace, another chapter, worth investigating. For the reasons I gave above about the thermal performance of 386 machines, I already publicly suggested that the people obsessed with the 386 and the "DOS-is-not-dead" ghost turn to ARM.
Obviously, you couldn't have known that before I explicitly
stated it here on the board.
Now,
Q: what about 386 machines?
A: they actually do not exist.





Posted on 2011-12-09 08:07:45 by hopcode

No, because the point is marking code as "good" when it keeps the CPU in an almost idle state
even while it executes its task.


I think that depends on what the task is and does.
If it's mainly IO-bound, then yes, it should keep the CPU idle while it is waiting for IO to complete (rather than spinning the CPU uselessly while polling for results).
It cannot complete faster than the IO allows it to.
However, if it's a computational task, then the more important thing is how quickly it gets the job done. Running the CPU at 100%, but for a shorter amount of time, means the CPU gets to be idle longer.
Generally you'll just want the code to have the shortest possible execution time. The actual CPU load is not that relevant. If you don't want 100% CPU usage, you can always limit the number of cores the process can use, or lower its priority. You have all the control you want.
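For example, here is a minimal Win32 sketch of that kind of control; the affinity mask is just an example pinning the process to core 0, and error handling is kept to a minimum:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE self = GetCurrentProcess();

    /* drop below normal priority so interactive work is not disturbed */
    if (!SetPriorityClass(self, BELOW_NORMAL_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    /* one bit per logical CPU: mask 1 restricts the process to core 0 */
    if (!SetProcessAffinityMask(self, 1))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    /* ... run the computational task here, at 100% of one core ... */
    return 0;
}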


For the reasons I gave above about the thermal performance of 386 machines, I already publicly suggested that the people obsessed with the 386 and the "DOS-is-not-dead" ghost turn to ARM.


From here it looked more like you thought that modern CPUs used less power than old CPUs.


Q: what about 386 machines?
A: they actually do not exist.


Not sure why you keep saying that. 386 CPUs clearly DO exist. In fact, NASA still uses them:
http://www.cpushack.com/space-craft-cpu.html
Posted on 2011-12-09 08:18:32 by Scali
IO constraints are a good point on the list of things still to be solved,
and I don't have any idea at the moment.

From here it looked more like you thought that modern CPUs used less power than old CPUs.

No, again; the thing is complex, and I am trying to express in few words
what I mean. You demand low wattage and high computational
power from a machine; the two should always be bound together.


Not sure why you keep saying that. 386 CPUs clearly DO exist. In fact, NASA still uses them:
http://www.cpushack.com/space-craft-cpu.html

Good, but don't forget the point: Programs by code.
Now, if you provide source code (or binaries), that would be worth
a new discussion thread. Perhaps they have patented code I would like to benchmark;
code that both you and I cannot imagine, capable on a 386 of decoding
a YouTube video with the same computational power and low energy
requirements as a recent ARM processor.

Cheers,


Posted on 2011-12-11 06:07:32 by hopcode

No, again; the thing is complex, and I am trying to express in few words
what I mean. You demand low wattage and high computational
power from a machine; the two should always be bound together.


Firstly, I demand no such thing.
I'm just pointing out a flaw in what *YOU* said (not me):
One can emulate an 80386 on a quad core,
wasting the astronomical power of the quad core
(in numerical and timing terms);

or one can use an old 386 machine, wasting a lot
of energy and resources, temperatures, indirect
CO2 emission, etc. (in thermal and energy-saving terms).


Again, this is *WRONG*. A 386 does not generate a lot of heat and does not consume a lot of energy compared to a modern quadcore. Hence there is no way I can make sense of this statement.
The only thing you WOULD be wasting with the 386 is time, since obviously the 386 does not have a lot of computing power.
However, the point is this:
Say you perform a task that takes 1 hour to complete on a 386, and 1 second to complete on the quadcore.
The quadcore then is more energy-efficient *if* you turn it off after it is complete. If you leave the quadcore idling for the remainder of the hour, it will still have used more power than the 386.
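In rough numbers, using only figures already mentioned in this thread (~2.89 W for a 386 at full load, ~95 W load and ~20 W idle for a quadcore, and the hypothetical one-hour task above):

  E = P x t
  386, 1 hour at full load:               2.89 W x 3600 s         ≈ 10.4 kJ
  quadcore, 1 s load, then switched off:    95 W x 1 s            ≈  0.1 kJ
  quadcore, 1 s load, then idling 1 hour:  0.1 kJ + 20 W x 3599 s ≈ 72.1 kJ

So the idling quadcore burns roughly seven times the energy of the 386 that worked flat out for the whole hour.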

I really don't see why you keep arguing against that. In fact, you're not even arguing at all.
Arguing would imply that you present arguments to support your position, which you haven't done at all. You just don't acknowledge my arguments, which are clear as day.

Secondly, I don't demand low wattage and high computational power. I would just like the two to be a bit more balanced. Modern-day computing power in a < 10W TDP envelope would be interesting indeed for small and/or silent devices. It should be possible with today's technology to have the performance of 5-10 year old CPUs at < 10W. That should still be sufficient power for most everyday tasks.


Good, but don't forget the point: Programs by code.


What do you mean by that exactly? "Programs by code"?
Programs are code, but other than that I don't see what you are driving at here.
Could you explain that?


Perhaps they have patented code I would like to benchmark;
code that both you and I cannot imagine, capable on a 386 of decoding
a YouTube video with the same computational power and low energy
requirements as a recent ARM processor.


I think you're barking up the wrong tree there.
There are chips on the market that are capable of playing HD video at under 3W. But they are not conventional CPUs, they are video decoding chips.
Most modern GPUs contain very efficient video decoding circuitry as well, requiring little or no CPU power to play videos.
For example, if I play this video in 1080p: http://www.youtube.com/watch?v=12NRj4RD0Io
My CPU usage remains below 10%, and the CPU and GPU do not come out of idle mode, with low voltage and clockspeed (it runs at 1.2 GHz and 0.8 V, where it runs at 2.8 GHz or more at full speed/turbo, and 1.2 V).

So that is an example of energy-efficient 'code'.
Although ARM CPUs are more efficient than x86 CPUs, in this case it is irrelevant, as ARM systems work the same way: special video decoding circuitry handles the decoding more efficiently than a general purpose CPU.
Posted on 2011-12-11 08:48:58 by Scali

However, the point is this:
Say you perform a task that takes 1 hour to complete on a 386, and 1 second to complete on the quadcore.
The quadcore then is more energy-efficient *if* you turn it off after it is complete. If you leave the quadcore idling for the remainder of the hour, it will still have used more power than the 386.

NASA? :lol:
It happens already, after moving the mouse on Windows and then restarting
the PC to let the changes take effect. Also,
show me code please, spelled "Programs by code", not videos.
Cheers,

.:mrk
  .:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-11 12:06:48 by hopcode

NASA? :lol:
It happens already, after moving the mouse on Windows and then restarting
the PC to let the changes take effect.


Is anyone else having as much trouble following this guy as I am?
I feel lost in a sea of non-sequiturs.


show me code please, spelled "Programs by code"


What code? What programs? What are you talking about?
Posted on 2011-12-11 12:11:30 by Scali

What code? What programs? What are you talking about?

I told you about it above, in the subject:
Programs by code, languages by semantics.


Posted on 2011-12-11 12:46:52 by hopcode

I told you about it above, in the subject:
Programs by code, languages by semantics.


And I asked you to clarify, because I don't understand what you mean.
Posted on 2011-12-11 13:20:32 by Scali