And I asked you to clarify, because I don't understand what you mean.

Well, I think my answer to p1rahna is plain.

p.s.
I remain logged in.
Posted on 2011-12-11 13:30:57 by hopcode
Is anyone else having as much trouble following this guy as I am?


To Scali
Very much so. I'm not sure if it's a language barrier or what, but I think the "code" he's talking about is some kind of benchmarking code he wants someone to write...(?)

To hopcode
My understanding of this thread is: you want to come up with some form of a standard for benchmarking software for general speed, stability, development times, etc.

If that's the case, give up. There is no way you'll ever create an impartial system for judging all cases of development because very few people develop in the same fashion. Theoretically, it's a neat idea. But there is very little practical use for such a system/software.

Now, individually analysing each aspect of development to improve your OWN methodology is always a good idea. You can optimize software speed by simply counting the clock cycles of your software and validating that against high-performance counters. For the best stability, I suggest adopting some form of safety-critical design standard; I personally have adapted quite a bit of NASA's standard into my own coding style [1]. And for analysing development times, learn to create and analyse Gantt charts to find out where you spend the most time working, and try to figure out which parts you might be able to multi-task.
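
To show what I mean by counting clock cycles, here is a minimal sketch using the x86 time-stamp counter (assuming GCC/Clang and the __rdtsc intrinsic; the workload function is just a stand-in, and serializing instructions like cpuid/lfence are omitted for brevity):

/* Minimal cycle-counting sketch (GCC/Clang, x86/x86-64).
   Compile: gcc -O2 rdtsc_demo.c -o rdtsc_demo */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

static int work(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += i * i;            /* stand-in for the code under test */
    return sum;
}

int main(void)
{
    uint64_t start = __rdtsc();
    volatile int r = work(1000000);   /* volatile: keep the call from being optimized away */
    uint64_t end = __rdtsc();
    (void)r;
    printf("elapsed: %llu cycles\n", (unsigned long long)(end - start));
    return 0;
}

Run it several times and take the minimum; a single measurement is polluted by interrupts and cache warm-up.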

You see, most of these things have been addressed individually, and overall performance mostly boils down to the skill of the individuals working on each project. Trying to create a classification system which tells consumers and developers how a specific piece of software stands up to your idea of "good software" is just ridiculous. You should think of software as a solution to a problem, one which can evolve to fit variations of the problem. "Good" and "bad" are just your opinion of a developer's solution in your given situation. What really should be done is to remove all forms of classification from software and let the users decide on their own. People are capable of doing that.

[1] www.hq.nasa.gov/office/codeq/doctree/871913.pdf

Now, as for the 386 argument.. I don't really see how you can say that it doesn't exist any more. I quite regularly develop on 8-bit systems! 16-bit systems if I'm feeling a need for heavy processing. For people like me, the 386 is still considered a high-end chip. I think the most modern CPU I have is an Intel Atom 32-bit processor in my first generation netbook. These quad-core systems are something I just don't have a use for and probably won't for quite a while.




I'm probably completely off base with the conversation; like Scali, I have trouble following your discussion, and I'm completely lost on what you mean by things like 'show me code please, spelled "Programs by code", not videos'. I'm sure that if you clarified yourself better, people could make better sense of the discussion.

If I am completely off base on what you're discussing, sorry :lol:

Regards,
Bryant Keller
Posted on 2011-12-11 18:03:59 by Synfire
Hi Synfire,
"You" (capitalised) means you, singular.
"you" means you all.

I see from Your link and Your statements that You have understood very well what the matter is;
ergo it's not a language problem. Even if it were, please don't forget that
not all people on the planet speak the language you speak.

I am discussing the same subject, "Programs by code, languages by semantics",
on 2 other external forums, using almost the same language, and the people there
understand it well enough; from them I have already received some valuable hints.

I didn't frame it as a moral question of "bad" and "good", so please don't set it up that way.
Good code is code that achieves high computational power with minimal resource
requirements.
Should I explain it again for Scali too?

Cheers,
.:mrk
  .:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-11 19:30:23 by hopcode
btw, Syn
thanks for Your link. I have taken a quick look at it

[1] www.hq.nasa.gov/office/codeq/doctree/871913.pdf

it's overbloat par excellence.
Is that really Your personal, concrete way of thinking and coding?
Cheers,

Posted on 2011-12-11 19:45:13 by hopcode
Even if it were, please don't forget that
not all people on the planet speak the language you speak.


I believe that was the point I was making....

I didn't frame it as a moral question of "bad" and "good", so please don't set it up that way.


I feel the language barrier is kicking in again here. I didn't suggest any moral bearing on the terms good and bad. My suggestion was that the view of good is always going to be subjective. Because of this, creating a "standard" would be futile.

Good code is code that achieves high computational power with minimal resource
requirements.


But how can you standardize something like that? Both of these terms are relative to the specific project. My minimal resource requirements usually include something along the lines of an 8-bit AVR and a Linux HID module. Neither of those is going to be useful if someone is developing a video game or something that does scientific number crunching.

btw, Syn
thanks for Your link. I have taken a quick look at it


No problem.

it's overbloat par excellence.
Is that really Your personal, concrete way of thinking and coding?


To be honest, it really depends on the project itself. The idea of safety-critical development is to apply fault protection to applications which simply must not fail. If you write something which might cause harm to a person or an environment, you must take as many steps as possible to prevent that. That's not something people who write consumer products deal with a lot, but not all programmers are writing consumer products.
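
To give a flavour of the style (a toy sketch in C; the habits are the point, not the code, and none of this is lifted from the NASA document itself):

/* Safety-critical habits in miniature: every call checked,
   every loop bounded, and a defined safe state on failure. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_RETRIES 3   /* hypothetical retry bound */

static int read_sensor(int *out)
{
    /* stand-in for a device read; returns 0 on success */
    *out = 42;
    return 0;
}

int main(void)
{
    int value = 0;
    int tries;
    for (tries = 0; tries < MAX_RETRIES; tries++) {
        if (read_sensor(&value) == 0)
            break;                       /* success */
    }
    if (tries == MAX_RETRIES) {
        fprintf(stderr, "sensor read failed; entering safe state\n");
        return EXIT_FAILURE;             /* fail loudly, never silently */
    }
    printf("sensor: %d\n", value);
    return EXIT_SUCCESS;
}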
Posted on 2011-12-11 23:32:35 by Synfire
High computational power AND low resource requirements?
If your idea of 'high computational power' is SPEED, then these terms are mutually exclusive.
There is ALWAYS a tradeoff between speed and size. This is an observable and undeniable truth of programming, and it holds at every level: in the low-level decision of which opcodes to use, in the high-level choice of which algorithm to use, and even within every variant of an algorithm, you trade speed for size. So choose your poison! There is no middle ground, unless you accept mediocrity.
Posted on 2011-12-12 01:00:28 by Homer

My suggestion was that the view of good is always going to be subjective. Because of this, creating a "standard" would be futile.


On top of that, even things that you COULD objectively measure, such as total program size, memory consumption, total running time and such, have only limited use.
Namely, there is a large variety of hardware out there. What is the best/smallest/fastest/etc for one configuration may not necessarily hold true for another.
For example, one could write an optimized routine for MMX-capable CPUs. This may be the best possible implementation known to man... but it still works within the limits of MMX. For newer CPUs with newer instruction sets, such as SSE or AVX, the MMX version is likely to be quite suboptimal.

In fact, I know from experience that optimized code for one MMX-capable CPU (e.g. Pentium MMX) is not necessarily very efficient on another (e.g. Pentium II), because of architectural differences. In this case, the code was written for the lower latencies of the Pentium MMX and could not deal with the higher latencies of the Pentium II; the routine had to be rewritten for the new architecture, which resulted in slightly lower performance on the Pentium MMX.

So in theory, the optimal code (whether the criterion is speed, size, power consumption, or anything else) is very specific to a single system (and it goes beyond just the CPU, as the YouTube example above demonstrates: some systems can offload work from the CPU to dedicated processing units).
In practice, you will only find such optimized code for systems with fixed hardware specs, such as game consoles. For most applications, the code is not really optimized for a specific system, but is written in a way that performs acceptably on a wide variety of configurations. In this sense, optimizing is more about avoiding pitfalls on specific systems than about extracting the absolute maximum out of the hardware.
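
One common way to perform acceptably everywhere is runtime dispatch: ship several code paths and pick one at startup. A minimal sketch, assuming GCC/Clang on x86 (the function names are mine, and the SIMD bodies are stubbed out):

#include <stdio.h>

static void transform_scalar(float *d, int n) { for (int i = 0; i < n; i++) d[i] *= 2.0f; }
/* Real SSE2/AVX bodies would use intrinsics; stubs keep this sketch short. */
static void transform_sse2(float *d, int n)   { transform_scalar(d, n); }
static void transform_avx(float *d, int n)    { transform_scalar(d, n); }

typedef void (*transform_fn)(float *, int);

static transform_fn pick_transform(void)
{
    __builtin_cpu_init();                       /* GCC/Clang CPU feature detection */
    if (__builtin_cpu_supports("avx"))  return transform_avx;
    if (__builtin_cpu_supports("sse2")) return transform_sse2;
    return transform_scalar;                    /* safe fallback for any x86 */
}

int main(void)
{
    float data[4] = { 1, 2, 3, 4 };
    pick_transform()(data, 4);
    printf("%f\n", data[0]);
    return 0;
}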
Posted on 2011-12-12 07:44:08 by Scali
Hi Synfire,

...the lines of an 8-bit AVR...

I find the AVR point You raised interesting,
because clever tuning/profiling/benchmarking of the code there is not optional.
Actually, I know nothing about AVR, so I may ask You something.

As for Scali, I am sorry; I think he is completely out of scope.
Apart from the divine inspiration he got from the IO constraints,
he is always there stating something obvious again (no
offense).

As for the others: were they speaking about "standards" or
about "subjective computing"? I will try to understand.

For now I can provide the following quote, which
should give you some enlightenment:

"First, energy-efficiency is a key metric for these designs. 
Second, energy-proportional computing must be the ultimate goal
for both hardware architecture and software-application design.
While this ambition is noted in macro-scale computing in large-scale data centers,
the idea of micro-scale energy-proportional computing in microprocessors is even more challenging.
For microprocessors operating within a finite energy budget, energy efficiency corresponds
directly to higher performance, so the quest for extreme energy efficiency
is the ultimate driver for performance."


Consider it a present for you and a homage to
R. Noyce for his birthday, coming from this paper:

http://cacm.acm.org/magazines/2011/5/107702-the-future-of-microprocessors/fulltext

Cheers,

p.s. Please, Syn, do it for me:
don't debase Your posts again with the matter of the language
barrier kicking in. I know what You are capable of from
Your code.
You might confess to Yourself the following truth:  ;)
"Yes, I understand what hopcode said."

.:mrk
  .:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-12 17:20:02 by hopcode
Well, after 3 days thinking upon it (I am quite slow in comprehension  :)),
I have learned it by heart like a psalm, because I think it's revealing.

  • K  multi-Kore

  • A  embedded Accelerators

  • C  Cache

  • DCL  dynamic customizable logic


Then, considering IO constraints, as per Scali's suggestion,
let us bake a general function for the computational power (delta cp)
of some code running on a system:

dcp = f(dcl, k, a, c, io)

where the cache term above is itself bound to

  • is  instruction set

  • ds  data set

  • t  threading capability (a feature common to SW and HW)


dc = f(t, is, ds)
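
Just to make the recipe concrete, a toy sketch in C (the weights and the scoring are entirely invented by me, only to show the shape of f):

/* Toy model of dcp = f(dcl, k, a, c, io); every factor is normalized
   to [0,1], and every weight is hypothetical. */
#include <stdio.h>

struct system_factors {
    double dcl;  /* dynamic customizable logic */
    double k;    /* multi-Kore */
    double a;    /* embedded Accelerators */
    double c;    /* Cache term, itself dc = f(t, is, ds) */
    double io;   /* IO constraints */
};

static double cache_term(double t, double is, double ds)
{
    return (t + is + ds) / 3.0;   /* dc = f(t, is, ds), naive average */
}

static double dcp(const struct system_factors *s)
{
    /* hypothetical weighting; the real shape of f is the open question */
    return 0.2*s->dcl + 0.3*s->k + 0.2*s->a + 0.2*s->c + 0.1*s->io;
}

int main(void)
{
    struct system_factors s = { 0.5, 0.8, 0.3, cache_term(0.7, 0.6, 0.5), 0.4 };
    printf("dcp = %.3f\n", dcp(&s));
    return 0;
}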


That's all for now.
Any other ingredient/modification for the fantastic recipe?

Cheers,

.:mrk
  .:x64lab:.
group http://groups.google.com/group/x64lab
site http://sites.google.com/site/x64lab
Posted on 2011-12-15 01:33:46 by hopcode