I think it is more relevant for us guys who are interested in asm. Anyway, I was wasting my free time stalking some programming pages, as I usually do, and I found Ken's page again. For those who do not know, Ken Silverman made the Build Engine, etc. On his personal page he has this:
My favorite one: "sub eax, 128" -> "add eax, -128"


I asked myself whether we do the right thing then. I mean, we do try to advise everybody on how things should be, to the best of our knowledge, etc. (at least I try to help), but do we really practice what we preach? It is something to think about. Myself, I have to admit I probably never did. I am not an optimization freak, but I think maybe we are almost always so worried about the complex math that we forget the very basics. I think sometimes we could be better programmers if we could chill out and relax instead of trying to find problems to make solutions for. My humble opinion, if you are interested.

Ken's website url - (check "interests/hobbies")
Posted on 2007-06-30 20:46:30 by codename
Ehm, some time ago we discussed the speed of add/sub - and both in practice and in theory they turned out identical in speed, because modern CPUs don't use the obsolete simple add/sub circuit that has to invert and increment the input...
It's not true that we always strive hard to optimize everything... Around API calls the purpose is defeated, and in many places extra cycles don't mean a thing.
Posted on 2007-07-01 04:52:50 by Ultrano
Oh, but I am not saying we always strive to optimize everything. I am saying that we often forget the basics when dealing with complex problems. Sometimes the answer is as simple as that, and we end up chasing a solution that is far more work. Maybe minimalistic code could mean simple source code instead of just run-time execution simplicity.

Anyway, thanks; your comments are always useful. I did not know that modern processors did sub without add at all. At the moment I can't think of another way to do it without actually adding, so I will have to search for more papers on the subject.
Posted on 2007-07-01 09:18:37 by codename
X - Y = X + (~Y + 1). But that extra "+1" can be fed directly as a carry-in on the lowest bit's adder. So, for SUB you would just add a XOR element right after Y and control whether it inverts the input (Y). Thus add and sub really do share the same timing if they share the unit in this way. Build it all in the form of a carry-bypass adder, do some tuning, and you get the fastest add/sub unit :).
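A minimal sketch of that identity in asm (hypothetical values, just to illustrate; NOT plus one is literally what NEG does):

mov eax, 1234     ; x
mov ebx, 57       ; y
mov ecx, eax
sub ecx, ebx      ; ecx = x - y

not ebx           ; ~y
add ebx, 1        ; ~y + 1, the two's complement of y (same as neg ebx)
add eax, ebx      ; eax = x + (~y + 1)

cmp ecx, eax      ; both registers now hold the same value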

http://www.ece.rice.edu/Courses/422/1996/bomb/alu.html
Posted on 2007-07-01 10:06:21 by Ultrano
Hello :)

Guys, I think the optimization is not about the internal complexity of the execution unit but about the signed 8-bit immediate encoding. "sub eax, 128" has to be encoded with a 32-bit immediate, but in "add eax, -128" the immediate fits in a signed 8-bit immediate encoding, so the instruction is smaller.

Here's a disassembly of both:
00401000 > 2D 80000000      SUB EAX,80    ; 5 bytes: opcode + 32-bit immediate
00401005   83C0 80          ADD EAX,-80   ; 3 bytes: opcode + ModRM + sign-extended 8-bit immediate


Cheers
Posted on 2007-07-01 10:37:51 by LocoDelAssembly
LocoDelAssembly,

    What assembler/disassembler are you using?  MASM generates the following code.  Ratch

00000002  83 E8 50   SUB EAX,80
00000005  83 C0 B0   ADD EAX,-80
Posted on 2007-07-01 18:49:22 by Ratch
Doesn't matter, since he meant 80h, not 80 :). With anything in the ±127 range it'll be a 3-byte instruction either way. Because this 128-value case is one in 4 billion, it would be a funny statement of Ken Silverman's if he didn't mean the ALU internals.
Posted on 2007-07-01 19:08:46 by Ultrano
fasm & OllyDbg. The radix of OllyDbg is 16, so it shows hex numbers without decorating them with 0x, $ or h.

And yes, I think he wants to point out that signed 8-bit immediates go from -128 to 127. This property could be useful for accessing fixed locations in arrays or even structure fields, by using a pointer that points at the end of the array/structure, allowing us to access one more byte with a signed 8-bit displacement. It's the gift that the 2's complement binary representation gives to us :P

Note that actually the idea is stupid; a much better one would be placing the pointer 128 bytes ahead of the beginning of the data, allowing us to access 256 bytes using a signed 8-bit displacement (a little more if we count unaligned word, dword, qword or tword accesses).
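A rough sketch of that (the buffer label and register choice are hypothetical; assume buffer is a 256-byte array):

lea  esi, [buffer+128]    ; bias the pointer 128 bytes into the data
mov  al, [esi-128]        ; first byte of the buffer (disp8 = -128)
mov  al, [esi+127]        ; last byte of the buffer  (disp8 = +127)
; every byte of the 256-byte range is reachable with a 1-byte displacement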
Posted on 2007-07-01 19:58:16 by LocoDelAssembly
Ehm, I just pointed out how uncanny such an idea is ^^"

But it's funny/ironic how we ended up discussing/nitpicking things, when codename's topic was exactly against it  :lol: :lol: :lol:

As for the original topic, I'd put it differently (after all, complex maths and programming .. o_O ??!). Knowing asm, any programmer can start seeing many more ways to do the same thing, and often he can clearly see the pros and cons of each method. But add ease of implementation to the mix, and the decision becomes harder, and we can often throw logical thinking out the window.
Posted on 2007-07-01 20:35:17 by Ultrano
Sign extension is a nice trick for size optimization.
and  eax, 0FFFFFFFEh
and  eax, -2
The first is longer, and the same happens with similar instructions. The assembler cannot decide whether the value is signed or just unsigned, as in the case of 80h in a byte-size operation. It needs a little help: just write -1 instead of 0FFFFFFFFh and the code becomes more easily portable.
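Roughly, the two encodings in question look like this (byte values taken from the Intel opcode maps; whether you actually get the long form for the first line depends on the assembler, as discussed below):

25 FE FF FF FF    and eax, 0FFFFFFFEh   ; 5 bytes, full 32-bit immediate
83 E0 FE          and eax, -2           ; 3 bytes, sign-extended 8-bit immediate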
Posted on 2007-07-02 00:52:36 by asmfan
fasm uses the short encoding for both. Which assembler can't decide this by itself?
Posted on 2007-07-02 09:27:03 by LocoDelAssembly

fasm uses the short encoding for both. Which assembler can't decide this by itself?

IMHO it's not too smart that it does the short encoding of "AND EAX, 0FFFFFFFFh" - the constant entered by the programmer is long, so this is not WYTIWYG (what-you-type-is-what-you-get) translation (it's fine to choose the short encoding for "AND EAX, -1" though).

Yeah, this won't matter most of the time, and you can manually do "AND EAX, DWORD 0FFFFFFFFh", but still...
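In other words, something like this (fasm syntax as described above; I haven't re-verified the exact output bytes):

and eax, 0FFFFFFFFh         ; fasm emits the 3-byte form here (83 E0 FF)
and eax, dword 0FFFFFFFFh   ; the dword override forces the 5-byte form (25 FF FF FF FF)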
Posted on 2007-07-02 09:33:59 by f0dder
I think the OP is correct .. it is too easy to get caught up in the equations themselves .. especially when you hop around between a compiler and assembly .. this is because compilers target abstract machines

Compilers are based on equations and flow graphs .. these essentially define their abstract machine

Assembly is based on state transformations - the numerical components of these state transformations are only half the story - what compiler has an abstract machine that allows the programmer to emit an 'rcl' or 'rcr' instruction? (hell, half the languages don't even have a rotation operator, let alone the concept of a flags register)
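
For instance, a 64-bit shift on a 32-bit machine falls out naturally with rcl (a small sketch; using the usual edx:eax pair for the 64-bit value):

shl eax, 1      ; low dword: the top bit falls into the carry flag
rcl edx, 1      ; high dword: the carry enters at bit 0
; edx:eax has now been shifted left by one bit as a single 64-bit value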

Compilers have gotten really good at optimizing equations .. sure, I might be able to shave a cycle off of IntelC's version of some long numerical equation, but I honestly no longer expect to do better than that by just attacking the equation itself .. these days the big wins come from either an algorithm that cannot be described simply in C, or from attacking the links in the flow graph itself (for instance, I know which branches are more likely and which branches will be hard to predict, and so forth)

Posted on 2007-10-23 09:50:27 by Rockoon
I was thinking about this the other day, before reading your post. I came to the conclusion that as a programmer gets deeper and deeper into the confusing and bloated architectures of the computers on which he or she programs, it will get more and more difficult for him or her to write a program that is usable by a wide range of computer users. In HCI and UCD we learn how to write programs that are actually usable by others and not only accurate in our logic, but I sometimes find my own thinking and logic very different from others'. That happens to all of us, I believe. You can never write a program and say "Yeah, this is good and logical".
Posted on 2007-10-27 06:23:24 by XCHG
XCHG: programming != user interface design. For real world programs, you really ought to (have somebody who is qualified) spend some time on usability analysis, and involve the end-users at various stages in the design lifecycle (NOT just getting an initial problem description, and then final beta-testing).

The guts of the program don't have much to do with the user interface, once you move beyond the simplistic small things that most hobby programmers (including me) write :)
Posted on 2007-10-27 06:56:03 by f0dder
That was not my point. Somehow you always manage not to get my points! What I am saying is that the more we program, the more differently we think. The more we do that, the less aware we become of our users. For example, take YouTube: when you open the website, you have to move to the search box manually. That is like one "very basic" point they had to think of. That's programming and also UCD. The user wouldn't normally care about the videos shown on the main page. The user wants to search for what he or she wants to see, and yet I believe YouTube web programmers are like "That's perfect the way we have made it. That's logical. That's great. If you want to search, you have to click on the search edit box. That's beyond fantastic". So my point is that when you become aware of details, your way of thinking will differ by a wide margin from what the users expect from you.
Posted on 2007-10-27 09:35:46 by XCHG
But YouTube wants to sell advertising, which requires some stats: x number of people view our home page each day.

The two concepts are in conflict by definition - aren't they? Who can define the user? (c: If we do define the user then the problem is solved! And in doing so we exclude anyone who is !user!
Posted on 2007-10-27 17:06:46 by bitRAKE

That was not my point. Somehow you always manage not to get my points!

:)


What I am saying is that the more we program, the more differently we think. The more we do that, the less aware we become of our users.

I guess it depends on how you work. If you're a one-man team, you need to handle everything yourself, including user interfaces... but then you're probably not doing very large projects, and probably don't do (or need) user interface testing etc.

My point is that programmers aren't necessarily the best choice for designing user interfaces or doing usability testing. And why should they be, anyway? A programmer's job is to write code, and hopefully code that is efficient (enough) and bug-free.

I think individual programmers that aren't part of big teams care a lot about usability, but unless they're developing commercial projects, they often focus on their own needs. And other individual programmers just aren't professional enough to design things well.
As for youtube, dunno. Imho their main page is fine for such a service, so what if the searchbar doesn't have focus by default, it's a single <tab> to get there. Is there even a (clean and crossbrowser) way to specify which inputbox on a website has default focus? I personally think the youtube site is reasonably well done, and when somebody sends me a link (and I actually have the time (or lack of things to do :)) to watch, I often end up clicking through quite a few related videos.

It's also a clean and easy-to-navigate design, which isn't hard on the eyes. I think you could find some MUCH better (or worse, really ;)) examples of bad interface design and lack of care for the end-user (for instance, I often end up using google with a site: tag when I need to find anything at Microsoft).

Anyway, that was somewhat of a sidetrack.

Should programmers keep the end-user in mind? That really depends on what part of the program the programmer is working on. And the end-user and his needs have to be defined as well...
Posted on 2007-10-27 18:04:52 by f0dder
Let me put it this way  :lol:

In my operating system, when I was coding the scheduler, I was just overwhelmed with all the things that I had to consider: IRQs, task switching, critical regions, etc. I created a simple scheduler and, well, it worked, but it wasn't what I wanted. Then I decided to look at what I personally do to schedule my everyday life, and that helped a little bit. Then I started asking some other people what they do for scheduling priorities in their lives, and one of the interesting answers that I hadn't thought about came from my 12-year-old cousin: "I will play basketball all day if mom lets me, but if she wants to talk to me, I pretend like I wasn't playing at all". That gave me the idea of how the scheduler should, for example, serve an application whose timer has just expired right on time, and leave the other processes to be served next. I couldn't have thought about it as fast as he came up with the idea.

He thought about it so fast, first because he wasn't aware of why I was asking him, and then because he wasn't aware of all the complicated things I was aware of while making the scheduler. THAT IS WHAT I AM SAYING. f0dder, please understand this time? No one here is the end-user! I am talking about the logic behind things. I am saying that once you get too much into complicated matters, you can't think simple.
Posted on 2007-10-29 03:09:04 by XCHG
Everyone here is the end-user!

I am saying that once you get too much into complicated matters, you can't think simple.

I kind of fight something like that every day. When there is a risk of the code ever being slowed down or presented with limited resources, I sit down and think hard about how to shuffle the whole code from "doing the task" to "being convenient for the cpu/system, while doing the task in a similar way, flawlessly". Programmers that just know the HLL syntax (and think they only have to conform to the syntax)... which are probably 98% of the programmers out there... will do things 50 times faster than me. And then they count bugs (and patch them in a half-assed way, just to get by), degrade stability (due to bloating the usage of resources and not making fail-safe handlers for every possible case) and increase the required system specs (to mask the stability issues).
But optimizing and adding fail-safe handling is only done on several parts of most software, not on all of it. It's easy to get all nit-picky everywhere, so you've got to have something like a timer interrupt in your mind, constantly making you re-think what you're doing (and whether it is really the right thing to do).

About designing user interfaces... Joel Spolsky's first 100 articles are a must-read. After that, I think my ability to design interfaces improved drastically. I don't remember if he mentioned this, but here's how to check if your design is perfect: at least several hours after you've made the design (and meanwhile did not do any programming/design!), just run your app like a regular user would. Just don't claim you can't be a regular user - unless you've never seen/used other people's software. Make yourself extremely lazy and unwilling to read any onscreen text that is longer than 3 words. Start clicking around and toying with the keyboard randomly (almost like mad).

Then, keeping the lazy mindset, try to actually use your software; and in the back of your head keep statistics on which tasks take how much time, how many clicks, and how many more keystrokes than wanted. Just don't remember/imagine what code executes when. You're in "user-mode" now: all you care about is to finish some task using that software, and you express your anger when you feel something makes you miserable, and express hope when something else could make your life much easier. You don't want to study the new interface; you've already studied Word/Excel/Windows, so the menus and right-click should behave exactly like that - thus, conform to standards (when you code that GUI). No-one's ever going to right-click a button, for instance.
Go into user-mode on Monday mornings (provided that you didn't work on the project on Saturday and Sunday) - you have forgotten the code a bit more, so it's easy to really feel like a regular user (and detect all flaws in your design).
Ah, and there's something Spolsky emphasized: "design is all about trade-offs".
Anyway, just take a breather, and you can easily switch to user-mode.
Posted on 2007-10-29 15:00:11 by Ultrano