I'm having trouble finding ANY PIC burning software which RELIABLY accesses the printer port under XP - and it's infuriating !!
To this end, I am willing to write my OWN god damned LPT lab.
I'll start with the 16F84 but make it flexible enough to handle others later.

I'll do this because I have already tackled the issue of direct LPT I/O under XP.
Someone stop me before I get started - please !!
I've tried everything I could get my hands on but nothing is reliable.
You know what happens under most packages?
All the printer data lines go high-Z, as if it's in ECP 8-bit INPUT mode :|
Posted on 2003-06-07 01:48:07 by Homer

I think it would be nice to have such a source to work with for future problems. But I can't in good conscience let you go through with such a project without first pointing out that you might want to try reconfiguring your BIOS for the LPT port. I remember having a headache with my laptop until I changed it from ECP (or whatever the newest standard is) to basic LPT (the BIOS has 3 mode selections for the port).

Hope this helps..
Posted on 2003-06-07 02:14:53 by NaN
Yeps - my software is naive; it determines the CURRENT state of the parallel port PIO, which from memory is an 8255? Anyway, it's not the BIOS that has mode selections per se; it's really that PIO chip, which is the driver for the parallel port.
You can actually access it yourself in software, which mine does, and then set it to "bidirectional 8-bit" mode.
Well, that's my understanding of it, and my demo works - you can watch the pinouts in relative realtime and toggle the bits of the data, control and status ports. I'm using WinIo for the driver, and I'm polling the ports in the message loop.
I modify global vars which are monitored by the display code within WndProc.
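The "bidirectional 8-bit" setting described above boils down to one bit in the standard SPP control register. A minimal sketch of just the bit manipulation, assuming a PS/2-style bidirectional port; the actual register read/write would go through a ring-0 driver such as WinIo under XP, so only the pure logic is shown:

```c
#include <assert.h>

/* Standard SPP register offsets from the base address (0x378 for
 * LPT1 on most machines): data = base+0, status = base+1,
 * control = base+2.  On a PS/2-style bidirectional port, bit 5 of
 * the control register selects the data-line direction:
 * 1 = input (the data pins float high-Z so you can read them),
 * 0 = output (the port drives the pins). */
#define CTRL_DIRECTION_BIT 0x20

/* Return a control byte adjusted for the requested direction. */
unsigned char set_data_direction(unsigned char ctrl, int input)
{
    if (input)
        return (unsigned char)(ctrl | CTRL_DIRECTION_BIT);
    return (unsigned char)(ctrl & ~CTRL_DIRECTION_BIT);
}
```

Data lines stuck at high-Z, as in the first post, are consistent with that bit having been left set by another driver.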

It's something I wrote to debug a DRO not too long ago...
Posted on 2003-06-07 07:32:45 by Homer
Ernie, that would probably be the DATARASE IIAC ER2-ND for $39.95. Unfortunately it's no longer available from Digi-Key. A quick Google shows that http://www.generaldevice.com/ sells it.

MArtial_Code's Homebrewed UV Eraser
Posted on 2003-06-08 01:04:47 by eet_1024
IF I was going to roll my own PIC programmer, I'd make it USB based, with both a GUI and a command-line interface (or maybe a COM control, so other programs could direct the task). Varying VCC between min and max (a Microchip requirement for consideration as a 'production' programmer) would be there. Programming the older parallel types is not essential.

But that's an IF.
Posted on 2003-06-09 22:50:07 by Ernie
Yeah, I was daydreaming about a USB-based programmer too. I hate having that thick parallel port cable in the way. At least Microchip provides full documentation on programming their PICs for free.
Posted on 2003-06-11 02:05:08 by eet_1024
It's a chicken-and-egg problem too, since the logical choice for a PIC programmer is another PIC (the PIC16C745 being a good start). However, that still leaves the problem of how you program your programmer.

Incidentally, I just got my Pro Mate programmer up and running for ICP (in-circuit programming) of the PIC16C620A's I'm using. I have a small project box that holds the PCB on a bed of nails and connects to the Pro Mate through a video cable (15-pin D-type).

The Pro Mate can be enabled to program with a switch on the project box, so your hands stay on one box. You also get drive lines for PASS and FAIL bin indicators; two LEDs serve that purpose.

It would be a perfect stand-alone programmer IF the Pro Mate didn't need TWO wall wart power blocks.

Posted on 2003-06-11 07:10:04 by Ernie
It's not a problem for me, I have my own chicken, and access to one at work; or would that be the egg?

I was thinking about using the 16C745. If I was to develop something, I think it would be a generic serial port with access to the other I/O lines. Then I could use the same PIC for chatting to OBD I and II computers. When I say generic, I would like to connect to the OBD I on GM vehicles; they use 8192 baud.
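A generic serial port mostly comes down to picking the right baud divisor for each rate. A small sketch of that arithmetic using the standard PIC USART high-speed formula, baud = Fosc / (16 * (SPBRG + 1)); the 24 MHz clock in the example values is an assumption (a common clock for the USB PICs), not a figure from this thread:

```c
#include <assert.h>

/* Nearest SPBRG value for a target baud rate, per the standard
 * PIC USART high-speed formula: baud = Fosc / (16 * (SPBRG + 1)). */
long spbrg_for_baud(long fosc, long baud)
{
    /* add half the divisor so integer division rounds to nearest */
    return (fosc + 8 * baud) / (16 * baud) - 1;
}
```

With an assumed 24 MHz clock, the GM 8192 baud rate gives SPBRG = 182, which is within about 0.06% of nominal - well inside normal UART tolerance.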
Posted on 2003-06-12 02:15:25 by eet_1024
Well, I'm back in business ;)

I paid what I feel is *way* too much for a timer/bulb/box, but I bit the bullet and coughed up the cash for a UV eraser. Time to start designing the programmer for it... I think I understand the USB side already (theory/coding), so I should be testing my thoughts shortly...

Posted on 2003-07-28 18:10:38 by NaN
I just dusted off my PIC stuff a few weeks ago and programmed a 16F84 for the first time. Flash sure beats the 15-minute erase cycle :)

I wrote an 8-bit counter as a test. The hexadecimal displays I have came in handy; Jameco 32951.
Posted on 2003-07-29 00:52:22 by eet_1024
Well, I just carefully reviewed the Programming Spec for the 16C7x5 USB chips, found here (http://www.microchip.com/download/lit/suppdoc/specs/30228k.pdf).

I have one question they didn't make absolutely clear for me; perhaps someone can shed more light on it. On page 7 they have a flow chart of the program flow. It shows a "Program Cycle" subfunction on the right edge. There is a 100us "Wait" cycle shown, as well as briefly mentioned in the following text.

My problem is that the COMMAND statements (Load Data, Begin Programming, End Programming) don't take much time at all: 200ns * 22 = 4.4us, + 3us = 7.4us (safely). This includes a write of one 14-bit data/program word as well. So where is all the extra time going?
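For reference, the 22 clocks in that estimate line up with the ICSP serial framing on the 14-bit-core parts: 6 clocks for the command, plus a 16-clock data frame (start bit, 14 data bits LSB-first, stop bit). A trivial sketch of the arithmetic:

```c
#include <assert.h>

/* ICSP serial framing on the 14-bit-core PICs: every command is
 * 6 clocks, and a data payload rides in a 16-clock frame
 * (start bit + 14 data bits LSB-first + stop bit).
 * 6 + 16 = 22 clocks total for a command with data. */
enum { CMD_CLOCKS = 6, DATA_FRAME_CLOCKS = 16 };

/* Time in ns to clock one command plus one 14-bit data word at a
 * given minimum clock period (200 ns in the estimate above). */
long command_with_data_ns(long clock_period_ns)
{
    return (CMD_CLOCKS + DATA_FRAME_CLOCKS) * clock_period_ns;
}
```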

I didn't find the spec all that helpful in clearly spelling out the sequence of events. They show timing diagrams for the Load command, but nothing is given for the entire Load/Begin/"wait"/End cycle.

If someone can offer help it would be appreciated.
Posted on 2003-07-30 00:03:21 by NaN
Hi, NaN,

These are EPROM-based devices, so their programming algorithm is similar to an EPROM's.

Let's go back a little to how you program typical EPROMs: you present the address and data (parallel) to the EPROM, with Vpp at 12.25~13.0V, and you apply a 100us programming pulse at the PGM pin. Then you verify that the data have been written. If not, you apply another 100us pulse and check again. When the data have been written correctly, you apply a pulse 3*N as long (or 3*N more pulses), where N is the number of previous unsuccessful attempts.
Then you increment the address and program the next byte. If a byte does not get programmed correctly after a number of attempts (10~25 typically), then you have a programming failure.
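That program/verify loop can be sketched against a simulated cell; pulses_needed is a stand-in for real hardware, and the 3*N over-programming rule is the one from the paragraph above:

```c
#include <assert.h>

/* Simulated EPROM cell: reads back correctly only after a certain
 * number of programming pulses have been applied. */
typedef struct {
    int pulses_needed;   /* pulses before the cell verifies */
    int pulses_applied;
} SimCell;

static void pulse(SimCell *c) { c->pulses_applied++; }

static int verify(const SimCell *c)
{
    return c->pulses_applied >= c->pulses_needed;
}

/* Pulse until the byte verifies, then apply 3*N over-programming
 * pulses, where N is how many pulses it took.  Returns the total
 * pulses applied, or -1 on programming failure. */
int program_byte(SimCell *c, int max_attempts)
{
    int n = 0, i;

    while (!verify(c)) {
        if (n >= max_attempts)
            return -1;           /* byte never verified: failure */
        pulse(c);
        n++;
    }
    for (i = 0; i < 3 * n; i++)  /* 3*N over-programming pulses */
        pulse(c);
    return c->pulses_applied;
}
```

A cell that takes 3 pulses ends up with 12 total (3 to verify plus 9 over-program), which is the same ratio NaN noticed in the app note later in the thread.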

These PIC devices use the same principle, but the address and data are entered serially (you only need 2 pins; for parallel operation you would in most cases need more pins than the package has).

The programming pulses are generated internally. The device starts the internal programming pulse when it receives the "Begin programming" command and it ends it when it receives the "End programming" command. (See par. "A programming pulse is defined...").
Therefore, YOU MUST SUPPLY the two commands 100us apart, read back the data and verify, etc.
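So the host, not the chip, times the pulse. A sketch of that sequence with the bit-banging stubbed out into a log so the ordering can be checked; the command mnemonics follow the 14-bit-core ICSP spec, and the 100us figure is the one quoted above:

```c
#include <assert.h>
#include <string.h>

/* Stubbed transport: commands and delays are recorded rather than
 * clocked onto real ICSPCLK/ICSPDAT pins. */
static const char *cmd_log[8];
static long delay_log_us;
static int  cmd_count;

static void send_command(const char *cmd) { cmd_log[cmd_count++] = cmd; }
static void wait_us(long us)              { delay_log_us += us; }

/* One externally timed program cycle: the host clocks in Begin
 * Programming, supplies the ~100us itself, then clocks in End
 * Programming and reads back to verify. */
void program_one_word(void)
{
    send_command("LOAD_DATA");         /* clock in the 14-bit word */
    send_command("BEGIN_PROGRAMMING"); /* internal pulse starts    */
    wait_us(100);                      /* host times the pulse     */
    send_command("END_PROGRAMMING");   /* internal pulse ends      */
    send_command("READ_DATA");         /* read back and verify     */
}
```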

Hope this helps.

P.S. A piece of advice: Do not code protect the windowed versions of these devices if you are going to use them just for yourself. They take forever (many erasing cycles) to erase. It seems the code protection bit is buried deeper into the silicon.
Posted on 2003-07-30 20:58:29 by VVV
P.S. A piece of advice: Do not code protect the windowed versions of these devices if you are going to use them just for yourself. They take forever (many erasing cycles) to erase. It seems the code protection bit is buried deeper into the silicon.

Just how many erase cycles? I have a couple that won't program; either they're toast or CP was set.
Posted on 2003-08-02 06:09:23 by eet_1024
Where I work we use PICs extensively. We once code-protected 10 windowed devices, and 6 of them are still protected after more than 10 erasures.

We contacted Microchip about the issue and they said they did not recommend code protecting the windowed devices. They pointed out that the datasheets contained a note about that.
Sure enough, if you look in the paragraph "Special features of the CPU", under "Program verification/ Code protection", you will find the note (in the latest datasheets).

The "unofficial" explanation was that the code protection bits were buried deeper into the silicon.
Maybe it was intentional, maybe it was accidental.
To put a positive spin on that, if you ship a product with a windowed device, a software pirate would not "get lucky" by doing a partial erase, hoping the CP bit would be erased, while the rest of the code would be almost intact.

As for the devices you have, try reading the config word. If CP is on, maybe there is still a chance.
Otherwise, I would say they are toast.

(I haven't given up hope on those 6 devices; whenever I erase something, they get it, too).
Posted on 2003-08-02 22:39:55 by VVV
Thanks for the Tip VVV!

I found an app note that has a more or less programmatic explanation of what I was looking for (which is why I haven't been back till now). I've studied the entire source and got a better feel for how it's supposed to work. I hadn't woken up to the fact that you send the data first, then "pulse" program it. I thought it was the other way around, all in one step.

The Appnote is found here if anyone is interested:


The example has a programming "jig" that will extract calibration parameters from a chip and reprogram them into the chip. It assumes that the calibration parameters are generated from the final product, with a special code page for calibration if not yet programmed. Thus the calibration parameters are generated per device, specific to the "quirks" of each device's sensors etc. It's an interesting article, really. There are a lot of good code snippets as well, for RS232 etc.

The real surprise is the number of times it programs every byte. If it takes 3 times to get it right, it will then do it 3*3=9 more times on top of that as "over-program" insurance. I didn't realize that EPROM is this lethargic when being programmed. Like I said earlier, I thought it does it all in one step, data & program.

Actually, while I'm here, there *is* one thing that I didn't understand from the above source. On page 15, line/address 027E, there is a line "movlw UPPER6BITS".

Earlier, UPPER6BITS is equated to 0x34. My misunderstanding comes from the fact that every "word" that is programmed is 14 bits long, making a high byte and a low byte. The high byte is 6 bits, and the low byte is 8 bits. I would assume that the data to program would be split this way, but it turns out that the data being programmed is on a per-byte basis, where the upper 6 bits are hard-coded to 0x34 every time! The lower 8 bits are data from the programming buffer!!

Can anyone explain the mentality behind this?? What does the 0x34 mean? Is it a special code to imply that the lower 8 bits are actual data? Or does it mean "store the lower 8 bits in data memory"??

This is kind of a surprise, and may be a big issue if I'm going to write my own ISP programmer. This example programs calibration data; however, I will want to program everything (data and code).

I'm tossing this thought out early. I haven't consulted the spec sheets regarding this issue yet. If you know offhand or have advice, I would be happy to hear it ;)

Posted on 2003-08-02 23:30:26 by NaN
Never mind - I'm off track again!

I figured it out. The side notes puzzled me originally, and I kinda ignored them. After posting this and re-reading it, I see what they are doing now.

The 0x34 is the OPCODE for "retlw". The lower 8 bits are the calibrated 8-bit data to return. Hence this example programs a look-up table, not just raw data into data memory!
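For completeness, building such a table entry is just OR-ing the literal under the retlw opcode; on the 14-bit core the upper 6 bits of retlw assemble to 0x34, which is exactly the hard-coded UPPER6BITS:

```c
#include <assert.h>

/* Build the 14-bit program word for "retlw k" on the 14-bit PIC
 * core: the upper 6 bits are the opcode (binary 110100 = 0x34) and
 * the lower 8 bits are the literal returned in W - which is why
 * the app note hard-codes 0x34 as the high byte of every
 * look-up-table entry. */
unsigned int encode_retlw(unsigned char k)
{
    return (0x34u << 8) | k;
}
```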

I get it now. Anywho, thanks for reading this anyway! If you have anything more to add/share, I am still open to your thoughts ;)

Posted on 2003-08-02 23:34:29 by NaN
The learning curve of Microchip's software? Needn't bother. You can write the machine code in, say, Debug or any hex editor, then use this program to convert it into Intel HEX file format. When using MPLAB, go to the File menu and choose "Import Hex File".
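For anyone writing such a converter themselves, an Intel HEX record is just ASCII hex plus a two's-complement checksum. A minimal sketch covering data records only (the function name is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build one Intel HEX data record, ":LLAAAATTDD..CC": LL = byte
 * count, AAAA = load address, TT = record type (00 = data), CC =
 * checksum (two's complement of the sum of every preceding byte).
 * This is the format MPLAB's "Import Hex File" reads. */
void hex_record(char *out, unsigned addr, const unsigned char *data, int len)
{
    unsigned sum = (unsigned)len + ((addr >> 8) & 0xFF) + (addr & 0xFF);
    int i, pos;

    pos = sprintf(out, ":%02X%04X00", (unsigned)(len & 0xFF), addr & 0xFFFF);
    for (i = 0; i < len; i++) {
        sum += data[i];
        pos += sprintf(out + pos, "%02X", data[i]);
    }
    sprintf(out + pos, "%02X", (0x100u - (sum & 0xFF)) & 0xFF);
}
```

For example, the bytes 0x01 0x02 at address 0x0010 come out as ":020010000102EB".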
Posted on 2003-08-08 16:48:56 by mrgone