I need some advice on LAN performance. I have this "file server" application that I want to run on a Windows server. It's kinda like ISAM/VSAM for you old mainframe folks. Please, let's not get into "SQL is better" etc., I have good reasons for wanting to do this.

It works like this. The "client" makes a request and passes a parameter "block". The block contains a function and related info. It can be very short, as in OPEN with a filename, READ with a key, or just GET NEXT with nothing. But it can also be very long, as in ADD new record, where the whole record is passed in the block. The "server" processes the request and sends back info in the same parameter block. Again, it can be very short, such as exception codes like KEY NOT FOUND or END OF FILE, and in cases like ADD new record, the normal answer is just "DONE". But a READ request, for example, results in the entire record being passed back to the user in the block.

This "routine" exists today. It has been "performance tweaked" over the years in the world of DOS, 32-bit mixed-mode "386" code. Network support is limited to "record locking", but given the nature of the applications using the service, this has not caused any problems. It already does an excellent job at buffer management. It dynamically adjusts index levels as a file grows and shrinks. It supports backout/restart. It allows user-defined key lengths, "variable length" record updates, and forward, backward, or random access. It reuses "deleted" space when possible. In general, it does a pretty darn good job at what it does...

Now, I know that there are better ways to do things today that could reduce network traffic, but that's not the point here. In normal operation, large amounts of data are going to pass over the LAN, no matter how well it is cached or synchronized. My main concern is how fast I can get that data on and off the network.

Most clients are interactive "file maintenance" programs, kinda like old dBase. Some are "batch" programs that process entire files. For big files, I plan to run these on the server when possible to avoid network traffic, and/or schedule them "after hours" when server overhead or network traffic isn't as big of an issue. They are not a problem, but a user can still read a large section of a file, if not the entire file, resulting in heavy network traffic. There are also a few "administrator" type programs that do things like BACKUP and IMPORT, produce statistics, and even attempt recovery of a physically damaged file (luckily, very little need for that one!). These programs bypass the "user interface" and deal with the files physically, so they don't apply here.

User programs are written in various languages, including DOS MASM. Native support is C, and small interfaces talk to DOS Basic and Micro Focus Cobol. The same ".OBJ" is always linked with the user program, one way or another. It is important that user programs maintain the same "calling conventions" in the new environment. What I envision is to write a new ".OBJ" that is called with the same parameters, but does whatever it takes to pass the request to the server, wait for it, and send the answer back up to the user. The server needs to be able to process requests from an old DOS Cobol program, or a brand new, hasn't-been-written-yet, WIN32ASM program on the same LAN. :)

So what's the best way to talk to a "server" for this type of application? Do the above "compatibility limitations" rule out COM/DCOM (Ernie)? Is netbios "faster" than sockets?
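Just to make the parameter block idea concrete, here's roughly how I picture it in C. The struct layout, field names, sizes, and the entry point name are made up for illustration, not the actual format of the existing routine:

```c
/* Hypothetical layout of the parameter "block" -- field names and sizes
   are illustrative only, not the real routine's format. */

enum isam_func {
    F_OPEN, F_CLOSE, F_READ_KEY, F_GET_NEXT, F_GET_PREV,
    F_ADD, F_UPDATE, F_DELETE
};

enum isam_status {
    S_DONE, S_KEY_NOT_FOUND, S_END_OF_FILE, S_LOCKED, S_IO_ERROR
};

typedef struct {
    unsigned short func;       /* request: which operation               */
    unsigned short status;     /* reply:   DONE, KEY NOT FOUND, EOF, ... */
    unsigned short key_len;    /* length of key, if any                  */
    unsigned short rec_len;    /* length of record data, if any          */
    char           name[64];   /* filename for OPEN                      */
    char           key[64];    /* key for READ / positioning             */
    char           data[4096]; /* record body for ADD/UPDATE, or the
                                  record handed back by READ / GET NEXT  */
} isam_block;

/* The entry point user programs already link against; the replacement
   .OBJ would keep the same signature but forward the block to the server. */
int isam_call(isam_block *blk);
```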
I don't mind learning details, if I can get significant results. What about named pipes, which I think are available only on NT? In fact, since NT has a "server" version, does it offer anything that I can take advantage of in this case? And how much support is available to a "DOS" user with each method? How's that for a newbie question... ;)
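For reference, here's roughly what I picture the replacement ".OBJ" doing if I went with plain Winsock sockets. The server address, port, and helper names are placeholders, and a real stub would frame each message by the actual block length instead of sizeof:

```c
/* Rough Winsock sketch of the forwarding stub: send the block to the
   server over TCP, then wait for the reply in the same block. */
#include <winsock2.h>
#include "isam_block.h"   /* the hypothetical block layout sketched above */
#pragma comment(lib, "ws2_32.lib")

#define ISAM_PORT   5150               /* made-up port number            */
#define ISAM_SERVER "192.168.0.10"     /* made-up server address         */

static SOCKET g_server = INVALID_SOCKET;   /* one connection, reused     */

static int ensure_connected(void)
{
    WSADATA wsa;
    struct sockaddr_in sa;

    if (g_server != INVALID_SOCKET)
        return 0;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return -1;
    g_server = socket(AF_INET, SOCK_STREAM, 0);
    if (g_server == INVALID_SOCKET)
        return -1;
    sa.sin_family      = AF_INET;
    sa.sin_port        = htons(ISAM_PORT);
    sa.sin_addr.s_addr = inet_addr(ISAM_SERVER);
    return connect(g_server, (struct sockaddr *)&sa, sizeof(sa)) == 0 ? 0 : -1;
}

/* Send or receive exactly len bytes; TCP may split them across calls. */
static int xfer(SOCKET s, char *p, int len, int sending)
{
    while (len > 0) {
        int n = sending ? send(s, p, len, 0) : recv(s, p, len, 0);
        if (n <= 0) return -1;
        p   += n;
        len -= n;
    }
    return 0;
}

/* Same entry point and parameters the user programs link against today. */
int isam_call(isam_block *blk)
{
    if (ensure_connected() != 0)
        return -1;
    if (xfer(g_server, (char *)blk, sizeof(*blk), 1) != 0)   /* request */
        return -1;
    if (xfer(g_server, (char *)blk, sizeof(*blk), 0) != 0)   /* reply   */
        return -1;
    return blk->status;   /* exception codes come back in the block itself */
}
```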
Posted on 2001-02-04 08:35:00 by S/390
Too bad the C in COM stands for Component and not Compatibility. COM is completely about compatibility, since COM is a specification of how objects talk together (ie, an interface specification). It is intentionally independent of language, which is why you see VC objects talking to VB talking to Java talking to...

I am NOT a network guy. And while COM may be language independent, less may be said about its OS dependence. I believe DCOM (or this may be COM+) is dependent on MTS (MS Transaction Server) and the OLE DLLs. OLE uses the MTS framework to pass info. I've never played with it. I will tell you that with DCOM, you access such objects just as if they were right inside your process, which is the whole point of COM: easy reusability.
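If you did go the DCOM route, creating the object on another machine and then calling it looks something like this. The CLSID/IID, interface, and machine name here are made up, and you'd need CoInitializeEx and a registered proxy/stub before any of it works:

```c
/* Sketch of DCOM "location transparency": ask COM to create the object on
   a remote machine, then call it through the returned interface pointer
   exactly as if it lived in-process. CLSID, IID, and machine name are
   hypothetical placeholders. */
#define COBJMACROS
#include <objbase.h>

HRESULT create_remote_server(REFCLSID clsid, REFIID iid,
                             const wchar_t *machine, void **out)
{
    COSERVERINFO si  = { 0 };
    MULTI_QI     mqi = { 0 };
    HRESULT      hr;

    si.pwszName = (LPWSTR)machine;     /* e.g. L"FILESERVER" (made up)    */
    mqi.pIID    = iid;

    hr = CoCreateInstanceEx(clsid, NULL, CLSCTX_REMOTE_SERVER,
                            &si, 1, &mqi);
    if (SUCCEEDED(hr) && SUCCEEDED(mqi.hr))
        *out = mqi.pItf;               /* use it like any local object    */
    return FAILED(hr) ? hr : mqi.hr;
}
```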
Posted on 2001-02-04 10:34:00 by Ernie