I want to make some kind of program that uses multiple computers over the internet. A kind of napster-like system, but something that is more in demand or has less competition, because as we all know there are a thousand napsters. Anyone have any ideas on where I should start?
Posted on 2001-09-30 13:56:42 by nin
Get a good grasp of winsock :). Read up on various communication
protocols (FTP, HTTP, and such) to become familiar with the usual
methods of data transfer.

Ask yourself if you want peer-to-peer or server.

Ask yourself if you *really* want it :)
Posted on 2001-09-30 14:06:48 by f0dder
I don't mind either way. I just need ideas on this sort of thing to get a direction. I'm already quite familiar with winsock and data transfer.
Posted on 2001-09-30 14:20:34 by nin
Writing a protocol that suits your requirements could be a first step. Of course you could use any existing protocol, but I don't know of any that is designed for napster-like programs.
Just make some basic commands to retrieve file lists from a computer, download or upload something, etc.
Then write the winsock code for it. Then write your program around it. That's how I would do it.
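For example, a first cut at such a command set might look like the sketch below; the command names, numbering, and header layout are all invented for illustration, not any standard:

```c
/* Hypothetical wire format for a napster-like protocol -- one possible
   layout, made up for illustration. */
#include <stdint.h>
#include <stdio.h>

enum command {
    CMD_LIST     = 0,  /* request the peer's file list */
    CMD_DOWNLOAD = 1,  /* request a file from the peer */
    CMD_UPLOAD   = 2,  /* offer a file to the peer     */
    CMD_QUIT     = 3   /* close the session            */
};

/* Every message starts with a fixed header; 'length' is the number
   of payload bytes that follow it on the wire. */
#pragma pack(push, 1)
struct msg_header {
    uint8_t  command;   /* one of enum command   */
    uint32_t length;    /* payload size in bytes */
};
#pragma pack(pop)

int main(void)
{
    struct msg_header h = { CMD_LIST, 0 };
    printf("header is %zu bytes, command %u\n",
           sizeof h, (unsigned)h.command);
    return 0;
}
```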

Thomas
Posted on 2001-09-30 17:15:02 by Thomas
hi Thomas,
I just came across this posting and it's really a very interesting subject. How can one make his/her own protocol, and then make up the basic commands like download, upload, etc.?
If you have any source or links or whatever that relates to this subject, please let me know.
Thanks.
Posted on 2001-10-30 01:30:35 by Ray
Making your own protocol? Well, first you decide whether you want
TCP or UDP (there's not really any point in using something else). For the file
data transfer itself, it would be stupid to use anything but TCP; if you
want reliable transfers, that is. Otherwise you'll end up building your
own TCP-ish thing on top of UDP, which is a waste of time.
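In winsock terms that choice boils down to the socket type. A bare-bones sketch, with error handling trimmed down (link with ws2_32.lib):

```c
/* Minimal winsock sketch: a TCP socket for the reliable file-transfer
   side of things (SOCK_DGRAM + IPPROTO_UDP would be the UDP choice). */
#include <winsock2.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    /* SOCK_STREAM + IPPROTO_TCP gives reliable, ordered delivery. */
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) {
        printf("socket() failed: %d\n", WSAGetLastError());
        WSACleanup();
        return 1;
    }

    closesocket(s);
    WSACleanup();
    return 0;
}
```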

As for the rest of the protocol? It all depends on what you want to
do :). You could have command bytes, where 0 would mean search,
1 retrieve, 2 disconnect (or whatever...). Or you could use strings
like "search reallygreatfile", "retrieve joebloggs reallygreatfile",
and so on.

It depends pretty much on what you want to do, so it's hard to
give any generic advice.
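A rough sketch of the command-byte approach, with stub handlers that just print what they would do (the handler names are made up for illustration):

```c
/* Dispatching on a one-byte command: 0 = search, 1 = retrieve,
   2 = disconnect, per the scheme above. */
#include <stdint.h>
#include <stdio.h>

#define CMD_SEARCH     0
#define CMD_RETRIEVE   1
#define CMD_DISCONNECT 2

static void handle_command(uint8_t cmd, const char *payload)
{
    switch (cmd) {
    case CMD_SEARCH:     printf("search for: %s\n", payload); break;
    case CMD_RETRIEVE:   printf("retrieve: %s\n", payload);   break;
    case CMD_DISCONNECT: printf("peer disconnecting\n");      break;
    default:             printf("unknown command %u\n", (unsigned)cmd);
    }
}

int main(void)
{
    handle_command(CMD_SEARCH, "reallygreatfile");
    return 0;
}
```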
Posted on 2001-10-30 05:17:17 by f0dder
LOL, I didn't think this thread was still alive. I'm in the process of designing a media sharing system which will use users as servers automatically, as well as letting users set up servers themselves. How do you think this system should be designed so that it works fast and efficiently for searching file databases, retrieving results and downloading files? I have designed things so that main servers can select child servers to help alleviate CPU time and bandwidth, and also so that child servers broadcast info to all peers connected to the main server, etc.
What do you guys think?
Posted on 2001-10-30 06:05:17 by nin
f0dder wrote: Making your own protocol? Well, first you decide whether you want
TCP or UDP (there's not really any point in using something else). For the file
data transfer itself, it would be stupid to use anything but TCP; if you
want reliable transfers, that is. Otherwise you'll end up building your
own TCP-ish thing on top of UDP, which is a waste of time.


I didn't mean writing your own TCP- or UDP-like protocol; I meant the second type of protocol f0dder mentioned, something like HTTP, FTP, etc.: the basic commands to communicate.
Using a text-based protocol has some advantages, because it's easier to test the communication by telnetting to your server and typing in the commands, but sending binary data will be a problem then.
But binary commands aren't hard to program, especially not in asm.
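One common way around the binary-data problem is to keep the commands textual but announce the payload size up front, the way HTTP does with Content-Length. A minimal sketch; the "DATA" keyword and frame layout are invented for illustration:

```c
/* Writes a text header "DATA <n>\r\n" followed by n raw payload
   bytes into 'out'. Returns the total frame size, or -1 if the
   frame doesn't fit in the buffer. */
#include <stdio.h>
#include <string.h>

static int frame_data(char *out, size_t outsize,
                      const void *payload, size_t n)
{
    int hdr = snprintf(out, outsize, "DATA %zu\r\n", n);
    if (hdr < 0 || (size_t)hdr + n > outsize)
        return -1;
    memcpy(out + hdr, payload, n);
    return hdr + (int)n;
}

int main(void)
{
    char frame[64];
    int len = frame_data(frame, sizeof frame, "\x01\x02\x03", 3);
    printf("frame is %d bytes\n", len);
    return 0;
}
```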

nin wrote:
LOL, I didn't think this thread was still alive. I'm in the process of designing a media sharing system which will use users as servers automatically, as well as letting users set up servers themselves. How do you think this system should be designed so that it works fast and efficiently for searching file databases, retrieving results and downloading files? I have designed things so that main servers can select child servers to help alleviate CPU time and bandwidth, and also so that child servers broadcast info to all peers connected to the main server, etc.
What do you guys think?


The search databases should have some kind of fast searchable index (with a good hash algorithm like MD5); the problem will be where this db is stored... Maybe you could send a 'search commando' to all servers reachable from you and then receive as many replies as possible with the IPs from which the found files can be downloaded. Then use a separate data connection to connect to the users that have those files...
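Something along these lines, say: hash each filename into a bucket so a lookup doesn't have to scan every entry. A real system would use a proper digest like MD5; the toy hash and all the names here are just stand-ins:

```c
/* Sketch of a hashed file index mapping names to owner IPs. */
#include <stdio.h>
#include <string.h>

#define BUCKETS 256

struct entry {
    const char *filename;
    const char *owner_ip;   /* who to connect to for the download */
    struct entry *next;
};

static struct entry *index_table[BUCKETS];

static unsigned hash_name(const char *s)
{
    unsigned h = 5381;      /* djb2 toy hash, stand-in for MD5 */
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

static void index_add(struct entry *e)
{
    unsigned b = hash_name(e->filename);
    e->next = index_table[b];
    index_table[b] = e;
}

static const char *index_lookup(const char *name)
{
    for (struct entry *e = index_table[hash_name(name)]; e; e = e->next)
        if (strcmp(e->filename, name) == 0)
            return e->owner_ip;
    return NULL;
}

int main(void)
{
    struct entry e = { "reallygreatfile.mp3", "10.0.0.7", NULL };
    index_add(&e);
    printf("found at %s\n", index_lookup("reallygreatfile.mp3"));
    return 0;
}
```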

Thomas
Posted on 2001-10-30 12:38:11 by Thomas
Thomas, a 'search commando' is somebody you don't want after
you - the right word (I guess) is 'search command'. Sorry, couldn't
help it :).

As for the searching, I assume this will be a peer-to-peer network,
which is after all the most reasonable approach for a filesharing network...
no main servers that can be taken down, and it's harder to punish the
creators of the filesharing technology (I guess).

Consider a (big) peer-to-peer network. Some users will have megabit
ADSL connections, some will have 33k modems. Some might be on
an OC48 or whatever. You'll want some way to make the hefty machines
take more requests than the smaller machines, and to have caches
of previous results, or something similar. If you just send out a
search with a time-to-live that goes to "whatever" machines, the
search could end up painstakingly slow (33.6 modems... bad ping rates...).

A good way might be implementing "supernodes". If a machine is
deemed worthy (CPU time (not so important?), bandwidth), it will be
promoted to a "supernode". It should then ask the network for other
known supernodes and "hook up with them", getting their cached
filelists (which should probably have a time-to-live).
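The time-to-live bit could be as simple as stamping each cached filelist with an expiry time. A sketch; the 10-minute TTL and the field names are invented for illustration:

```c
/* Cached filelists age out after CACHE_TTL seconds. */
#include <stdio.h>
#include <time.h>

#define CACHE_TTL (10 * 60)   /* seconds a cached filelist stays valid */

struct cached_list {
    const char *peer_ip;
    time_t      expires;
};

static void cache_refresh(struct cached_list *c, const char *peer_ip)
{
    c->peer_ip = peer_ip;
    c->expires = time(NULL) + CACHE_TTL;
}

static int cache_valid(const struct cached_list *c)
{
    return time(NULL) < c->expires;
}

int main(void)
{
    struct cached_list c;
    cache_refresh(&c, "10.0.0.9");
    printf("cache %s\n", cache_valid(&c) ? "fresh" : "expired");
    return 0;
}
```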

When a regular client connects to the network, it asks "whatever"
machines if they know a supernode. You could perhaps even scan
through an IP range, but some ISPs don't like this, and if done wrongly
it could cause some impressive network slowdown. When a supernode
is found, the client will upload its filelist to the supernode, for caching.
Any search requests will then be processed by the supernodes.

This way, searches will not be limited by the speed of one or
two stupid 33.6 modems :]. The requirements for being a supernode
probably won't be too large. A 256/128 line would probably be quite
sufficient. In fact, if you set the supernode threshold too high, the requirements
will be higher, because there will be fewer supernodes and each will
have to do more processing :).

Load distribution between supernodes would be a good thing.
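That could start out as simply as routing each request to the least-loaded supernode you know of. Every name below is illustrative, and a real network would probably weigh bandwidth as well as client count:

```c
/* Pick the supernode currently serving the fewest clients. */
#include <stdio.h>

struct supernode {
    const char *ip;
    int active_clients;
};

static struct supernode *least_loaded(struct supernode *nodes, int n)
{
    struct supernode *best = &nodes[0];
    for (int i = 1; i < n; i++)
        if (nodes[i].active_clients < best->active_clients)
            best = &nodes[i];
    return best;
}

int main(void)
{
    struct supernode nodes[] = {
        { "10.0.0.1", 40 }, { "10.0.0.2", 12 }, { "10.0.0.3", 25 }
    };
    printf("send to %s\n", least_loaded(nodes, 3)->ip);
    return 0;
}
```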

And there's a whole lot of tweaks and parameters and algorithms
that need consideration to make this thing run smoothly. Efficient
routing is going to be one tough nut. And what to do if no supernodes
are found? (Quite probable in the beginning of the network :D.)

I might think of other stuff later. This stuff is fun. Ta-ta.
Posted on 2001-10-31 04:23:14 by f0dder
Thomas, a 'search commando' is somebody you don't want after
you - the right word (I guess) is 'search command'. Sorry, couldn't
help it.

LOL :grin:, 'commando' is the Dutch word for both the English words 'commando' and 'command'...

Thomas
:stupid:
Posted on 2001-10-31 05:16:38 by Thomas
In Danish as well ;)
Posted on 2001-10-31 05:22:53 by f0dder