I'm facing a new project and I'd like your opinions on how you would implement it.
The goal is to load a web page on a regular basis, say daily, strip out some information such as share quotations, and store that information in a database or an Excel file.
Has anybody done something like this before?


Posted on 2006-12-17 05:48:39 by Biterider
Kinda - I've used PHP to do all of this. My script performed the following steps:
- fetch webpage (naively via php socket code)
- parse particulars from html (php made this sooo easy)
- store particulars to file-based database on server
- perform database queries and responses generating dynamic html pages to display the results

The only problem with all this of course is that in order to execute the script, you had to visit a page on the server.. so I made a silly little program to periodically connect to the webserver and cause the script to be executed.
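Homer's pipeline (fetch, parse, store, query) could be sketched in Python as below. This is only a rough equivalent, not his actual PHP script: the price markup, regex, and table layout are made-up assumptions, and sqlite3 stands in for his file-based database.

```python
import re
import sqlite3
from urllib.request import urlopen

# Hypothetical markup for the price; a real quote page will differ.
QUOTE_RE = re.compile(r'<span class="price">([\d.]+)</span>')

def fetch_page(url):
    """Fetch the raw HTML of a page (Homer did this with raw PHP sockets)."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def parse_quote(html):
    """Pull the first price-looking value out of the HTML, or None."""
    m = QUOTE_RE.search(html)
    return float(m.group(1)) if m else None

def store_quote(conn, symbol, price):
    """Append one (symbol, price) row to the quotes table."""
    conn.execute("CREATE TABLE IF NOT EXISTS quotes (symbol TEXT, price REAL)")
    conn.execute("INSERT INTO quotes VALUES (?, ?)", (symbol, price))
    conn.commit()

if __name__ == "__main__":
    html = '<span class="price">37.25</span>'  # stand-in for a fetched page
    conn = sqlite3.connect(":memory:")
    store_quote(conn, "GE", parse_quote(html))
    print(conn.execute("SELECT symbol, price FROM quotes").fetchall())
```

For the scheduling part, a cron job (or Task Scheduler on Windows) hitting the script daily would replace Homer's "silly little program" that pokes the web server.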

It might not have been the most beautiful solution in the world, and really I should have used php/sql for the database, but the customer was happy, especially with the time it took to put it together :P

Posted on 2006-12-17 07:01:06 by Homer
Simple example of getting quotes for GE (General Electric) and MET (MetLife).
MessageBox displays quotes.
fasm syntax!



Posted on 2006-12-17 08:03:46 by farrier
<3 Regular Expressions <3
Posted on 2006-12-18 17:06:15 by f0dder
Homer, your suggestion seems to be the shortest way, but I'd have to learn a bit more PHP first.

farrier, thanks for your source. I have checked it and I'll probably use it.

f0dder, can you be a bit more explicit?



Posted on 2006-12-19 12:30:52 by Biterider
Regular Expressions are pretty useful when parsing stuff - or rather, extracting data easily, instead of having to actually *parse*. Often very useful when you need to mine web stuff for data :)
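As an illustration of f0dder's point, here is a minimal Python sketch: one expression extracts every (symbol, price) pair without walking the whole HTML tree. The markup here is invented for the example, not taken from any real quote page.

```python
import re

# Hypothetical fragment of a quote page; real markup will differ.
html = '''
<tr><td>GE</td><td class="last">37.25</td></tr>
<tr><td>MET</td><td class="last">58.10</td></tr>
'''

# Capture groups pull out the ticker symbol and the last price in one pass.
pattern = re.compile(r'<td>([A-Z]+)</td><td class="last">([\d.]+)</td>')
quotes = {sym: float(price) for sym, price in pattern.findall(html)}
print(quotes)  # {'GE': 37.25, 'MET': 58.1}
```

The trade-off is fragility: if the site changes its markup, the expression silently stops matching, so this suits quick data mining better than long-lived parsers.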
Posted on 2006-12-20 17:38:01 by f0dder