I know it's just Vista with a facelift and a workover, but that's exactly what attracts me.
That's what I'm saying: it's not a workover. Vista already had the workover.
Windows 7 has a few new things that Vista didn't have, but those are not things that solve Vista's original problems. Those problems were already solved.
I already wrote about it in a blog post in May of this year:
http://scalibq.spaces.live.com/blog/cns!663AD9A4F9CB0661!173.entry
On the May blog-post: Yeah, let's ignore the return of GDI acceleration. Win7 costs 3 Euro more than Vista, and for WinXP users like me it looks like a better deal. Many people like me have been itching to get their mittens on the updated Vista, and here comes Win7 at the same price :) .
On the May blog-post: Yeah, let's ignore the return of GDI acceleration.
Yes I think we can ignore it. It's nice that it's there, but I haven't actually noticed it (to be more exact: I didn't even know it was gone in Vista, until I read about it after I had already been using Vista for a while. And I didn't notice that it was back when I switched from Vista to Win7).
It's not a big deal.
Same with the better memory management with regard to backbuffers and all. It saves video memory, but that's not really an issue anymore either, now that 512 MB of video memory is standard on low-end cards and 1 GB on mainstream ones.
The problem has already solved itself, so to speak. I noticed a slight improvement with my 320 MB 8800GTS, but with my 1 GB 5770, it doesn't matter.
Those really aren't things that the average user will even notice, let alone be a reason for them to move to Windows 7.
Most of them will move to Windows 7 to get the Vista features.
Those ignorable features are ignorable on our high-end PCs, fortunately :D.
High end? My PC was mainstream when I bought it 3 years ago. It's low-end now.
I think the irony is that the only PCs that would benefit from GDI acceleration are the same PCs that won't meet the minimum requirements for Windows 7 in the first place.
Win7 also saves system memory, since it keeps (or tries to keep?) graphic elements solely in video memory, unlike Vista, which had everything in both system and video memory.
Iirc there were some benchmarks that showed that while most stuff ran faster (though probably not user-noticeable :P) with Win7's WDDM 1.1 drivers, a few corner cases ran slower than Vista's non-accelerated... let me guess: locking bitmaps and manipulating their contents?
Microsoft's ZoomIn (v3.20.01 from 1992 :P) runs like s*** - does anybody know of a zoom-in-window tool that runs properly? SysInternals' ZoomIt tool runs nicely, but that works by zooming the entire desktop (panning a screenshot, I suppose).
Scali: your PC might be "low end" from a gamer perspective, but I think it's skewed to call it low-end generally...
Iirc there were some benchmarks that showed that while most stuff ran faster (though probably not user-noticeable :P) with Win7's WDDM 1.1 drivers, a few corner cases ran slower than Vista's non-accelerated... let me guess: locking bitmaps and manipulating their contents?
Thing is, just because you can measure it, doesn't mean you can notice it in daily use.
You'd need a VERY GDI-heavy application. Otherwise even without acceleration it will update its entire screen in a few ms, faster than the user can notice.
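Just to illustrate what "VERY GDI-heavy" would have to mean, here's a minimal Win32 C sketch (purely illustrative - the burst size and the stock brush are arbitrary) that times a burst of FillRect calls; even without acceleration, a burst like this typically finishes in a handful of milliseconds:

// gdi_burst.c - time a burst of GDI fills (illustrative only)
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HDC hdc = GetDC(NULL);                       // screen DC
    RECT rc = { 0, 0, 256, 256 };
    HBRUSH brush = (HBRUSH)GetStockObject(GRAY_BRUSH);

    DWORD t0 = GetTickCount();
    for (int i = 0; i < 10000; i++)              // arbitrary burst size
        FillRect(hdc, &rc, brush);
    GdiFlush();                                  // force the batched calls to actually execute
    DWORD t1 = GetTickCount();

    printf("10000 FillRect calls took ~%lu ms\n", (unsigned long)(t1 - t0));
    ReleaseDC(NULL, hdc);
    return 0;
}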
Scali: your PC might be "low end" from a gamer perspective, but I think it's skewed to call it low-end generally...
I don't. I have an E6600 2.4 GHz processor... Recently I bought a Pentium Dual-Core E5200 2.5 GHz processor. That was one of the cheapest Intel processors on the market, costing only about 50 euros, and it definitely qualifies as 'low end'. The E5200 is about the same speed as my E6600, hence mine is low-end too.
Besides, my laptop is even slower, with its 1.5 GHz Core2 Duo and its onboard Intel graphics... But even there I never noticed GDI being slow. I couldn't even tell the difference with my 8800GTS in daily use.
That's something I'd call sorta low-end, at least around here :)
I wonder where GDI acceleration makes much of a difference (positive as well as negative), except for pathological cases like ZoomIt. Theoretically you could free some CPU power for other processing, but even on your 1.5GHz system it probably makes just about no difference - and on the other hand, the additional usage of the GPU could prevent it from entering lower-power state and thus draining laptop battery life.
I guess Win7's GDI acceleration is more about conserving system memory by not duplicating bitmap elements...
I agree - but you ignored a real-world example of something where it's extremely noticeable.
Of course - a case of rules and exceptions, you know.
I never denied that it's slower, so I'm not surprised that there are cases where the difference can be noticed.
It may be a 'real-world' example, but does it also fit into 'daily use' for the average user?
I mean, if your web browser (pretty GDI-heavy anyway, compared to most other stuff) were to run noticeably slower, yes, that would be something an average user would worry about. But that seems to go fine.
Low-end for a gamer, or us "Rich Kids". But there are a lot of people still stuck with much lower hardware, and not just in the less fortunate countries.
That's not the point though. My E6600 certainly doesn't represent the 'bottom line' as far as Vista/Win7 performance is concerned.
Thing is, if you were to buy a new PC today, even some of the cheapest stuff, like the Pentium E5200 I mentioned, isn't going to be slower than what I have. And those systems handle Vista/Win7 like a charm.
That's my point: GDI performance may be 'fixed' with Windows 7, but was it really a problem in the first place?
I've heard people complain about many things with Vista, but I can't recall any one of them ever mentioning anything related to GDI rendering speed.
As for the 'really poor', tough luck for them, but they're not the type of people who would be playing DX9+ games or using Vista/Win7 anyway. I don't think they belong within the scope of this discussion.
I wonder where GDI acceleration makes much of a difference (positive as well as negative), except for pathological cases like ZoomIt. Theoretically you could free some CPU power for other processing, but even on your 1.5GHz system it probably makes just about no difference - and on the other hand, the additional usage of the GPU could prevent it from entering lower-power state and thus draining laptop battery life.
I guess Win7's GDI acceleration is more about conserving system memory by not duplicating bitmap elements...
Well, there's two sides to the story.
1) There's the move to GPU-based rendering, eliminating the need for extra backbuffers...
2) GDI has received more granular locking, so that less time is spent waiting on GDI calls, and more parallelism is possible.
I'm not sure why they did it.
I don't think GPU acceleration is an issue for power saving though. GPUs are permanently in a low-power state in desktop mode; they don't need all that horsepower to render a desktop. I don't think CPU or GPU drawing will make a difference for the power mode the GPU needs to be in.
However, I do think a difference will be that the GPU requires a lot less time, and also less power, to do the same drawing operations as the CPU. Another thing is that the CPU can now do other tasks in parallel, whereas if you were to do e.g. a FillRect on the CPU, you'd max out your memory controller for a considerable time, stalling other applications.
Who knows... perhaps MS did it 'because they could'... Perhaps it really does help power consumption... or perhaps it's a different angle altogether... E.g. it may help Windows run better in a virtual environment, or with remote desktop, because it's easier to translate the drawing calls at the acceleration level.
At any rate, for regular desktop users it's something they'll probably never notice.
I was reading in my Video Demystified book (handbook for video repair guys) about MPEG streams.
They can contain a series of JPEG images, which suffer no inter-frame artefacts - that is to say, all frames come straight from the stream.
Or they can be "true MPEG", where there are two kinds of frames.
One is a pure data frame, as before.
The other is a guesstimation - the encoder generates an in-between frame based upon the two nearest frames.
It's not just an interpolation - chrominance and luminance and such are all taken into account - but the result is an array of "error values", which describe the difference between Frame A and the ideal intermediate frame.
So MPEGs (typically) produce frames, most of which come from the datastream, but some of which are generated from delta error expressions and the previous frame. These intermediate frames are visually incorrect (there are always artefacts), but they are good enough to serve for one frame between some real ones and fool the eye. This lets the decoder be slower and still acceptable, and I guess that was originally the point of it.
That book also contains some interesting stuff about JPEG artefacts that would probably interest you.
I was reading in my Video Demystified book (handbook for video repair guys) about MPEG streams.
They can contain a series of JPEG images, which suffer no inter-frame artefacts - that is to say, all frames come straight from the stream.
I think you are referring to a format known as Motion-JPEG, or MJPEG.
Yes, it's basically just a stream of JPEGs. Only the images are encoded.
With 'full' MPEG, you don't encode all images. You take 'keyframes' at a certain interval, and then encode the frames between them as differences, using a few heuristics. This gives you frames containing information known as motion vectors. The keyframes are basically stored as JPEG images (with a few minor changes... one of them is that a JPG contains its own optimized Huffman and quantization tables in the image, whereas MPEG has fixed tables for faster decoding; they can be pre-optimized in software or hardware. My JPG decoder builds an optimized Huffman decoder every time you open a file).
This gives you a lot more compression, but it isn't suitable for realtime purposes.
Firstly, you need to store the entire sequence of frames, which is a delay in itself. Secondly, you need to analyze this sequence of frames to determine all the motion. This is a rather brute-force process (the main reason why video encoding is still relatively slow on high-end PCs).
That's why digital cameras or TV capture cards and such will generally record in MJPEG format rather than real MPEG.
If you take these files and recode them into MPEG you can often get much better compression with little or no quality loss.
DVDs and digital broadcasts such as satellite or cable TV will always use 'real' MPEG.
This gives some side-effects that you've probably witnessed already:
1) 'Live' broadcasts lag 1-2 seconds behind the same broadcast on analog TV. Funny with football matches... You hear the neighbours watching analog TV cheering over a goal that hasn't been scored yet on your screen.
2) Switching channels and/or recovering from loss of signal always takes a while... the decoder cannot continue decoding until it has received a complete keyframe, which usually takes 1-1.5 seconds to come along.
3) When the image gets corrupted, you can clearly see the effect of the motion vector data... The garbled data can be seen moving across the screen in a very linear fashion.
For that reason it's also not really possible to play a DVD backwards frame by frame. The data can only be decoded forward, and most players don't have a buffer to hold all the frames, and then play backwards through them... Instead they will just skip from keyframe to keyframe in backward mode.
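As a very rough sketch of the decode side (plain C, grayscale, everything hypothetical - real MPEG works on 8x8 DCT blocks with motion vectors, not per-pixel deltas): keyframes replace the whole picture, delta frames only correct the previous one, which is also why you can't simply step backwards through the stream.

/* Toy "keyframe + delta" reconstruction - NOT real MPEG, just the general idea. */
#include <string.h>

#define W 352
#define H 288

typedef struct {
    int is_keyframe;
    unsigned char full[W * H];    /* complete picture, used when is_keyframe != 0  */
    signed char   delta[W * H];   /* per-pixel correction against the previous one */
} toy_frame;

/* Reconstruct the next displayable picture into 'screen'. */
void decode_frame(const toy_frame *f, unsigned char *screen)
{
    if (f->is_keyframe) {
        memcpy(screen, f->full, W * H);          /* detail "snaps back" here */
    } else {
        for (int i = 0; i < W * H; i++) {        /* errors accumulate until the next keyframe */
            int v = screen[i] + f->delta[i];
            screen[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
    }
}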
"most of"? I thought most frames in MPEGs (and other formats) are delta frames, with a few keyframes in "optimal" resolution. I think I once came upon a video player that showed "direction vectors" for mpegs - was quite interesting to see, whatever it was :)
"most of"? I thought most frames in MPEGs (and other formats) are delta frames, with a few keyframes in "optimal" resolution. I think I once came upon a video player that showed "direction vectors" for mpegs - was quite interesting to see, whatever it was :)
That is correct. As I mentioned earlier, keyframes are emitted every 1-1.5 seconds on average... depending on quality settings of course; higher quality will have more keyframes, lower quality will have fewer keyframes... especially early DivX encodings used very few keyframes to get good compression rates... but you would notice the accumulation of error.
It's funny with certain things such as a football pitch... The grass will lose its detail over time, and the detail will 'snap' back every time a keyframe is received.
But if we were to take 1-second intervals as an example... Say you have 24 fps (the standard movie rate); that means you have one keyframe for every 24 frames, so only about 4% of the frames are stored as a whole, and the rest are all reconstructed from the motion info.
Here's the main two players involved in the new IOCP asynch file io stuff:
To get started, you call NetComEngine.OpenFile, passing it your pathname, desired access mode (R/W and sharing), and a pointer to an eventsink class you derived from "NetComFileEvents".
This will return to you a new NetComFile object, which you use to issue file operations.
These will be broken up into small "iojob requests" (1500 bytes or less).
As each io request completes, calls are made to the eventsink you supplied.
Each io request is marked with a 64-bit offset, so out-of-order processing is not too problematic.
So, assuming you managed to get yourself a pointer to a NetComFile object, you might call the NetComFile.ReadFile method.
This generates a bunch of IOJobs and returns without waiting for the data to be received - as chunks of data are read, we will receive asynch notifications via our callback object, where Your Code decides what to do with it.
When the data has been exhausted, our eventsink will receive an "OnEOF" notification... end of file.
When we receive that, we can choose to destroy the NetComFile object if we want... it's not done for us.
When the NetComFile object is Destroyed (by You), our eventsink will receive one final notification, alerting the Application that this pointer is about to become invalid.
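For anyone who hasn't used overlapped file IO before: each iojob boils down to a ReadFile call whose OVERLAPPED structure carries that 64-bit offset. Here's a plain Win32 C sketch of the idea (the names and error handling are illustrative, not the actual NetCom code):

/* Sketch: queue one overlapped read for a chunk at a given file offset. */
#include <windows.h>

#define CHUNK 1500                  /* the jobs are 1500 bytes or less */

typedef struct {
    OVERLAPPED ovl;                 /* carries the 64-bit offset of this chunk */
    char       buffer[CHUNK];
} io_job;

BOOL queue_read(HANDLE hFile, io_job *job, ULONGLONG offset)
{
    ZeroMemory(&job->ovl, sizeof(job->ovl));
    job->ovl.Offset     = (DWORD)(offset & 0xFFFFFFFF);
    job->ovl.OffsetHigh = (DWORD)(offset >> 32);

    /* Returns immediately; completion is reported through the IOCP that
       the file handle was associated with via CreateIoCompletionPort.   */
    if (!ReadFile(hFile, job->buffer, CHUNK, NULL, &job->ovl)) {
        if (GetLastError() != ERROR_IO_PENDING)
            return FALSE;           /* a real error */
    }
    return TRUE;                    /* queued, or completed synchronously */
}

The two object definitions look like this: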
Object NetComFile, NetComFileID, DiskStream
;RedefineMethod Done
RedefineMethod Init, Pointer, Pointer, dword, dword, Pointer
RedefineMethod Done
StaticMethod OnRead, PIOJOB ;Some file data is ready
StaticMethod OnWrite, PIOJOB
StaticMethod DoRead, PIOJOB
StaticMethod DoWrite, PIOJOB
StaticMethod SetFilePointer,dword,dword
StaticMethod Get_Progress_Percent
StaticMethod ReadFile
RedefineMethod BinRead, dword ;Initiate asynch Read(s)
RedefineMethod BinWrite, Pointer,dword ;Initiate asynch Write(s)
DefineVariable qOffset, QuadWord,{<>} ;FilePointer for the next Job we queue
DefineVariable qFileSize, QuadWord,{<>} ;Size of the file
DefineVariable qProgress, QuadWord,{<>} ;Total Amount of data transferred
DefineVariable pEventSink,Pointer,NULL ;-> NetComFileEvent object
ObjectEnd
;FileEvents callback
Object NetComFileEvents,3245345,Primer
DynamicAbstract OnRead,Pointer ;-> IOJob
DynamicAbstract OnWrite,Pointer ;-> IOJob
DynamicAbstract OnEOF,Pointer ;-> NetComFile
DynamicAbstract OnClose,Pointer ;-> NetComFile
ObjectEnd
NetComEngine/NetComFile allow you to perform disk operations "in the background", alerting you when IO operations have completed.
The NetComFile class represents an Asynchronous Communication Channel for a File supporting full read/write capability; it can act as a FileStream, and it supports random access, although it is not 100% optimized for that.
Asynchronous IO is managed by (some additions to) the NetComEngine IOCP framework, and file events are received via a user-defined eventsink derived from an abstract class - this means you control what happens when a chunk of data is ready, and you control it on a per-file basis.
Here's the code for NetComFile.
I'll consider this beta until it's been hardness-tested, but all seems well.
Oh - and expect this code to change!! I am quite likely to make changes in fact.
Consider this a sneak preview or something, but I'll post a bunch of working code anyway.
Next posts will contain the updates for NetComEngine.
About random access, there are two catches.
One is that you need to call NetComFile.SetFilePointer whenever the IO offset has moved (unless the next offset is linear).
And the other is that I haven't used the appropriate flag in my CreateFile call, so the filesystem isn't expecting it.
But it'll work :)
A future update might use a switch for linear/random access modes.
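For reference, the flag in question is presumably just the caching hint you can OR into the CreateFile flags. A hypothetical plain-C version of the open call might look like this (this is not the actual NetComFile.Init code, which only passes FILE_FLAG_OVERLAPPED):

#include <windows.h>

/* Open a file for overlapped IO, hinting random access to the cache manager. */
HANDLE open_for_async_random_access(const char *path)
{
    return CreateFileA(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ,
                       NULL,                                   /* default security  */
                       OPEN_ALWAYS,                            /* create if missing */
                       FILE_FLAG_OVERLAPPED | FILE_FLAG_RANDOM_ACCESS,
                       NULL);
}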
; ==================================================================================================
; Title: NetComFile.inc
; Author: Homer / G. Friedrich
; Version: 1.0.0
; Purpose: Asynchronous File IO via NetCom's IOCP Engine
; Notes: Version 1.0.0, January 2010
; - First release.
; - Read/Write Completion Methods are Mutexed (ObjLock) within NetComEngine.WorkerThread
; - This protects internal Counters from being corrupted by simultaneous Completions.
; - IT DOES NOT GUARANTEE ORDER OF COMPLETION
; - However, we tagged each IOJob with a 64-bit Offset ("FilePosition") :)
; -
; - GUARANTEEING ORDER OF COMPLETION
; - Your NetComFileEvents eventsink object must implement its own FilePosition counter,
; - and Collect any IOJobs whose 64-bit Offset doesn't match the current FilePosition :)
; - Collected Jobs will have to be dealt with in a Delayed way.
; - NOTE that EventSink Notification calls are already within an ObjectLock !!
; - Your EventSink methods (and their Counters and other data) are effectively threadsafe !! Good stuff huh
; ==================================================================================================
PIOJOB typedef ptr IOJOB
.code
if IMPLEMENT
;Method: NetComFile.Init
;Purpose: Attempt to Open a File for Asynch IO
;Arguments: pOwner -> Owner Object
; pstrPathName -> full pathname of file
; dShareMode = any combo of FILE_SHARE_READ, FILE_SHARE_WRITE
; dAccessMode = any combo of GENERIC_READ, GENERIC_WRITE
; pEventSink -> EventSink interface derived from NetComFileEvent
;Returns: NULL = SUCCESS
; ErrorCode = Failed to open file (need WRITE access?)
;Remarks: Failure can be due to bad SHARE access ,etc
Method NetComFile.Init,uses esi,pOwner,pstrPathName,dShareMode,dAccessMode,pEventSink
SetObject esi
m2m .pEventSink,pEventSink,edx
DbgHex pEventSink
;If the user requests Write access, we will use OPEN_ALWAYS
;This allows new files to be created.
mov edx,dAccessMode
and edx,GENERIC_WRITE
.if edx!=0
mov edx,OPEN_ALWAYS
.else
;If user only wants READ access, the file should exist!
mov edx,OPEN_EXISTING
.endif
ACall Init,pOwner,pstrPathName,dShareMode,dAccessMode,edx,FILE_FLAG_OVERLAPPED
.if eax!=STM_OPENERROR ;then eax=hFile, i think..
invoke GetLastError
.if eax== ERROR_ALREADY_EXISTS
;This isnt really an error
DbgText "opened EXISTING file"
.elseif eax!=0
;Return unhandled error
DbgDec eax,"returning error code"
ExitMethod
.endif
;Bind the FileHandle to the IOCP
mov edx,pOwner
invoke CreateIoCompletionPort,.hFile,.NetComEngine.hIOCP,0,0
.if eax==0
;Possibly not an error if a given filehandle is already associated
DbgWarning "NetComFile.Init Error - failed to bind FileHandle to IOCP"
invoke GetLastError
DbgDec eax,"Reason"
.if eax==ERROR_INVALID_HANDLE
DbgDec .hFile,"is an invalid handle, apparently.."
.endif
int 3
.else
;Initialize FileSize
invoke GetFileSize, .hFile, addr .qFileSize.HiDWord
mov .qFileSize.LoDWord,eax
OCall esi.SetFilePointer,0,0
;Return success
xor eax,eax
.endif
.endif
MethodEnd
;Method: NetComFile.Done
;Purpose: Destructor method
; Notify application that the object is redundant
; Release resources
Method NetComFile.Done,uses esi
SetObject esi
DbgWarning "NetComFile.Done"
mov edx,.pEventSink
.if edx!=0
;Warn the application that this NetComFile is about to be Destroyed
OCall edx::NetComFileEvents.OnClose,esi
mov .pEventSink,0
.endif
ACall Done
MethodEnd
;Method: NetComFile.OnRead
;Purpose: Sinks 'READ io completion' events
Method NetComFile.OnRead, uses esi, pIOJob:PIOJOB
LOCAL q:QuadWord
SetObject esi
;A FileRead request has completed.
;Increment the progress counter
mov eax,pIOJob
DbgDec .qProgress.LoDWord
DbgDec .IOJOB.dBytesUsed
qdadd .qProgress,.IOJOB.dBytesUsed
;Send Progress Notification
.if .pEventSink!=0
OCall .pEventSink::NetComFileEvents.OnRead,pIOJob
.endif
;Check for EOF
qmov q,.qProgress
qqsub q,.qFileSize
.if q.LoDWord==0 && q.HiDWord==0
OCall .pEventSink::NetComFileEvents.OnEOF,esi
.endif
MethodEnd
;Method: NetComFile.OnWrite
;Purpose: Sinks 'WRITE io completion' events
Method NetComFile.OnWrite,uses esi, pIOJob:PIOJOB
SetObject esi
;Update FileSize - in case of access mid-file
invoke GetFileSize, .hFile, addr .qFileSize.HiDWord
mov .qFileSize.LoDWord,eax
;Increment the progress counter
qdadd .qProgress,.IOJOB.dBytesUsed
;Send Progress Notification
.if .pEventSink!=0
OCall .pEventSink::NetComFileEvents.OnWrite,pIOJob
.endif
MethodEnd
;Method: NetComFile.DoRead
;Purpose: Perform call to initiate Overlapped FileRead request
;Returns: TRUE/FALSE
;Remarks: Assumes the IOJob has been initialized with valid values.
Method NetComFile.DoRead, uses esi, pIOJob:PIOJOB
LOCAL q:QuadWord
;Set the File Offset for this FileRead request
SetObject esi
mov edx,pIOJob
.if .IOJOB.dBytesUsed!=0
DbgDec .IOJOB.dBytesUsed,"requesting read"
;Queue the Read request
invoke ReadFile,.hFile,.IOJOB.WSABuf.pBuffer,.IOJOB.dBytesUsed,addr .IOJOB.dBytesUsed,edx
.if eax != TRUE ;Did read IO complete synchronously?
invoke GetLastError ;No, check if it is a pending state
.if eax != ERROR_IO_PENDING
;OCall esi.OnError, pIOJob
DbgDec eax,"Unhandled Error in NetComFile.DoRead"
int 3
return FALSE
.endif
.endif
.else
;FileStream is exhausted - End Of File
mov edx,.pEventSink
.if edx!=0
OCall edx::NetComFileEvents.OnEOF,esi
.endif
mov eax,FALSE
.endif
MethodEnd
;Method: NetComFile.DoWrite
;Purpose: Perform call to initiate Overlapped FileWrite request
;Returns: TRUE/FALSE
;Remarks: Assumes the IOJob has been initialized with valid values.
Method NetComFile.DoWrite, uses esi, pIOJob:PIOJOB
SetObject esi
mov edx,pIOJob
.if .IOJOB.dBytesUsed!=0
invoke WriteFile,.hFile,.IOJOB.WSABuf.pBuffer,.IOJOB.dBytesUsed,addr .IOJOB.dBytesUsed,edx
.if eax != TRUE ;Did IO complete synchronously?
invoke GetLastError ;No, check if it is a pending state
.if eax != ERROR_IO_PENDING
;OCall esi.OnError, pIOJob
DbgDec eax,"Unhandled error in NetComFile.DoWrite"
int 3
return FALSE
.endif
.endif
.else
DbgWarning "NetComFile.DoWrite: dBytesUsed=0"
mov eax,FALSE
.endif
MethodEnd
;Method: NetComFile.BinWrite
;Purpose: Write some data to the file
;Args: pData = ptr to user's data to be Written
; dLength = #bytes to Write
;Remarks: Be sure to call SetFilePointer at least once prior to this call!
; Its safe to destroy the input buffer as soon as your call completes.
; The data will be written to the file in the background.
Method NetComFile.BinWrite, uses esi,pData,dLength
SetObject esi
.while dLength!=0
;Allocate an IOJob
mov edx,.pOwner
xOCall .NetComEngine.IOJobs::NetComIOJobPool.NewItem, esi, FILE_WRITE
.if eax != NULL
;Determine how much data to put into this IOJob
mov ecx,eax
mov eax,.IOJOB.WSABuf.dLength
.if eax>dLength
mov eax,dLength
.endif
mov .IOJOB.dBytesUsed,eax
sub dLength,eax
;Copy the IO Offset into the Job
m2m .IOJOB.Ovl.OffsetHigh,.qOffset.HiDWord,eax
m2m .IOJOB.Ovl.an_Offset, .qOffset.LoDWord,eax
;Copy (one buffer or less) bytes of data into the Job
push ecx
push eax
invoke RtlMoveMemory,.IOJOB.WSABuf.pBuffer,pData,eax
pop eax
pop ecx
add pData,eax
;Queue the Write request
push eax
OCall esi.DoWrite, ecx
pop ecx
;Update the IO Offset
qdadd .qOffset,ecx
.endif
.endw
MethodEnd
;Method: NetComFile.BinRead
;Purpose: Read some data from the file
;Args: dLength = #bytes to read
;Remarks: Be sure to call SetFilePointer at least once prior to this call!
; Notifications will be received the usual way
Method NetComFile.BinRead, uses esi,dLength
SetObject esi
.while dLength!=0
;Allocate an IOJob
mov edx,.pOwner
OCall .NetComEngine.IOJobs::NetComIOJobPool.NewItem, esi, FILE_READ
.if eax != NULL
;Determine how much data to put into this IOJob
mov ecx,eax
mov eax,.IOJOB.WSABuf.dLength
.if eax>dLength
mov eax,dLength
.endif
mov .IOJOB.dBytesUsed,eax
sub dLength,eax
push eax
;Copy the IO Offset into the Job
m2m .IOJOB.Ovl.OffsetHigh,.qOffset.HiDWord,eax
m2m .IOJOB.Ovl.an_Offset, .qOffset.LoDWord,eax
;Queue the Read request
OCall esi.DoRead, ecx
;Update the IO Offset
pop eax
qdadd .qOffset,eax
.endif
.endw
MethodEnd
;Method: NetComFile.ReadFile
;Purpose: Read the entire contents of the file
;Remarks: This method queues asynchronous requests for chunks of file,
; then returns immediately... Event notifications will be sent
; to the User's EventSink (NetComFileEvent-derived interface)
Method NetComFile.ReadFile, uses esi
LOCAL pIOJob:PIOJOB
LOCAL q:QuadWord
SetObject esi
qmov q,.qFileSize
.repeat
.if q.HiDWord!=0
;The filesize is larger than 32 bits !!!
;Make the biggest READ call we possibly can with 32 bits
OCall esi.BinRead ,-1
qdsub q,-1
.else
.break .if q.LoDWord==0
;The filesize is 32 bits
;Make a READ call for that amount
OCall esi.BinRead, .qFileSize.LoDWord
qdsub q,.qFileSize.LoDWord
.endif
.until 0
MethodEnd
;Method: NetComFile.SetFilePointer
;Purpose: Sets the internal FilePointer offset (used when queuing IO requests)
;Args: dHigh,dLow = 64-bit file offset
;Remarks: With care, you can use this to random-access a file.
Method NetComFile.SetFilePointer,uses esi,dHigh:dword,dLow:dword
SetObject esi
m2m .qOffset.HiDWord,dHigh,edx
m2m .qOffset.LoDWord,dLow,edx
MethodEnd
;Method: NetComFile.Get_Progress_Percent
;Purpose: Convert Progress QuadWord into Percentage Dword
;Returns: File Progress, integer Percentage
Method NetComFile.Get_Progress_Percent,uses esi ebx ecx
LOCAL q:QuadWord
SetObject esi
mov eax,.qProgress.LoDWord
mov edx,.qProgress.HiDWord
mov ebx,100
qdmul
mov ebx,.qFileSize.LoDWord
mov ecx,.qFileSize.HiDWord
qqdiv
MethodEnd
endif
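The "GUARANTEEING ORDER OF COMPLETION" note in the header boils down to something like this on the eventsink side - a rough C sketch with hypothetical names (a real implementation would also have to keep the IOJob, or a copy of its buffer, alive until it is consumed):

/* Reorder completed chunks by their 64-bit offset before consuming them. */
#include <stdio.h>

#define MAX_PENDING 64

typedef struct {
    unsigned long long offset;      /* where in the file this chunk belongs */
    const void        *data;
    unsigned long      length;
    int                used;
} pending_chunk;

static unsigned long long file_position = 0;         /* next offset we expect */
static pending_chunk      pending[MAX_PENDING];

static void consume(const void *data, unsigned long length)   /* your in-order handler */
{
    (void)data;
    printf("consumed %lu bytes in order\n", length);
}

void on_read_complete(unsigned long long offset, const void *data, unsigned long length)
{
    if (offset != file_position) {                    /* arrived early: stash it */
        for (int i = 0; i < MAX_PENDING; i++) {
            if (!pending[i].used) {
                pending[i].offset = offset;
                pending[i].data   = data;
                pending[i].length = length;
                pending[i].used   = 1;
                return;
            }
        }
        return;                                       /* stash full - sketch only */
    }

    consume(data, length);                            /* in order: handle it now */
    file_position += length;

    /* Drain any stashed chunks that have become next in line. */
    for (int drained = 1; drained; ) {
        drained = 0;
        for (int i = 0; i < MAX_PENDING; i++) {
            if (pending[i].used && pending[i].offset == file_position) {
                consume(pending[i].data, pending[i].length);
                file_position += pending[i].length;
                pending[i].used = 0;
                drained = 1;
            }
        }
    }
}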
The changes to NetComEngine are mostly inside the Worker Thread:
NetComFile uses NetComEngine's internal IOJob Pool, which was designed for Socket IO... In many ways, NetComFile resembles NetComConnection. One has a file handle, the other has a socket handle.
But they are not the same.
Unlike NetComConnections, NetComEngine does not manage instances of NetComFile, or their associated NetComFileEvents-derived classes. Other than the IOJobs you'll be working with, Your Application is in total control of the File stuff.
Previously, NetComEngine supported two kinds of message containers - IOJOB and IOMSG.
And it still does.
There are a couple of new IOJOB operation codes for FileRead and FileWrite.
The Worker thread examines the completed message container.
If it's an IOJOB, it looks further to see whether it's a File Operation or a Socket Operation.
Then it handles these separately, using either GetOverlappedResult or WSAGetOverlappedResult, but essentially using the same logic otherwise.
I decided to separate these cleanly, instead of doing some kind of dirty asm jump into a common handler, because the error handler needs to call either GetLastError, or WSAGetLastError :P
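For readers who haven't written an IOCP worker before, the dequeue-and-dispatch pattern the Worker implements looks roughly like this in plain Win32 C (heavily simplified - the is_file_job/on_*_done functions are hypothetical stand-ins for the sdOperation test and the OnRead/OnWrite/OnError dispatch, and there is no pool or connection bookkeeping here):

/* Sketch of an IOCP worker loop that separates file IO from socket IO. */
#include <winsock2.h>              /* WSAGetLastError */
#include <windows.h>
#include <stdio.h>

static int  is_file_job(OVERLAPPED *ovl)           { (void)ovl; return 1; }   /* stub */
static void on_file_done(OVERLAPPED *ovl, DWORD n) { (void)ovl; printf("file io: %lu bytes\n", n); }
static void on_sock_done(OVERLAPPED *ovl, DWORD n) { (void)ovl; printf("socket io: %lu bytes\n", n); }

void worker_loop(HANDLE hIocp)
{
    for (;;) {
        DWORD       bytes = 0;
        ULONG_PTR   key   = 0;
        OVERLAPPED *ovl   = NULL;

        BOOL ok = GetQueuedCompletionStatus(hIocp, &bytes, &key, &ovl, 500);
        if (ovl == NULL)
            continue;                               /* timeout: nothing was dequeued */

        if (is_file_job(ovl)) {                     /* file jobs report errors via GetLastError */
            if (ok) on_file_done(ovl, bytes);
            else    printf("file error %lu\n", GetLastError());
        } else {                                    /* socket jobs use WSAGetLastError instead */
            if (ok) on_sock_done(ovl, bytes);
            else    printf("socket error %d\n", WSAGetLastError());
        }
    }
}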
; ---------------------------------------------------------------------------------------------------
; Method: NetComEngine.Worker
; Purpose: Here is the HEART AND SOUL of the NetEngine.
; This thread is responsible for waiting on IOCP completion notifications and passing
; them to the appropriate handler.
; There can be several of these Worker threads operating asynchronously.
; Arguments: None.
; Return: Nothing.
Method NetComEngine.Worker, uses ebx edi esi
local pIOMsg:PIOMSG, dCompletionKey:dword, dBytes:dword, dFlags:dword
SetObject esi
.repeat
;The GetQueuedCompletionStatus api call will (if successful) return two useful pieces of
;information to us. One is the "completion key" of the completed io, the other is a ptr
;to the IOJob which represents the io operation which (we hope) is completed.
invoke GetQueuedCompletionStatus, .hIOCP, addr dBytes,
addr dCompletionKey,
addr pIOMsg, 500
mov ebx, pIOMsg
.if ebx != NULL
;Something got Dequeued from the IOCP...
mov edi, .IOMSG.pConnection
.if .IOMSG.sdOperation > 0 ;Job or message?
;IO Job
;Determine whether the operation completed successfully or not
.if .IOMSG.sdOperation<5
;socket io
invoke WSAGetOverlappedResult, .NetComConnection.hSocket, ebx, \
addr .IOJOB.dBytesUsed, \
FALSE, addr dFlags
.if eax != FALSE
.switch .IOJOB.sdOperation
.case SOCKET_READ
mov eax, dBytes
lock add .dBytesIn, eax
OCall edi::NetComConnection.OnRead, ebx
.case SOCKET_WRITE
mov eax, dBytes
lock add .dBytesOut, eax
OCall edi::NetComConnection.OnWrite, ebx
.case SOCKET_ACCEPT
mov eax, dBytes
lock add .dBytesIn, eax
OCall edi::NetComConnection.OnAccept, ebx
.case SOCKET_CONNECT
mov eax, dBytes
lock add .dBytesOut, eax
OCall edi::NetComConnection.OnConnect, ebx
.default
DbgWarning "UNKNOWN IOJOB OPERATION IDENTIFIER"
OCall edi::NetComConnection.QueueClose
.endsw
.else
DbgDec $invoke(GetLastError),"WSAGetOverlappedResult failed"
.ifBitSet .NetComConnection.dFlags, NCC_ABORT
.if .IOJOB.sdOperation == SOCKET_READ
;If we have a read operation, we have to remove it from the read SDLL
LockObjectAccess .NetComConnection
lea eax, .IOJOB.pNextItem
SDLL_Remove eax, ecx, edx
UnlockObjectAccess .NetComConnection
.endif
xOCall .IOJobs::NetComIOJobPool.FreeItem, ebx
.else
invoke WSAGetLastError ;This error is thread specific
.continue .if eax == ERROR_IO_INCOMPLETE
OCall edi::NetComConnection.OnError, ebx, eax
.endif
.endif
.else
;file io
invoke GetOverlappedResult,.NetComFile.hFile,ebx,addr .IOJOB.dBytesUsed,FALSE
.if eax != FALSE
.switch .IOJOB.sdOperation
.case FILE_READ
xOCall edi::NetComFile.OnRead, ebx
xOCall .IOJobs::NetComIOJobPool.FreeItem, ebx
.continue ;<-- important - skip all remaining code to end of loop
.case FILE_WRITE
xOCall edi::NetComFile.OnWrite, ebx
xOCall .IOJobs::NetComIOJobPool.FreeItem, ebx
.continue ;<-- important - skip all remaining code to end of loop
.default
DbgWarning "UNKNOWN FILE IOJOB OPERATION IDENTIFIER"
xOCall .IOJobs::NetComIOJobPool.FreeItem, ebx
.endsw
.else
DbgDec $invoke(GetLastError),"GetOverlappedResult failed"
xOCall .IOJobs::NetComIOJobPool.FreeItem, ebx
.endif
.endif
.else
int 3
;IO Message
.switch .IOMSG.sdOperation
.case MESSAGE_DISCONNECT
OCall edi::NetComConnection.OnDisconnect, ebx
.case MESSAGE_CLOSE
OCall edi::NetComConnection.OnClose, ebx
.case MESSAGE_TIMEOUT
OCall edi::NetComConnection.OnTimeout, ebx
OCall edi::NetComConnection.QueueClose
.default
DbgWarning "UNKNOWN IOMSG OPERATION IDENTIFIER"
OCall edi::NetComConnection.QueueClose
.endsw
.endif
lock dec .NetComConnection.dPendingIORequests
; DbgDec .NetComConnection.dPendingIORequests
;Check if the NetComConnection that is marked to close has no pending IOJobs.
.ifBitSet .NetComConnection.dFlags, NCC_CLOSE
.if .NetComConnection.dPendingIORequests == 0
DbgWarning "Closing NetComConnection"
;In this case dispose and recycle the NetComConnection.
push .NetComConnection.pOwner
OCall edi::NetComConnection.Done
pop ecx
xOCall ecx::NetComConnectionPool.FreeItem, edi
.endif
.endif
.endif
.until (.dShuttingDown == SDN_CLOSE_WORKERS && .IOJobs.dCount == 0) || \
.dShuttingDown == SDN_QUIT_WORKERS ;Abortive close
DbgText "NetComEngine worker death"
MethodEnd
NetComEngine has one new Method, which is used to get our hands on a NetComFile object:
; ---------------------------------------------------------------------------------------------------
; Method: NetComEngine.OpenFile
; Purpose: Attempt to open new/existing file for asynch io
; Arguments: pstrPathName -> Pathname of file
; dShareMode = any combo of FILE_SHARE_READ, FILE_SHARE_WRITE
; dAccessMode = any combo of GENERIC_READ, GENERIC_WRITE
; pEventSink -> EventSink derived from NetComFileEvents
; Return: NULL (failed) or pNetComFile
Method NetComEngine.OpenFile,uses esi,pstrPathName,dShareMode,dAccessMode,pEventSink
LOCAL file
SetObject esi
mov file,$New(NetComFile)
;Initialize the NetComFile object, setting Parent = this (NetComEngine)
OCall eax::NetComFile.Init,esi,pstrPathName,dShareMode,dAccessMode,pEventSink
DbgDec eax
.if eax==0
DbgText "Opened file"
return file
.else
;Failed to initialize NetComFile object
DbgWarning "Failed to open file"
DbgDec eax,"Error code"
Destroy file
xor eax,eax
.endif
MethodEnd
Next post will contain the update for NetCom's object definitions.
Here's the changes to NetCom's objects:
; ==================================================================================================
; File stuff
Object NetComFile, NetComFileID, DiskStream
RedefineMethod Init, Pointer, Pointer, dword, dword, Pointer
RedefineMethod Done
StaticMethod OnRead, PIOJOB ;Some file data is ready
StaticMethod OnWrite, PIOJOB
StaticMethod DoRead, PIOJOB
StaticMethod DoWrite, PIOJOB
StaticMethod SetFilePointer,dword,dword
StaticMethod Get_Progress_Percent
StaticMethod ReadFile
RedefineMethod BinRead, dword ;Initiate asynch Read(s)
RedefineMethod BinWrite, Pointer,dword ;Initiate asynch Write(s)
DefineVariable ObjLock, ObjectLock, {} ;Don't move this member
DefineVariable qOffset, QuadWord,{<>} ;FilePointer for the next Job we queue
DefineVariable qFileSize, QuadWord,{<>} ;Size of the file
DefineVariable qProgress, QuadWord,{<>} ;Total Amount of data transferred
DefineVariable pEventSink,Pointer,NULL ;-> NetComFileEvent object
ObjectEnd
;FileEvents callback
Object NetComFileEvents,3245345,Primer
DynamicAbstract OnRead,Pointer ;-> IOJob
DynamicAbstract OnWrite,Pointer ;-> IOJob
DynamicAbstract OnEOF,Pointer ;-> NetComFile
DynamicAbstract OnClose,Pointer ;-> NetComFile
ObjectEnd
; ==================================================================================================
;Note: Connections are tracked in a linked list, since they may come from an attack! The supervisor
; thread should analyse them and eventually close the connections and add the IP addresses to
; the black list.
Object NetComEngine, NetComEngineID, Primer
VirtualMethod CloseConnections
VirtualMethod ConnectTo, Pointer, dword, dword
RedefineMethod Done
VirtualMethod GetLogicalCpuCount
RedefineMethod Init, Pointer, dword, dword, dword, dword
VirtualMethod Listen, Pointer, dword ;pProtocol, dListenPort
VirtualMethod QueueAcceptor, PLISTENER
VirtualMethod RawConnection, Pointer, dword ;pProtocol, dSocketType
VirtualMethod OpenFile, Pointer,dword,dword,Pointer
DefineVariable hIOCP, Handle, 0
DefineVariable dShuttingDown, dword, 0
DefineVariable dAcceptors, dword, 0
DefineVariable hSupervisor, Handle, 0
DefineVariable LocalHost, sockaddr_in, {}
DefineVariable ConnectionChain, SDLL_SENTINEL, {NULL, NULL}
DefineVariable dBytesIn, dword, 0
DefineVariable dBytesOut, dword, 0
DefineVariable dRateIn, dword, 0
DefineVariable dRateOut, dword, 0
Embed Connections, NetComConnectionPool ;Pool of NetComConnections
Embed Listeners, DataCollection ;DataCollection of listening sockets
Embed IOJobs, NetComIOJobPool ;IOJob Pool
Embed IOMessages, NetComIOMsgPool ;IOMsg Pool
Embed Workers, DwordCollection ;Collection of Worker thread handles
ObjectEnd
Now, you'll need to include the QuadWord.inc macros, but otherwise, you should be able to still build and use the existing NetComEngine demos with these updates in place.
And you can start to experiment with NetComFile !!
It's a pity we don't define some symbol in our macro files, so I could:
ifndef __QuadWord__Macros
%include ...
I can do it with any Object Class by name, but not with a set of macros. Hum.
Eg
;Late-Load this Object if necessary
ifndef DiskStream
LoadObjects DiskStream
endif
is perfectly legal.
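For comparison, it's exactly the guard pattern C headers use - if the macro file defined a symbol for itself, the include could be made conditional (guard name is hypothetical):

/* QuadWord-style macro header with an include guard - the C version of the idea. */
#ifndef QUADWORD_MACROS_INCLUDED
#define QUADWORD_MACROS_INCLUDED

/* ... macro definitions would live here ... */

#endif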
Another month has passed, let's assess the damage: http://store.steampowered.com/hwsurvey
Windows XP 32-bit down to 42.15% now, with only 0.63% being x64.
Windows 7 is at 28.53%, 19.50% of which is x64(!).
Vista is still going down as well, roughly 28% now, 9% of which is x64.
In other words, about 57% of all Windows users use a DirectX 10/11-capable OS, and about 30% of all Windows users use an x64 OS.
Let's put a positive and predictable spin on this: 100% of Windows users are using Windows :P