I have some ideas for merging and extending both the ideas and the code provided by Alan Baylis and Nathan Whitaker.

Alan produced a really nice demo a while back that used the Camera View Frustum to determine whether any Portals were visible and, if so, drew their contents onscreen. More importantly, the demo clearly demonstrated exactly what a Leaf node of a BSP Tree represents: a chunk of space, bounded by Walls and Portals. It also demonstrated what a Portal is, by drawing them as transparent partitioning walls that separate chunks of space and that we are allowed to travel through. He used Nathan's algorithm for Portal discovery.

With regard to improving on Nathan's algorithm:

Nathan's algorithm requires that the BSP Tree has already been created, and is based upon the intensive fragmentation of what is initially a set of large rectangles, created so that they lie upon the splitting planes of our BSP Tree (every non-leaf node has one of these).

I found that Nathan's algorithm generates a LOT of portal fragment triangles, the majority of which are later found to be redundant and discarded - but it generates them all BEFORE it culls out the dead wood, and I say that's wrong.

We can do this on a per-rectangle basis: create a rectangle, chop it up (with all the 'other' split planes), send the resulting fragments down the tree, cull out the dead wood / find the 'real' portal fragments, and only then move on to the remaining rectangles.

This will be much less memory-intensive, and will speed up the culling phase as well.
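To make the per-rectangle idea concrete, here's a rough sketch (Python, and in 2D for brevity, so the 'planes' are lines). The clip routine is standard convex-polygon splitting; `portal_fragments` and its `keep()` predicate are my own placeholder names, with `keep()` standing in for the real send-fragments-down-the-tree test. Culling eagerly like this assumes a fragment that fails the test now could never survive later.

```python
def split_polygon(poly, normal, d, eps=1e-9):
    """Split a convex polygon by the line dot(n, p) = d.
    Returns (front_piece, back_piece); either may be None."""
    front, back = [], []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        da = a[0] * normal[0] + a[1] * normal[1] - d
        db = b[0] * normal[0] + b[1] * normal[1] - d
        if da >= -eps:
            front.append(a)
        if da <= eps:
            back.append(a)
        if (da > eps and db < -eps) or (da < -eps and db > eps):
            t = da / (da - db)
            hit = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            front.append(hit)
            back.append(hit)
    return (front if len(front) >= 3 else None,
            back if len(back) >= 3 else None)

def portal_fragments(seed, other_planes, keep):
    """Process ONE seed rectangle completely: clip it by all the other
    splitter planes, discarding dead fragments as soon as they appear,
    before any other seed is even created."""
    frags = [seed]
    for (n, d) in other_planes:
        survivors = []
        for f in frags:
            for piece in split_polygon(f, n, d):
                if piece is not None and keep(piece):
                    survivors.append(piece)  # keep only live fragments
        frags = survivors
    return frags
```

The point of the structure is that peak memory is bounded by one seed's fragments, not by every seed's fragments at once.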

With regard to improving upon Alan's algorithm:

Alan's realtime culling scheme is based on being able to find out which Leaf Node of the BSP Tree the Camera is currently in, which requires us to walk the BSP Tree from the Root, performing a Point/Plane test at each Node, until we land in a Leaf. We draw the contents of that Leaf, then check whether any of the Portals owned by that Leaf are visible by testing them for intersection with the planes of the Camera View Frustum; if any Portals are visible, we recursively render the Leaves they lead to, again checking for Portal visibility, and so on.
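For reference, the leaf lookup that Alan's scheme depends on is just a short walk from the root. A Python sketch (the Node layout here is my own assumption, not Alan's actual structure):

```python
class Node:
    def __init__(self, normal=None, d=0.0, front=None, back=None,
                 leaf_id=None):
        self.normal, self.d = normal, d      # splitter plane: dot(n, p) = d
        self.front, self.back = front, back  # children (non-leaf only)
        self.leaf_id = leaf_id               # set only on leaf nodes

def find_leaf(node, p):
    """Walk from the root, taking the front or back child depending on
    which side of each node's splitter plane the point lies."""
    while node.leaf_id is None:
        side = sum(n * c for n, c in zip(node.normal, p)) - node.d
        node = node.front if side >= 0 else node.back
    return node.leaf_id
```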

So - we have a BSP Tree and its set of Leaves, each Leaf owns a set of Portal fragments which point to another Leaf or Leaves, and we have a dynamic Frustum... It seems to me that we don't actually need the entire BSP Tree at this stage; we are only using it to determine which Leaf we are currently in - ONCE - and then we only need the Leaves.

So, what if we get rid of the BSP Tree and only keep a list of its Leaves?

Our initial test to find the 'current leaf' containing the Camera becomes more expensive, yes - but once we've found it, we can use the portal connectivity information to track which leaf we are in.

We can write a new method of finding the 'current leaf' based on testing the Point against the planes of the faces and portal fragments in each Leaf - and better yet, since we generated Bounding Boxes for each node during BSP Tree creation, we can quickly determine which leaves the Camera MIGHT be in, versus those it CAN'T be in.
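A sketch of that two-stage lookup (Python; representing a convex leaf as half-spaces `dot(n, p) <= d` is my assumed convention):

```python
def candidate_leaves(leaves, p):
    """leaves: list of (leaf_id, (mins, maxs)) AABBs.
    Returns only the leaves whose box contains the camera point -
    the ones it MIGHT be in; all others CAN'T contain it."""
    return [lid for lid, (lo, hi) in leaves
            if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

def point_in_leaf(planes, p, eps=1e-6):
    """Exact test for one candidate: a convex leaf is the intersection
    of the half-spaces of its faces and portal fragments."""
    return all(sum(n[i] * p[i] for i in range(3)) <= d + eps
               for n, d in planes)
```

The box pass is cheap and throws away almost every leaf; the plane pass only runs on the few survivors.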

We don't need the Tree anymore, we don't need recursive stuff at all.

So we can say goodbye to the whole BSP Tree (except for the Leaf Nodes), and simply think in terms of a list of subspaces linked by portals (which in turn might be used to accelerate A* and other pathfinding algorithms).
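As a taste of the pathfinding angle: a plain BFS over the portal graph already gives a fewest-portals route between subspaces (Python sketch; A* over portal centres would refine it with real distances):

```python
from collections import deque

def leaf_path(portal_graph, start, goal):
    """portal_graph: dict mapping leaf_id -> iterable of connected
    leaf_ids. Returns the shortest chain of leaves, or None."""
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:   # walk the predecessor chain back
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in portal_graph.get(cur, ()):
            if nxt not in prev:
                prev[nxt] = cur
                q.append(nxt)
    return None
```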

Your thoughts?

I guess that rendering in terms of subspaces is also much better for current-generation videocards (and their over-abstracting drivers). With subspaces, we know what geometry each subspace has, and can sort it by shader and then by texture (this is done offline).

And each frame we needn't re-calculate which subspace the camera is in. Simply let the artist specify the initial position (he does that anyway), and then track the camera movement with lines in space. After the camera has moved, hit-test that line against the current subspace's portals (each portal can be made of one or many triangles on one plane). If the line goes through a portal, reposition the camera inside the subspace that portal points to.
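The hit-test being described is just a segment-vs-triangle intersection. Here's a Python sketch using the well-known Möller-Trumbore method, specialised to the camera's movement segment:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def segment_hits_triangle(p0, p1, tri, eps=1e-9):
    """Does the camera's movement line p0 -> p1 pass through this
    portal triangle? (Möller-Trumbore, clamped to the segment.)"""
    a, b, c = tri
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    d = [p1[i] - p0[i] for i in range(3)]
    pv = cross(d, e2)
    det = dot(e1, pv)
    if abs(det) < eps:          # movement parallel to the portal plane
        return False
    inv = 1.0 / det
    tv = [p0[i] - a[i] for i in range(3)]
    u = dot(tv, pv) * inv
    if u < 0 or u > 1:
        return False
    qv = cross(tv, e1)
    v = dot(d, qv) * inv
    if v < 0 or u + v > 1:
        return False
    t = dot(e2, qv) * inv
    return 0.0 <= t <= 1.0      # crossing happens within this move
```

A multi-triangle portal just runs this once per triangle and takes the first hit.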

I think BSP is no longer used in current-gen games as a means to reduce overdraw.

That's true - although there's no reason not to take advantage of that and disable the z-buffer while drawing the world... but z-sorting isn't where I'm heading - it's the combination of frustum and portal culling for visible surface determination via a PVS computed in realtime, and the simple discovery and definition of empty, connected subspaces, that interest me.

Being able to describe an arbitrary 3D world in terms of a network of linked nodes (not a tree) is exactly where I want to go.

This morning, I had an epiphany :)

I have completed my reworking of the BSP Tree Generator, and was beginning to add support code for discovery of portals.

Part of that involved adding code to find the bounding box of the set of faces given as input into each newly-constructed BSPTree Node.

Next was to add code to construct a large rectangle (two triangles) that both lay on a (non-leaf) Node's splitter plane and extended to its bounding box...

But then I stopped.

Something had occurred to me. It was about this concept of connectivity.

We're trying to discover connectivity between leaf nodes, right? That is, portals / doorways that lead between places in the world.

Why are we bothering to discover Portals (which end up as MANY triangles that describe arbitrary polygons that sit in the empty space that connects the subspaces in the world), when the obvious solution is so much simpler?

We have the bounding boxes of the leaf nodes.

We know that these bounding boxes MUST intersect one another because they are describing CONNECTED SPACES.

For each Leaf, we can easily make a list of which Other LeafBoxes it intersects, and thus describe which Boxes MAY BE VISIBLE.
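Building those lists is a straightforward pairwise AABB overlap pass (Python sketch; the leaf IDs and dict layout are mine, and the naive O(n^2) loop is fine here because it runs offline):

```python
def boxes_overlap(a, b):
    """AABBs as (mins, maxs); overlap on every axis means the two
    leaves are candidates for being connected."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i]
               for i in range(3))

def build_adjacency(leaf_boxes):
    """leaf_boxes: {leaf_id: (mins, maxs)}. Returns, for each leaf,
    the list of other leaves whose boxes intersect it - the ones
    that MAY BE VISIBLE from it."""
    adj = {lid: [] for lid in leaf_boxes}
    ids = list(leaf_boxes)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if boxes_overlap(leaf_boxes[a], leaf_boxes[b]):
                adj[a].append(b)
                adj[b].append(a)
    return adj
```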

Only if we leave the current Box will we bother to determine which one we are now inside.

THIS CAN WORK!!

So then I am thinking .. ok, so after processing, we just save the data for Leaf Nodes to our output file.

When the app starts, we 'know' which node the camera is in.

To Render, we begin by drawing the Faces that are in the current node.

Then we perform intersection tests between the View Frustum and boxes in the list of nodes potentially connected to the current node, and render those nodes recursively, thus checking any nodes connected to them (and not already rendered).

Only the nodes whose boxes intersect the frustum will be rendered.
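That traversal might look like this (Python sketch; I've written it iteratively with a visited set rather than literally recursively, and the inward-pointing frustum-plane convention is my assumption):

```python
def box_outside_plane(lo, hi, n, d):
    """True if the whole AABB lies behind the plane dot(n, p) >= d
    (frustum planes assumed to point inward)."""
    # p-vertex: the box corner furthest along the plane normal
    p = [hi[i] if n[i] >= 0 else lo[i] for i in range(3)]
    return sum(n[i] * p[i] for i in range(3)) < d

def visible_nodes(start, boxes, adjacency, frustum):
    """Flood out from the current node through the connectivity
    lists, keeping only nodes whose boxes touch the frustum."""
    visible, stack, seen = [], [start], {start}
    while stack:
        node = stack.pop()
        lo, hi = boxes[node]
        if any(box_outside_plane(lo, hi, n, d) for n, d in frustum):
            continue                      # culled: don't expand it
        visible.append(node)
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:           # never render a node twice
                seen.add(nxt)
                stack.append(nxt)
    return visible
```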

If the Camera leaves the bounding box of the current node, we can again use the connectivity list to determine a 'new' current node.

In fact, spheres could easily be used in place of boxes, which would be JUST as accurate as Boxes in terms of what we're doing, and in many cases even MORE accurate in terms of describing "a Volume of 3D Space containing an arbitrary set of triangles which form a convex set".
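The sphere version of the connectivity test is even cheaper than the three-axis box test: a single squared-distance comparison per pair, with no square roots needed (Python sketch):

```python
def spheres_overlap(c1, r1, c2, r2):
    """Two bounding spheres intersect iff the squared distance between
    centres is at most the squared sum of radii."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```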
