[patch] CommonFilters: added getTexCoordSemantic()

Bug #1374594 reported by Juha Jeronen
Affects: Panda3D · Status: Fix Released · Importance: Undecided · Assigned to: rdb · Milestone: 1.9.0

Bug Description

CommonFilters refactor vol. 1.

As discussed in http://www.panda3d.org/forums/viewtopic.php?f=9&t=16367#p95740

This very small patch adds an internal helper function to CommonFilters that fetches the next free TEXCOORD semantic. When called again with the same label, the same semantic is returned; this is used to match the TEXCOORD semantics between the generated vshader and fshader.

The result is that consecutive registers will be used regardless of which combination of filters is enabled.
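The helper described above could be sketched roughly as follows (illustrative only; the actual patch attached to this bug may differ in names and details):

```python
class TexCoordAllocator:
    """Hands out consecutive TEXCOORD semantics, returning the same
    semantic when the same label is requested again -- a sketch of the
    getTexCoordSemantic() idea.  All names here are illustrative."""

    def __init__(self):
        self.semantics = {}   # label -> "TEXCOORDn"
        self.next_index = 0

    def getTexCoordSemantic(self, label):
        if label not in self.semantics:
            self.semantics[label] = "TEXCOORD%d" % self.next_index
            self.next_index += 1
        return self.semantics[label]
```

Because both the vshader and fshader generators ask for semantics by the same label, they agree on the register, and the registers stay consecutive regardless of which filters are enabled.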

Revision history for this message
Juha Jeronen (juha-jeronen) wrote :
rdb (rdb) wrote :

I created an easier solution that is more sustainable in the long run. The filters now declare the texcoords they want to use in a list, after which the shader inputs are added programmatically.
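The list-based approach might look something like this sketch (illustrative names; the real CommonFilters.py code differs):

```python
def build_texcoord_inputs(filters):
    """Collect the texcoords each enabled filter wants, then emit
    matching shader declarations programmatically.  'filters' maps
    filter name -> list of texture names (illustrative structure)."""
    texcoords = []
    for needed in filters.values():
        for tex in needed:
            if tex not in texcoords:
                texcoords.append(tex)
    # The index into the list *is* the TEXCOORD register, so the
    # vshader and fshader agree and registers stay consecutive.
    return ["float2 l_texcoord_%s : TEXCOORD%d" % (tex, i)
            for i, tex in enumerate(texcoords)]
```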

Please grab the latest version from CVS:
http://panda3d.cvs.sourceforge.net/viewvc/panda3d/direct/src/filter/CommonFilters.py?view=markup

Changed in panda3d:
assignee: nobody → rdb (rdb)
milestone: none → 1.9.0
status: New → Fix Committed
Juha Jeronen (juha-jeronen) wrote :

Ah, now that you've posted it, it's obviously a better solution. Thanks.

I think there's still a problem, though: the computation of texcoords at lines 267-279 uses the names of the filters instead of the list of texcoords that the filters want. Similarly, the provided texpad inputs are linked to the filter names, not to the textures they refer to.

The pattern is pretty simple:

l_texcoordC=(vtx_position.xzxz * texpad_txcolor) + texpad_txcolor;

so if it is known that l_texcoordC corresponds to txcolor, these lines could be automatically generated.
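If the texcoord-to-texture mapping is known, generating these lines mechanically could be sketched like this (a hedged sketch; function and parameter names are illustrative):

```python
def make_texcoord_lines(coord_to_tex):
    """Generate the vertex-shader texcoord computations from a mapping
    of texcoord variable -> texture name, following the padding
    pattern shown above.  Names are illustrative."""
    return ["%s = (vtx_position.xzxz * texpad_%s) + texpad_%s;"
            % (coord, tex, tex)
            for coord, tex in coord_to_tex.items()]
```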

I could do this unless you want to?

rdb (rdb) wrote :

Good call. I've checked in some changes that make it handle the padding more automatically. It is also a bit smarter about when padding isn't enabled for the textures (i.e. when textures-power-2 is set to none), which should result in faster shaders.

Juha Jeronen (juha-jeronen) wrote :

Hmm, the new code looks very good.

This leaves just two more potential issues:

1) Enabling texpix_txaux by checking the presence of specifically the CartoonInk filter seems rather arbitrary.

CartoonInk used to be the only one that needed texpix, but among the new filters, Scanlines also needs to know the pixel size. Additionally, the upcoming new CartoonInk may also need texpix_txdepth, depending on its configuration options. Basically, any filter that wants to use whole-pixel offsets in its computations will need texpix for some texture; which one depends on the details of the filter algorithm.

Thus, I think this needs a general mechanism for specifying which filters require texpix, and for which textures - similar to the mechanisms you have already added for the texture sampler and texcoord inputs. The specification of exactly which texpix variables are needed by which filters probably belongs in the Filter interface as part of the refactoring.

Note that the various custom textures created for the intermediate stages might have a size different from txcolor; e.g. bloom is rendered into a half-resolution texture. Hence, for generality, texpix must be handled per-texture.
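A registration mechanism along the suggested lines might be sketched like this (purely illustrative API, including the shader-stage split that comes up below; nothing here reflects actual CommonFilters code):

```python
def collect_texpix_inputs(filters):
    """Each filter declares (stage, texture) pairs for which it needs
    texpix; the compositor then emits one uniform per pair, handling
    texpix per-texture so half-resolution stages like bloom work.
    All names are illustrative."""
    needed = {"vshader": set(), "fshader": set()}
    for f in filters:
        for stage, tex in f.get("texpix", []):
            needed[stage].add(tex)
    return {stage: ["uniform float4 texpix_%s" % tex
                    for tex in sorted(texs)]
            for stage, texs in needed.items()}
```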

2) Some filters may want to flip some of their input textures along x, along y, or through the view center point. (I missed this possibility in my comment yesterday.)

For example, the first stage of the lens flare filter flips the input color texture around the view center point. This is done in the vshader by modifying the generated texcoords. Operation-count-wise, this is much cheaper than recomputing the texture coordinate in the fshader for each pixel separately.
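In the generated vshader, such a flip amounts to negating the vertex-position term before the padding offset is applied; a sketch (assuming the padding convention from the line quoted earlier in this thread - the real CommonFilters math may differ):

```python
def make_texcoord_line(coord, tex, flip=False):
    """Emit one texcoord computation; with flip=True the texture is
    mirrored through the view center by negating vtx_position.
    A sketch with illustrative names, not actual CommonFilters code."""
    sign = "-" if flip else ""
    return "%s = (%svtx_position.xzxz * texpad_%s) + texpad_%s;" % (
        coord, sign, tex, tex)
```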

Currently there are no filters (even among the new ones) that would want to do this for an input of the "main" postprocessing shader, but in the future someone might want to add such a filter.

(In fact, could we easily implement a "mirror" filter by flipping the coordinates? This would be useful as a postprocessing step for rendering textures for camera-based mirrors in the scene.)

rdb (rdb) wrote :

1) Yes, but keep in mind that CartoonInk requires texpix_txaux in the *fragment shader*, whereas HalfPixelShift requires all of the texpix inputs in the *vertex shader*.

But yes, I agree this is only a temporary solution. If we improve the design of the filter system to be more modular, I imagine that each Filter instance could 'register' a list of inputs they need for the final compositing shader.

2) The first stage, you say? These issues are only relevant to the final compositing stage, since each individual stage can specify their own shader to do the flip. Let's try not to over-engineer this too much.

As for a mirror effect, this could be done by flipping the quad on the final card, and requires no shader code. I don't see how this applies to camera-based mirrors in the scene, though, since those have to be done by flipping the camera across a plane in 3-D space.

Juha Jeronen (juha-jeronen) wrote :

1) True. I'll see what I can do. (I'm currently writing a short proposal for the new design.)

2) True, and indeed the lens flare filter does it that way. This only becomes a problem if someone writes a single-stage filter (where all code goes to the compositing stage) that requires one of its texture inputs to be flipped. I think it's conceivable that could happen - but I have no idea how likely that is.

If you think it's over-engineering to account for that case, then I'm fine with leaving this part as it is. Given the current collection of shaders (including upcoming ones), that feature is not yet needed.

It didn't cross my mind to flip the quad - you're right about that.

My point about camera-based mirrors was that after the view has been rendered from the mirrored viewpoint, left and right need to be swapped somehow, as one step in the process, for the object in the original view to behave like a real mirror. But I see I wasn't thinking this through completely - a simple flip of the whole view texture hardly does what is needed. Thanks for the correction - I need to read up on the standard way to do this :)

rdb (rdb)
Changed in panda3d:
status: Fix Committed → Fix Released