Cheaper way to implement rendering-to-texture and post-processing pipeline?

Wang Rui
Guest





Posted: Thu Feb 14, 2013 3:30 pm    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Hi Robert, hi all,

I want to raise this topic while reviewing my effect compositing/view-dependent shadow code. As far as I know, most of us use osg::Camera for rendering to texture and thus for post-processing/deferred shading work: we attach a texture to the camera with attach(), set the render target implementation to FBO, and so on. This has worked fine so far in my client work.
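For reference, the conventional camera-based setup looks something like this (just a sketch, assuming tex is the target texture and subgraph is the scene to render):

    osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
    rttCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
    rttCamera->setRenderOrder( osg::Camera::PRE_RENDER );
    rttCamera->setViewport( 0, 0, 1024, 1024 );
    rttCamera->attach( osg::Camera::COLOR_BUFFER, tex.get() );  // render into tex
    rttCamera->addChild( subgraph );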


But recently I found another way to render a scene to an FBO-based texture/image:


First I create a node (including input textures, shaders, and the sub-scene or a screen-sized quad) and apply an FBO and a Viewport as its state attributes:


    // Create the render target texture.
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
    tex->setTextureSize( 1024, 1024 );
    tex->setInternalFormat( GL_RGBA );

    // Create an FBO with the texture as its color attachment.
    osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
    fbo->setAttachment( osg::Camera::COLOR_BUFFER, osg::FrameBufferAttachment(tex.get()) );

    // Apply the FBO and a matching viewport as state attributes of the node.
    node->getOrCreateStateSet()->setAttributeAndModes( fbo.get() );
    node->getOrCreateStateSet()->setAttributeAndModes( new osg::Viewport(0, 0, 1024, 1024) );



Then, if we need more deferred passes, we can add more nodes with screen-sized quads, setting the previous pass's output texture (tex above) as a texture attribute on each. The intermediate passes require fixed view and projection matrices, so we can give each node a cull callback like:


    // Inside the cull callback: force an identity model-view matrix and a
    // unit ortho projection so the screen-sized quad exactly fills the viewport.
    cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()), osg::Transform::ABSOLUTE_RF );
    cv->pushProjectionMatrix( new osg::RefMatrix(osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );

    each_child->accept( nv );  // traverse the pass's children with the fixed matrices

    cv->popProjectionMatrix();
    cv->popModelViewMatrix();



This works well in my initial tests and doesn't require a list of osg::Camera nodes. I think this would be a lightweight way to do post-processing work, as it won't create multiple RenderStages at the back end and reduces the risk of deeply nested cameras in the scene graph.


Do you think it would be useful to have such a class? The user inputs a sub-scene or any texture; the class runs multiple passes over it and outputs a result texture. The class wouldn't need internal cameras for the RTT work, and it could be placed anywhere in the scene graph as a deferred-pipeline implementer or a pure GPU-based image filter. A sketch follows.
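A minimal sketch of what I have in mind (PostProcessPass is a hypothetical name; it just packages the FBO/Viewport state attributes and the fixed-matrix cull shown above):

    #include <osg/Group>
    #include <osg/FrameBufferObject>
    #include <osg/Texture2D>
    #include <osg/Viewport>
    #include <osgUtil/CullVisitor>

    // Hypothetical pass node: renders its children into 'output' through an
    // FBO bound as a state attribute, with fixed matrices applied during cull.
    class PostProcessPass : public osg::Group
    {
    public:
        PostProcessPass( osg::Texture2D* output, int w, int h )
        {
            osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
            fbo->setAttachment( osg::Camera::COLOR_BUFFER,
                                osg::FrameBufferAttachment(output) );
            osg::StateSet* ss = getOrCreateStateSet();
            ss->setAttributeAndModes( fbo.get() );
            ss->setAttributeAndModes( new osg::Viewport(0, 0, w, h) );
        }

        virtual void traverse( osg::NodeVisitor& nv )
        {
            osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>( &nv );
            if ( cv )
            {
                // Fixed matrices so a unit screen quad fills the FBO viewport.
                cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()),
                                         osg::Transform::ABSOLUTE_RF );
                cv->pushProjectionMatrix(
                    new osg::RefMatrix(osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );
                osg::Group::traverse( nv );
                cv->popProjectionMatrix();
                cv->popModelViewMatrix();
            }
            else
                osg::Group::traverse( nv );
        }
    };

Chaining passes would then just mean adding one such node per pass, with the previous pass's output texture bound on the next node's StateSet.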


I'd like to rewrite my effect compositor implementation with this new idea if it's considered worthwhile; otherwise I'll drop it and get ready to submit both the deferred shading pipeline and the new VDSM code in the following week. :-)


Cheers,


Wang Rui

Paul Martz
Guest





Posted: Thu Feb 14, 2013 4:53 pm    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

This is how I've been doing post-rendering effects, too.

However, I have never done any performance benchmarks. My instinct tells me this method should have a faster cull time than using a Camera, but if post-rendering cull makes up only a small percentage of the total cull time, I imagine the performance benefit would be difficult to measure.


Have you done any performance comparisons against equivalent use of Camera nodes?



--
Paul Martz
Skew Matrix Software LLC

Aurelien
Appreciator


Joined: 03 Aug 2011
Posts: 297

Posted: Thu Feb 14, 2013 7:26 pm

Hi,

Thanks for sharing this. I'd never thought of this approach, and it's very interesting.

For me the appeal is not really performance, but simplicity.

I'll try to dig into this further to see if it can be useful for implementing what is discussed here: http://forum.openscenegraph.org/viewtopic.php?t=11577

(using renderbin to configure current FBO)

One use case could be one of the applications I'm working on:

- I have a scene with multiple objects
- I render the scene using an HDR shader: scene => FBO 1
- Post-process (tone mapping): FBO 1 => FBO 2
- Render GUI elements with a normal shader: FBO 2 + GUI => FBO 3

I would like to "move" an object from HDR to normal rendering and vice versa by simply specifying its RenderBin as "FBO 1" or "FBO 3" (see the sketch below).

I already have a shader system that automatically switches between HDR and normal rendering, but for now I have to move the object from one sub-graph to another, which breaks the graph logic for the user and also for events or intersectors.
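Something like this is what I imagine (purely hypothetical; bins named "FBO 1" and "FBO 3" don't exist, the idea being that a custom RenderBin implementation would bind the corresponding FBO):

    // Hypothetical: route an object to a pass by naming the bin that owns the FBO.
    object->getOrCreateStateSet()->setRenderBinDetails( 1, "FBO 1" );  // HDR path

    // Later, switch it to the normal path without touching the graph structure:
    object->getOrCreateStateSet()->setRenderBinDetails( 1, "FBO 3" );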

Thank you!

Cheers,
Aurelien
robertosfield
OSG Project Lead


Joined: 18 Mar 2009
Posts: 11758

Posted: Thu Feb 14, 2013 7:28 pm    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Hi Rui,

The cost of traversing an osg::Camera in cull should be very small, and one can avoid using a separate RenderStage by using the NESTED_RENDER render order. Using a different custom Node to do an RTT setup similar to what is done for osg::Camera will incur similar CPU and GPU costs, so unless there is a really sound reason for providing an alternative I'd rather just stick with osg::Camera: an alternative would just be more code to maintain and more code to teach people how to use, with the additional hurdle of having two ways to do the same thing. If need be, perhaps osg::Camera could be extended if it isn't able to handle all the currently required use cases.
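For example, a camera that only overrides the matrices and has no buffer attachments can use NESTED_RENDER, so its subgraph stays within the parent's RenderStage (a sketch; screenQuad stands in for the pass's fullscreen geometry):

    osg::ref_ptr<osg::Camera> nested = new osg::Camera;
    nested->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    nested->setRenderOrder( osg::Camera::NESTED_RENDER );  // no separate RenderStage
    nested->setViewMatrix( osg::Matrix::identity() );
    nested->setProjectionMatrixAsOrtho2D( 0.0, 1.0, 0.0, 1.0 );
    nested->addChild( screenQuad );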

As for scalability, there is no hard limit on how many osg::Camera nodes, nested or not, you can have in the scene graph; the only limits are the amount of memory available and your imagination, and both apply equally to any scheme you come up with for doing something similar to osg::Camera. While osg::Camera can do lots of tasks, in itself it isn't a large object; it's only the buffers/textures that you attach that are significant in size, and this applies to both approaches.

Right now I'm rather unconvinced there is a pressing need for an alternative.

Robert.

danoo
Appreciator


Joined: 16 Dec 2011
Posts: 146

Posted: Thu Feb 14, 2013 8:18 pm

All the post-processing solutions I have checked out so far share the same issues: they lack support for multiple views (for instance when using CompositeViewer), and they all need additional elements (cameras, quads, etc.) added to the scene graph.

My expectation of a post-processing framework is this:
- It doesn't impact the scene graph. Post-processing happens after the whole geometry is rendered, so it should be completely separate.
- It must be compatible with multiple views.

Why is there no post-processing framework that can be attached/inserted into a FinalDraw or PostDraw callback of the main camera? That is the place I expect post-processing to happen, as sketched below.
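Just to illustrate the attachment point I mean (a sketch only; the actual post-processing chain would live inside the callback):

    // Hypothetical hook: run the post-processing chain once the main
    // camera has finished drawing its subgraph.
    struct PostProcessCallback : public osg::Camera::DrawCallback
    {
        virtual void operator()( osg::RenderInfo& renderInfo ) const
        {
            // Bind FBOs, draw fullscreen quads, etc., using renderInfo.getState().
        }
    };

    // In set-up code:
    viewer.getCamera()->setFinalDrawCallback( new PostProcessCallback );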

Right now I'm using osgPPU, and I modified it to work with CompositeViewer and multiple views, but I'm still forced to have one post-processing camera (including its whole unit pipeline) per view stored in the scene graph.

Cheers,
Daniel
Aurelien
Appreciator


Joined: 03 Aug 2011
Posts: 297

Posted: Thu Feb 14, 2013 10:14 pm

Hi all,

I'm not sure the CPU cost is really the issue here, but it would be useful to have methods like these:

executeCamera(osg::Camera*, osg::State*)
executeCameraAsync(osg::Camera*, osg::State*)

=> they execute all the code needed to render the camera's subgraph (the Async variant returning without blocking)
=> inputs are controlled using the StateSet of the camera + subgraph
=> outputs are controlled by the camera's render target

With these we could do processing like this in a PostDraw/FinalDraw callback:

Code:
    // executeCamera() is the proposed (hypothetical) method from above.
    executeCamera( cameraBlur, renderInfo.getState() );

    if ( something )
    {
        executeCamera( cameraFX, renderInfo.getState() );
    }

    float x = computeSomething();

    cameraToneMapping->getOrCreateStateSet()->
        getOrCreateUniform( "x", osg::Uniform::FLOAT )->set( x );
    executeCamera( cameraToneMapping, renderInfo.getState() );


with:

cameraBlur: a small graph consisting of a render quad, a shader, and the main scene bound as an input texture

cameraFX: similar, with another FX

cameraToneMapping: similar, executing a tone mapping controlled via the uniform "x"

Here we can mix GLSL processing and CPU control code, which is very useful for advanced processing. It is also difficult to achieve using a standard graph without playing a lot with callbacks and render order.

Aurelien
Wang Rui
Guest





Posted: Fri Feb 15, 2013 12:47 am    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Hi all,

Thanks for the replies. It is always midnight for me when most community members are active, so I have to reply to you all in my morning. :-)


Paul, I haven't done any comparisons yet. There won't be many post-processing steps in a common application, and as Robert says, the cull-time cost of a camera and a normal node won't differ much, so I think the results may be difficult to measure.


Aurelien's earlier idea (using RenderBins directly) also interests me, but it would change the back end dramatically. I'm also focusing on implementing a complete deferred pipeline including HDR, SSAO, color grading and AA work, finally merging it with normally rendered elements like the HUD GUI. The automatic switch between deferred shading and the normal pipeline is done by changing the whole 'technique' instead of moving child nodes, as can be seen in the osgRecipes project I'm maintaining.



But I don't think it would be easy to implement an executeCameraAsync() method at present, as OSG is a lazy rendering system and one can hardly insert CPU-side computation between FBO cameras. Maybe it could be done using the pre- and post-draw callbacks of a specified camera.


I also agree with Daniel's second point, that the pipeline should be compatible with multiple views. With the pipeline as a node in the scene graph we can easily do this by sharing the same root node in different views (see the sketch below). As for the first point, because we also have nodes that should not be affected by the post-processing effects (like the GUI/HUD display), and developers may require multiple post effects in the same scene graph (e.g., drawing dynamic and static objects differently), I don't find it convincing to totally separate the post-processing framework and place it in draw callbacks or the viewer's graphics operations.
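Sharing the pipeline root between views is then straightforward (a sketch, assuming effectRoot is the shared post-processing root node):

    #include <osgViewer/CompositeViewer>

    osgViewer::CompositeViewer viewer;
    osg::ref_ptr<osgViewer::View> view1 = new osgViewer::View;
    osg::ref_ptr<osgViewer::View> view2 = new osgViewer::View;
    view1->setSceneData( effectRoot.get() );  // the same root node...
    view2->setSceneData( effectRoot.get() );  // ...shared by both views
    viewer.addView( view1.get() );
    viewer.addView( view2.get() );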


So, in conclusion, I agree with Robert that OSG itself doesn't need an additional RTT node at present, and I will use cameras to perform all passes; this approach has already proven compatible in my client work with most current OSG functionality, including the VDSM shadows, and with external libraries like SilverLining and osgEarth.


I will try to tidy up and submit my current code next week, along with a demo scene. Then I will modify the osgRecipes project to use the new idea and find out its pros and cons.


Thanks,


Wang Rui

Paul Martz
Guest





Posted: Fri Feb 15, 2013 4:05 am    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Now that I give this some more thought: the concept of post-processing is inherently non-spatial, so it really doesn't belong in a scene graph at all. Repeatedly "culling" entities that we know will always be rendered is redundant at best. Wouldn't it be better to have a list of dedicated RTT objects as described by Rui, and process them in a Camera post-draw callback?

Just thinking out loud...



--
Paul Martz
Skew Matrix Software LLC

Aurelien
Appreciator


Joined: 03 Aug 2011
Posts: 297

Posted: Fri Feb 15, 2013 8:35 am

Hi,

Quote:
the concept of post-processing is inherently non-spatial, so it really doesn't belong in a scene graph at all

=> This is why I think we should be able to execute a render pass on an arbitrary camera: the subgraph of this camera may not have a spatial organization, but rather a process-logic organization.

Quote:
Wouldn't it be better to have a list of dedicated RTT objects as described by Rui, and process them as a Camera post-draw callback

=> This is also my idea: have a dedicated executeCamera() method which takes a camera and a state as arguments; with that, we can call it from a final/post-draw callback.

Rather than a list of dedicated RTT objects, which will never cover all the different use cases, I think it's better to re-use the camera class with its subgraph, which already works well: by building different small sub-graphs we can implement different kinds of processing (with a render quad as input, or a point cloud, or anything else).

Aurelien
Wang Rui
Guest





Posted: Fri Feb 15, 2013 8:54 am    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Hi,

In my present implementation in the osgRecipes project, I create a list of pre-render cameras internally as post-processing passes, instead of placing them explicitly in the scene graph. So at the user level one may simply write:


    EffectCompositor* effect = new EffectCompositor;
    effect->loadFromEffectFile( "ssao.xml" );
    effect->addChild( subgraph );


And during the cull traversal, cameras (and screen quads) are added directly to the cull visitor:


    cv->push*();
    camera->accept(*cv);
    ...
    cv->pop*();


This simplifies the class interface but doesn't change the render stages/render bins we currently use. But at least the post-processing passes are not traversable by a scene node visitor, so they are not part of the scene graph and will not affect intersection tests and other updating work, as Paul and Aurelien point out.


But we may not be able to directly migrate these to a draw callback, because the CullVisitor doesn't really render anything; it builds the render graph for SceneView. A possible idea in my mind is to execute the FrameBufferObject::apply() method along with the quad geometry's drawImplementation() manually in the draw callback, which means having another complete deferred draw process beside the main RenderStage/RenderBin back end, as sketched below. I don't know whether it would be a good choice, because the implementation may end up looking like an OpenGL-style one rather than an OSG composition.
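A rough sketch of that idea (assumes a prebuilt FBO and fullscreen-quad geometry; error handling omitted):

    // Hypothetical: execute one pass by hand in a final-draw callback,
    // outside the RenderStage/RenderBin back end.
    struct ManualPassCallback : public osg::Camera::DrawCallback
    {
        osg::ref_ptr<osg::FrameBufferObject> _fbo;   // prebuilt FBO of the pass
        osg::ref_ptr<osg::Geometry>          _quad;  // prebuilt fullscreen quad

        virtual void operator()( osg::RenderInfo& renderInfo ) const
        {
            osg::State& state = *renderInfo.getState();
            _fbo->apply( state );                     // bind the FBO directly
            _quad->drawImplementation( renderInfo );  // draw the screen quad
        }
    };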


Thanks,


Wang Rui







robertosfield
OSG Project Lead


Joined: 18 Mar 2009
Posts: 11758

Posted: Fri Feb 15, 2013 12:03 pm    Post subject: Cheaper way to implement rendering-to-texture and post-processing pipeline?

Hi Paul,


Post-processing techniques introduce some interesting conceptual and implementation issues. As you point out, conceptually a post process isn't directly part of the scene; rather, it's more closely associated with how you view the scene. This conceptual aspect puts it either up in the viewer, outside the scene graph, or between the viewer and the scene as an intermediate layer.

Implementation-wise, the intermediate layer could be done in the viewer itself, but the viewer tends to have a static list of cameras, and the post-processing effects need to be done per viewer camera, so placing the cameras directly in the viewer might pose problems if the cameras at the viewer or intermediate level are at all dynamic. This issue would suggest nesting the intermediate layer below a viewer camera.

Performance-wise, viewer cameras can be threaded, both when handling multiple contexts and when handling multiple cameras on a single context. The latter might be of interest here: ideally you'd want to interleave the cull and draw dispatch of multiple cameras on a single context such that cull0 runs and completes, then cull1 and draw0 run in parallel, then cull2 and draw1 run in parallel, and so on. For best performance we'd want to take advantage of this at the intermediate level as well. So here we have a motivation for putting the post-processing cameras into the viewer.

Or... have a scheme where the viewer-level Cameras have their Renderer collect the nested Cameras from the intermediate level and then thread on these. This might achieve the best of both worlds. A twist on this would be to have the CullVisitor spot places where it can thread cull and draw dispatch as it hits Cameras in the scene graph. The latter approach would have the advantage of working with existing scene graphs and NodeKits like osgShadow.

On the topic of re-using RenderBins when doing multi-pass: this is partially possible right now, but you really have to know how the rendering back end works and the constraints you have to work within to prevent everything getting out of sync. In most cases it's simply not possible to reuse RenderBins: even if the same objects make it into the RenderBin, their state will mostly be different, and the only way you know what the state is is by collecting it in the cull traversal, so you still have to do the cull traversal and build a unique StateGraph, and with it unique RenderLeafs, which in turn require a unique RenderBin. The cases where sharing RenderBins is possible really are very limited and have to be assessed case by case.

If we do want to explore the possibility of greater re-use of cull results, then I think we'd best look at extending the CullVisitor and the rendering back end in a way that enables new ways of managing things. One could perhaps provide convenience methods that offer set-up functionality similar to osg::Camera's and make this type of functionality easier to use; this would make it lighter weight to avoid using osg::Camera, but it is added complexity that we would have to look at very carefully to make sure it is properly justified. It might be that a better solution is to enable easier management of the osg::Cameras that are used to implement these techniques, so that the user front end doesn't need to worry about how the scene graph is implementing a post-processing effect; it just configures the interface it needs, and the back end goes away and does what it needs.

Robert.


wh_xiexing
Guest





Posted: Sat Feb 16, 2013 6:53 am    Post subject: problem of rendering overlapped models

Hi friends,

I have two models to render, parts of which overlap, so the result looks somewhat weird. How can I resolve this problem?

Do I need to split the models and align them, or set different render details for the two models?


Shawl

Preet
Guest





Posted: Sat Feb 16, 2013 12:18 pm    Post subject: problem of rendering overlapped models

When you say "weird", what do you mean? If you have polygons that are positioned/aligned very closely to one another, you might be seeing z-fighting (http://www.zeuscmd.com/tutorials/opengl/15-PolygonOffset.php). In that case you can tell OpenSceneGraph to use polygon offset on one of the models:

    osg::ref_ptr<osg::PolygonOffset> polyOffset = new osg::PolygonOffset;
    polyOffset->setFactor( 1.0f );  // scales each polygon's maximum depth slope
    polyOffset->setUnits( 1.0f );   // adds a constant depth offset
    osg::StateSet* ss = someNode->getOrCreateStateSet();
    ss->setAttributeAndModes( polyOffset.get() );



Chris Hanson
Guest





Posted: Sat Feb 16, 2013 4:05 pm    Post subject: problem of rendering overlapped models

Are the models transparent at all, or opaque?



--
Chris 'Xenon' Hanson, omo sanza lettere. http://www.alphapixel.com/
Training • Consulting • Contracting
3D • Scene Graphs (Open Scene Graph/OSG) • OpenGL 2 • OpenGL 3 • OpenGL 4 • GLSL • OpenGL ES 1 • OpenGL ES 2 • OpenCL
Digital Imaging • GIS • GPS • osgEarth • Terrain • Telemetry • Cryptography • Digital Audio • LIDAR • Kinect • Embedded • Mobile • iPhone/iPad/iOS • Android
@alphapixel facebook.com/alphapixel (775) 623-PIXL [7495]

romulogcerqueira
User


Joined: 11 Jun 2015
Posts: 55
Location: Brazil

Posted: Tue Apr 24, 2018 4:21 am

Folks,

Is there any example of implementing an FBO without cameras? Where can I find one?

...

Thank you!

Cheers,
Rômulo