I'm working on implementing transform feedback support on top of the WebGLRenderer-to-WebGL2 proposal.
I would like to get some feedback from the community about the following two approaches:
- We'll have something like `THREE.TransformFeedback`, and we will call it with a `BufferGeometry` and a `Program`:
```js
var geometry = new THREE.BufferGeometry();
var program = ...; // a program with out varyings
var tf = new THREE.TransformFeedback( geometry, program );

...

renderLoop() {
	tf.tick();
	scene.render();
}

THREE.TransformFeedback.prototype.tick = function () {
	bindBuffersAndAttribs();
	gl.enable( gl.RASTERIZER_DISCARD );
	bindTransformFeedback( this.transformFeedback );
	drawCall( currentGeometry, transformFeedbackProgram );
	gl.disable( gl.RASTERIZER_DISCARD );
	unbindBuffersAndAttribs();
}
```

The constructor will create a copy of the `BufferGeometry`, which is then used on `tick` to ping-pong between the two geometries.
As you can see, we enable `RASTERIZER_DISCARD` because we don't want to draw anything; we just want to run the computation in the program and write the values back to a buffer that will later be used to render the geometry.
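For reference, the raw WebGL2 calls that `tick()` would wrap look roughly like this. This is a minimal sketch, not an actual three.js API: the attribute setup is simplified and the names (`srcBuffer`, `dstBuffer`, `count`) are illustrative.

```js
// Assumes `gl` is a WebGL2RenderingContext and `program` was linked after
// calling gl.transformFeedbackVaryings( program, [ 'outPosition' ], gl.SEPARATE_ATTRIBS ).
var tf = gl.createTransformFeedback();

function tick( srcBuffer, dstBuffer, count ) {

	gl.useProgram( program );

	// Read the current attribute values from the source buffer.
	gl.bindBuffer( gl.ARRAY_BUFFER, srcBuffer );
	gl.enableVertexAttribArray( 0 );
	gl.vertexAttribPointer( 0, 3, gl.FLOAT, false, 0, 0 );

	// Capture the 'outPosition' varying into the destination buffer.
	gl.bindTransformFeedback( gl.TRANSFORM_FEEDBACK, tf );
	gl.bindBufferBase( gl.TRANSFORM_FEEDBACK_BUFFER, 0, dstBuffer );

	// Skip rasterization entirely; we only want the vertex shader outputs.
	gl.enable( gl.RASTERIZER_DISCARD );
	gl.beginTransformFeedback( gl.POINTS );
	gl.drawArrays( gl.POINTS, 0, count );
	gl.endTransformFeedback();
	gl.disable( gl.RASTERIZER_DISCARD );

	gl.bindBufferBase( gl.TRANSFORM_FEEDBACK_BUFFER, 0, null );
	gl.bindTransformFeedback( gl.TRANSFORM_FEEDBACK, null );

	// Ping-pong: next frame reads from dstBuffer and writes into srcBuffer.

}
```

Swapping the two buffers each frame is the ping-pong that the `BufferGeometry` copy in the constructor enables.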
An example of this approach is the WebGL2Particles demo by @toji.
The key point here is that we first perform the computation using a specific shader, and then use the computed attributes to render the geometry with a second program (https://github.com/toji/webgl2-particles-2/blob/gh-pages/index.html#L55-L74) that is unrelated to the transform feedback.
- The second approach would be to integrate it into the pipeline, so we could have a material definition like:
```js
var materialWithTF = new THREE.ShaderMaterial( {
	uniforms: { ... },
	vertexShader: document.getElementById( 'vs' ).textContent,
	fragmentShader: document.getElementById( 'fs' ).textContent,
	transformFeedbackVaryings: [ 'outPosition', 'outVelocity' ]
} );
```

Or we could use injection points for standard materials (sorry, I don't remember the current state of this @mrdoob @pailhead, but you get the point):
```js
var materialWithTF = new THREE.StandardMaterial( {
	transformFeedbackVaryings: [ 'outPosition', 'outVelocity' ],
	preVertex: 'out vec3 outPosition; out vec3 outVelocity;',
	vertexMain: 'outPosition = position + velocity * delta; outVelocity *= acceleration;'
} );
```

So the renderer and the related modules would have to take the transform feedback attributes into account: generate the buffers, do the ping-pong (this could be configured on the material too), and bind the transform feedback and the attributes.
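If we went this way, the one WebGL2 call the `transformFeedbackVaryings` option maps to has to happen before the program is linked. A minimal sketch of what the renderer could do internally (the function and variable names here are made up for illustration):

```js
function linkProgramForMaterial( gl, material, vertexShader, fragmentShader ) {

	var program = gl.createProgram();
	gl.attachShader( program, vertexShader );
	gl.attachShader( program, fragmentShader );

	// Tell WebGL which out varyings to capture; this must be called
	// *before* gl.linkProgram() or the varyings won't be recorded.
	if ( material.transformFeedbackVaryings ) {

		gl.transformFeedbackVaryings(
			program,
			material.transformFeedbackVaryings, // e.g. [ 'outPosition', 'outVelocity' ]
			gl.SEPARATE_ATTRIBS
		);

	}

	gl.linkProgram( program );
	return program;

}
```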
With this second approach we don't separate the computation from the actual rendering, as we do both in a single pass.
An example of this implementation can be found in the WebGL2Samples repo, where a single draw call computes and renders at the same time using just one program.
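Stripped down, the single-pass variant is the same sequence as the `tick()` sketch above but without `RASTERIZER_DISCARD`, so the one draw call both rasterizes and captures (again just a sketch; `renderAndCaptureProgram`, `nextFrameBuffer`, and `vertexCount` are illustrative names):

```js
gl.useProgram( renderAndCaptureProgram );
gl.bindTransformFeedback( gl.TRANSFORM_FEEDBACK, tf );
gl.bindBufferBase( gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextFrameBuffer );

// No RASTERIZER_DISCARD here: fragments are shaded and drawn while the
// out varyings are captured into nextFrameBuffer in the same call.
gl.beginTransformFeedback( gl.TRIANGLES );
gl.drawArrays( gl.TRIANGLES, 0, vertexCount );
gl.endTransformFeedback();

gl.bindTransformFeedback( gl.TRANSFORM_FEEDBACK, null );
```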
- I'm not sure if there's a big overhead difference between options (1) and (2). Although, until we have multiview support in three.js, (1) will do just one call to compute the new attributes, while with (2) the computation will run twice, once per eye, like the usual draw calls.
- I'd like to know if people would like to have option (1) without rendering anything afterwards, just to do some GPGPU, or if it's a good idea to have both for these two specific use cases.