I have recently become interested in the ray tracing API provided by the Metal framework. I understand that you can attach a vertex buffer to a geometry descriptor, which Metal later uses to build the acceleration structure (on an MTLPrimitiveAccelerationStructureDescriptor instance, for example).
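For reference, the attachment I mean is along these lines (a minimal sketch; `tessellatedVertexBuffer` and `vertexCount` are placeholder names of my own):

```swift
import Metal

// One triangle geometry backed by a plain vertex buffer
// (hypothetical buffer and count; SIMD3<Float> positions, 16-byte stride).
let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
geometry.vertexBuffer = tessellatedVertexBuffer
geometry.vertexBufferOffset = 0
geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
geometry.triangleCount = vertexCount / 3

// Metal builds the acceleration structure from this descriptor later.
let accelDescriptor = MTLPrimitiveAccelerationStructureDescriptor()
accelDescriptor.geometryDescriptors = [geometry]
```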
This made me wonder whether it's possible to write the output of the tessellator into a separate vertex buffer from the post-tessellation vertex function and pass that along to the raytracer. I thought that perhaps you could get more detailed geometry and still render without rasterization. For example, I might have the following simple post-tessellation vertex function:
```metal
#include <metal_stdlib>
using namespace metal;

// Control Point struct
struct ControlPoint {
    float4 position [[attribute(0)]];
};

// Patch struct
struct PatchIn {
    patch_control_point<ControlPoint> control_points;
};

// Vertex-to-Fragment struct
struct FunctionOutIn {
    float4 position [[ position ]];
    half4 color [[ flat ]];
};

[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
                                                  float3 patch_coord [[ position_in_patch ]])
{
    // Barycentric coordinates
    float u = patch_coord.x;
    float v = patch_coord.y;
    float w = patch_coord.z;

    // Convert to cartesian coordinates
    float x = u * patchIn.control_points[0].position.x +
              v * patchIn.control_points[1].position.x +
              w * patchIn.control_points[2].position.x;
    float y = u * patchIn.control_points[0].position.y +
              v * patchIn.control_points[1].position.y +
              w * patchIn.control_points[2].position.y;

    // Output
    FunctionOutIn vertexOut;
    vertexOut.position = float4(x, y, 0.0, 1.0);
    vertexOut.color = half4(u, v, w, 1.0h);
    return vertexOut;
}
```
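For context, I drive this stage from the host roughly as follows (a sketch with my own placeholder names for the library, encoder, buffers, and patch count; error handling omitted):

```swift
import Metal

// The [[stage_in]] control points require a vertex descriptor whose
// layout steps per patch control point.
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float4          // ControlPoint.position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD4<Float>>.stride
vertexDescriptor.layouts[0].stepFunction = .perPatchControlPoint

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexDescriptor = vertexDescriptor
pipelineDescriptor.vertexFunction = library.makeFunction(name: "tessellation_vertex_triangle")
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.maxTessellationFactor = 16
pipelineDescriptor.tessellationPartitionMode = .fractionalEven
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
let pipelineState = try! device.makeRenderPipelineState(descriptor: pipelineDescriptor)

// One triangle patch is 3 control points; the per-patch tessellation
// factors live in their own buffer.
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(controlPointBuffer, offset: 0, index: 0)
renderEncoder.setTessellationFactorBuffer(tessellationFactorBuffer, offset: 0, instanceStride: 0)
renderEncoder.drawPatches(numberOfPatchControlPoints: 3,
                          patchStart: 0,
                          patchCount: patchCount,
                          patchIndexBuffer: nil,
                          patchIndexBufferOffset: 0,
                          instanceCount: 1,
                          baseInstance: 0)
```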
However, the following doesn't compile:
```metal
// Triangle post-tessellation vertex function
[[patch(triangle, 3)]]
vertex void tessellation_vertex_triangle(device void *outputBuffer [[ buffer(0) ]],
                                         PatchIn patchIn [[stage_in]],
                                         float3 patch_coord [[ position_in_patch ]])
{
    // Barycentric coordinates
    float u = patch_coord.x;
    float v = patch_coord.y;
    float w = patch_coord.z;

    // Convert to cartesian coordinates
    float x = u * patchIn.control_points[0].position.x +
              v * patchIn.control_points[1].position.x +
              w * patchIn.control_points[2].position.x;
    float y = u * patchIn.control_points[0].position.y +
              v * patchIn.control_points[1].position.y +
              w * patchIn.control_points[2].position.y;

    // Output
    FunctionOutIn vertexOut;
    vertexOut.position = float4(x, y, 0.0, 1.0);
    vertexOut.color = half4(u, v, w, 1.0h);
}
```
where outputBuffer would be some struct* (not a void*). I also noticed that the function doesn't compile when I don't use the control-point data in the output, like so:
```metal
[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
                                                  float3 patch_coord [[ position_in_patch ]])
{
    // Barycentric coordinates
    float u = patch_coord.x;
    float v = patch_coord.y;
    float w = patch_coord.z;

    // Convert to cartesian coordinates
    float x = u * patchIn.control_points[0].position.x +
              v * patchIn.control_points[1].position.x +
              w * patchIn.control_points[2].position.x;
    float y = u * patchIn.control_points[0].position.y +
              v * patchIn.control_points[1].position.y +
              w * patchIn.control_points[2].position.y;

    // Output
    FunctionOutIn vertexOut;
    // Does not use x or y (and therefore the patch_control_point<T>'s values
    // are not used as output into the rasterizer)
    vertexOut.position = float4(1.0, 1.0, 0.0, 1.0);
    vertexOut.color = half4(1.0h, 1.0h, 1.0h, 1.0h);
    return vertexOut;
}
```
I looked at the publicly exposed patch_control_point<T> template but didn't see anything that enforces this. What is going on here?
In particular, how would I go about increasing the quality of the geometry fed into the raytracer? Would I simply have to use more complex assets? Tessellation has its place in the rasterization pipeline, but can it be used elsewhere? Of course, storing the tessellated patches would leave a much larger memory footprint.
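To put a rough number on that last point, here is a back-of-envelope sketch (my own hypothetical counts; it assumes uniform integer partitioning, where a triangle patch with all edge and inside factors equal to f expands to roughly f² triangles):

```swift
// Rough footprint of storing fully tessellated patches as raw triangles
// (positions only, no normals or UVs).
let patchCount = 10_000
let tessellationFactor = 16
let trianglesPerPatch = tessellationFactor * tessellationFactor    // ~256
let vertexStride = MemoryLayout<SIMD3<Float>>.stride               // 16 bytes

let tessellatedBytes = patchCount * trianglesPerPatch * 3 * vertexStride
let controlPointBytes = patchCount * 3 * vertexStride

print(tessellatedBytes)    // ~123 MB of tessellated vertices...
print(controlPointBytes)   // ...versus ~0.5 MB of raw control points
```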