initial commit-moved from vulkan_guide

2025-10-10 22:53:54 +09:00
commit 8853429937
2484 changed files with 973414 additions and 0 deletions

5
.gitignore vendored Normal file

@@ -0,0 +1,5 @@
/build
/bin
/assets
/.idea
*.spv

45
CMakeLists.txt Normal file

@@ -0,0 +1,45 @@
cmake_minimum_required (VERSION 3.8)
project ("vulkan_engine")
if (WIN32)
set(VULKAN_SDK "$ENV{VULKAN_SDK}")
set(Vulkan_INCLUDE_DIR "C:/VulkanSDK/1.3.296.0/Include")
set(Vulkan_LIBRARY "C:/VulkanSDK/1.3.296.0/Lib/vulkan-1.lib")
endif()
find_package(Vulkan REQUIRED)
add_subdirectory(third_party)
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY "${PROJECT_SOURCE_DIR}/bin")
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY "${PROJECT_SOURCE_DIR}/bin")
add_subdirectory(src)
find_program(GLSL_VALIDATOR glslangValidator HINTS /usr/bin /usr/local/bin $ENV{VULKAN_SDK}/Bin/ $ENV{VULKAN_SDK}/Bin32/)
file(GLOB_RECURSE GLSL_SOURCE_FILES
"${PROJECT_SOURCE_DIR}/shaders/*.frag"
"${PROJECT_SOURCE_DIR}/shaders/*.vert"
"${PROJECT_SOURCE_DIR}/shaders/*.comp"
)
foreach(GLSL ${GLSL_SOURCE_FILES})
message(STATUS "BUILDING SHADER")
get_filename_component(FILE_NAME ${GLSL} NAME)
set(SPIRV "${PROJECT_SOURCE_DIR}/shaders/${FILE_NAME}.spv")
message(STATUS ${GLSL})
add_custom_command(
OUTPUT ${SPIRV}
COMMAND ${GLSL_VALIDATOR} -V ${GLSL} -o ${SPIRV}
DEPENDS ${GLSL})
list(APPEND SPIRV_BINARY_FILES ${SPIRV})
endforeach(GLSL)
add_custom_target(Shaders # target name assumed; add_custom_target requires a name
DEPENDS ${SPIRV_BINARY_FILES}
)

3
compile_shaders.ps1 Normal file

@@ -0,0 +1,3 @@
Get-ChildItem -Path "shaders" -Include *.frag,*.vert,*.comp,*.geom,*.tesc,*.tese,*.mesh,*.task,*.rgen,*.rint,*.rahit,*.rchit,*.rmiss,*.rcall -Recurse | ForEach-Object {
glslc $_.FullName -o "$($_.FullName).spv"
}

73
docs/Compute.md Normal file

@@ -0,0 +1,73 @@
## Compute System: Pipelines, Instances, and Dispatch
Standalone compute subsystem with a small, explicit API. Used by passes (e.g., Background) and tools. It lives under `src/compute` and is surfaced via `EngineContext::compute` and convenience wrappers on `PipelineManager`.
### Concepts
- Pipelines: Named compute pipelines created from a SPIR-V module and a simple descriptor layout spec.
- Instances: Persistently bound descriptor sets keyed by instance name; useful for effects that rebind images/buffers across frames without recreating pipelines.
- Dispatch: Issue work with group counts, optional push constants, and ad hoc memory barriers.
### Key Types
- `ComputePipelineCreateInfo` — shader path, descriptor types, push constant size/stages, optional specialization (src/compute/vk_compute.h).
- `ComputeDispatchInfo` — `groupCount{X,Y,Z}`, `bindings`, `pushConstants`, and `*_barriers` arrays for additional sync.
- `ComputeBinding` — helpers for `uniformBuffer`, `storageBuffer`, `sampledImage`, `storeImage`.
### API Surface
- Register/Destroy
- `bool ComputeManager::registerPipeline(name, ComputePipelineCreateInfo)`
- `void ComputeManager::unregisterPipeline(name)`
- Query: `bool ComputeManager::hasPipeline(name)`
- Dispatch
- `void ComputeManager::dispatch(cmd, name, ComputeDispatchInfo)`
- `void ComputeManager::dispatchImmediate(name, ComputeDispatchInfo)` — records on a transient command buffer and submits.
- Helpers: `createDispatch2D(w,h[,lsX,lsY])`, `createDispatch3D(w,h,d[,lsX,lsY,lsZ])`.
- Instances
- `bool ComputeManager::createInstance(instanceName, pipelineName)` / `destroyInstance(instanceName)`
- `setInstanceStorageImage`, `setInstanceSampledImage`, `setInstanceBuffer`
- `AllocatedImage createAndBindStorageImage(...)`, `AllocatedBuffer createAndBindStorageBuffer(...)`
- `void dispatchInstance(cmd, instanceName, info)`
### Quick Start — One-Shot Dispatch
```c++
ComputePipelineCreateInfo ci{};
ci.shaderPath = context->getAssets()->shaderPath("blur.comp.spv");
ci.descriptorTypes = { VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE };
ci.pushConstantSize = sizeof(ComputePushConstants);
context->compute->registerPipeline("blur", ci);
ComputeDispatchInfo di = ComputeManager::createDispatch2D(draw.w, draw.h);
di.bindings.push_back(ComputeBinding::storeImage(0, outImageView));
di.bindings.push_back(ComputeBinding::sampledImage(1, inImageView, context->getSamplers()->defaultLinear()));
ComputePushConstants pc{}; /* fill */
di.pushConstants = &pc; di.pushConstantSize = sizeof(pc);
context->compute->dispatch(cmd, "blur", di);
```
### Quick Start — Persistent Instance
```c++
context->compute->createInstance("background.sky", "sky");
context->compute->setInstanceStorageImage("background.sky", 0, ctx->getSwapchain()->drawImage().imageView);
ComputeDispatchInfo di = ComputeManager::createDispatch2D(ctx->getDrawExtent().width,
ctx->getDrawExtent().height);
di.pushConstants = &effect.data; di.pushConstantSize = sizeof(ComputePushConstants);
context->compute->dispatchInstance(cmd, "background.sky", di);
```
### Integration With Render Graph
- Compute passes declare `write(image, RGImageUsage::ComputeWrite)` in their build callback; the graph inserts layout transitions to `GENERAL` and required barriers (sketched below).
- Background pass example: `src/render/vk_renderpass_background.cpp`.
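A minimal sketch of that wiring, assuming a hypothetical `SkyPass` with a persistent instance named `"background.sky"`, pass-owned push constants `_pc`, and that `RGPassType` has a `Compute` variant (only `Graphics` is shown elsewhere in these docs):
```c++
void SkyPass::register_graph(RenderGraph* graph, RGImageHandle draw) {
    graph->add_pass(
        "Sky",
        RGPassType::Compute, // assumed variant for compute-only passes
        [draw](RGPassBuilder& b, EngineContext*) {
            b.write(draw, RGImageUsage::ComputeWrite); // graph transitions the image to GENERAL
        },
        [this](VkCommandBuffer cmd, const RGPassResources&, EngineContext* ctx) {
            ComputeDispatchInfo di = ComputeManager::createDispatch2D(
                ctx->getDrawExtent().width, ctx->getDrawExtent().height);
            di.pushConstants = &_pc;          // hypothetical pass-owned push constants
            di.pushConstantSize = sizeof(_pc);
            ctx->compute->dispatchInstance(cmd, "background.sky", di);
        });
}
```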
### Sync Notes
- ComputeManager inserts minimal barriers needed for common cases; prefer using the Render Graph for cross-pass synchronization.
- For advanced cases, add `imageBarriers`/`bufferBarriers` to `ComputeDispatchInfo`.

77
docs/Descriptors.md Normal file

@@ -0,0 +1,77 @@
## Descriptors: Builders, Allocators, and Layouts
Utilities to define descriptor layouts, write descriptor sets, and efficiently allocate them per-frame or globally.
### Overview
- Layouts: `DescriptorLayoutBuilder` assembles `VkDescriptorSetLayout` with staged bindings.
- Writing: `DescriptorWriter` collects buffer/image writes and updates a set in one call.
- Pools: `DescriptorAllocatorGrowable` manages a growable pool-of-pools for resilient allocations; `FrameResources` keeps one per overlapping frame.
- Common layouts: `DescriptorManager` pre-creates reusable layouts such as `gpuSceneDataLayout()` and `singleImageLayout()`.
### Quick Start — Transient Per-Frame Set
```c++
// 1) Create/update a small uniform buffer for the frame
AllocatedBuffer ubuf = context->getResources()->create_buffer(
sizeof(GPUSceneData), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, VMA_MEMORY_USAGE_CPU_TO_GPU);
context->currentFrame->_deletionQueue.push_function([=, &ctx=*context]{ ctx.getResources()->destroy_buffer(ubuf); });
VmaAllocationInfo ai{}; vmaGetAllocationInfo(context->getDevice()->allocator(), ubuf.allocation, &ai);
*static_cast<GPUSceneData*>(ai.pMappedData) = context->getSceneData();
// 2) Allocate a set from the frame allocator using a common layout
VkDescriptorSet set = context->currentFrame->_frameDescriptors.allocate(
context->getDevice()->device(), context->getDescriptorLayouts()->gpuSceneDataLayout());
// 3) Write the buffer binding
DescriptorWriter writer;
writer.write_buffer(0, ubuf.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(context->getDevice()->device(), set);
```
### Defining a Custom Layout
```c++
DescriptorLayoutBuilder lb;
lb.add_binding(0, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
lb.add_binding(1, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER);
VkDescriptorSetLayout myLayout = lb.build(device, VK_SHADER_STAGE_FRAGMENT_BIT | VK_SHADER_STAGE_COMPUTE_BIT);
// ... remember to vkDestroyDescriptorSetLayout(device, myLayout, nullptr) in cleanup
```
### Global Growable Allocator
For long-lived sets (e.g., materials, compute instances), use `EngineContext::descriptors` which wraps a growable allocator shared across modules.
```c++
VkDescriptorSet persistent = context->getDescriptors()->allocate(device, myLayout);
DescriptorWriter writer;
writer.write_image(0, albedoView, sampler, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.update_set(device, persistent);
// Freeing: call `context->getDescriptors()->destroy_pools(device)` at engine shutdown; sets die with their pools
```
### Compute Integration
`ComputeManager` uses an internal `DescriptorAllocatorGrowable` and offers higher-level bindings:
- Build pipeline: specify `descriptorTypes` and `pushConstantSize` in `ComputePipelineCreateInfo`.
- Ad-hoc dispatch: fill `ComputeDispatchInfo.bindings` with `ComputeBinding::{uniformBuffer, storageBuffer, sampledImage, storeImage}`.
- Persistent instances: create via `createInstance()` and set bindings with `setInstance*()`; descriptor sets are auto-updated and reused across dispatches.
See `PipelineManager.md` for a full compute quick start using the unified API.
### API Summary
- `DescriptorLayoutBuilder`: `add_binding(binding, type)`, `build(device, stages[, pNext, flags])`, `clear()`.
- `DescriptorWriter`: `write_buffer(binding, buffer, size, offset, type)`, `write_image(binding, view, sampler, layout, type)`, `update_set(device, set)`, `clear()`.
- `DescriptorAllocatorGrowable`: `init(device, initialSets, ratios)`, `allocate(device, layout[, pNext])`, `clear_pools(device)`, `destroy_pools(device)` — see the sketch below.
- `DescriptorManager`: `gpuSceneDataLayout()`, `singleImageLayout()`.
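For reference, initializing the growable allocator might look like the sketch below; the ratio element is assumed to be a `{ VkDescriptorType, float }` pair (called `PoolSizeRatio` here), with each ratio scaled by the per-pool set count to size the pools:
```c++
DescriptorAllocatorGrowable alloc;
std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> ratios = {
    { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         3.f },
    { VK_DESCRIPTOR_TYPE_STORAGE_BUFFER,         3.f },
    { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4.f },
    { VK_DESCRIPTOR_TYPE_STORAGE_IMAGE,          1.f },
};
alloc.init(device, /*initialSets*/ 1000, ratios);
VkDescriptorSet set = alloc.allocate(device, myLayout); // myLayout from the example above
// ... write and use the set ...
alloc.clear_pools(device);   // recycle every set from this allocator
alloc.destroy_pools(device); // final teardown
```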
### Best Practices
- Use per-frame allocator (`currentFrame->_frameDescriptors`) for transient sets to avoid lifetime pitfalls.
- Keep `DescriptorLayoutBuilder` small and local; free custom layouts in your pass/module `cleanup()`.
- Tune pool ratios to match workload; see how frames are initialized in `VulkanEngine::init_frame_resources()`.
- For persistent compute resources, prefer `ComputeManager` instances over manual descriptor lifecycle.

90
docs/EngineContext.md Normal file

@@ -0,0 +1,90 @@
## Engine Context: Access to Managers + Per-Frame State
Central DI-style handle that modules use to access device/managers, per-frame state, and convenience data without depending directly on `VulkanEngine`.
### Overview
- Ownership: Holds shared owners for `DeviceManager`, `ResourceManager`, and a growable `DescriptorAllocatorGrowable` used across modules.
- Global managers: Non-owning pointers to `SwapchainManager`, `DescriptorManager` (prebuilt layouts), `SamplerManager`, and `SceneManager`.
- Per-frame state: `currentFrame` (command buffer, per-frame descriptor pool, deletion queue), `stats`, and `drawExtent`.
- Subsystems: `compute` (`ComputeManager`) and `pipelines` (`PipelineManager`) exposed for unified graphics/compute API.
- Window + content: `window` (SDL handle) and convenience meshes (`cubeMesh`, `sphereMesh`).
Context is wired in `VulkanEngine::init()` and refreshed each frame before passes execute.
### Render Graph Note
- Built-in passes no longer call `vkCmdBeginRendering` or perform image layout transitions directly.
- Use your pass `register_graph(graph, ...)` to declare attachments and resource accesses; the Render Graph inserts barriers and begins/ends dynamic rendering.
- See `docs/RenderGraph.md` for the builder API and scheduling.
### Quick Start — In a Render Pass (essentials)
```c++
void MyPass::init(EngineContext* context) {
_context = context;
// Use common descriptor layouts provided by DescriptorManager
VkDescriptorSetLayout sceneLayout = _context->getDescriptorLayouts()->gpuSceneDataLayout();
// Build a pipeline via PipelineManager (re-fetch on draw for hot reload)
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = "../shaders/fullscreen.vert.spv";
info.fragmentShaderPath = "../shaders/my_pass.frag.spv";
info.setLayouts = { sceneLayout };
info.configure = [this](PipelineBuilder& b){
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_depthtest();
b.set_color_attachment_format(_context->getSwapchain()->drawImage().imageFormat);
};
_context->pipelines->createGraphicsPipeline("my_pass", info);
}
void MyPass::execute(VkCommandBuffer cmd) {
// Fetch latest pipeline in case of hot reload
VkPipeline p{}; VkPipelineLayout l{};
_context->pipelines->getGraphics("my_pass", p, l);
// Per-frame uniform buffer via currentFrame allocator
AllocatedBuffer ubuf = _context->getResources()->create_buffer(
sizeof(GPUSceneData), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, VMA_MEMORY_USAGE_CPU_TO_GPU);
_context->currentFrame->_deletionQueue.push_function([=, this]{ _context->getResources()->destroy_buffer(ubuf); });
VmaAllocationInfo ai{}; vmaGetAllocationInfo(_context->getDevice()->allocator(), ubuf.allocation, &ai);
*static_cast<GPUSceneData*>(ai.pMappedData) = _context->getSceneData();
VkDescriptorSet set = _context->currentFrame->_frameDescriptors.allocate(
_context->getDevice()->device(), _context->getDescriptorLayouts()->gpuSceneDataLayout());
DescriptorWriter writer;
writer.write_buffer(0, ubuf.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(_context->getDevice()->device(), set);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, p);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, l, 0, 1, &set, 0, nullptr);
// Viewport/scissor from context draw extent
VkViewport vp{0,0,(float)_context->getDrawExtent().width,(float)_context->getDrawExtent().height,0.f,1.f};
vkCmdSetViewport(cmd, 0, 1, &vp);
VkRect2D sc{{0,0},{_context->getDrawExtent().width,_context->getDrawExtent().height}};
vkCmdSetScissor(cmd, 0, 1, &sc);
vkCmdDraw(cmd, 3, 1, 0, 0);
}
```
### Life Cycle
- Init: `VulkanEngine` constructs managers, initializes `_context`, then assigns `pipelines`, `compute`, `scene`, etc.
- Per-frame: Engine sets `currentFrame` and `drawExtent`, optionally triggers `PipelineManager::hotReloadChanged()`.
- Cleanup: Managers own their resources; modules should free layouts/sets they create and push per-frame deletions to `currentFrame->_deletionQueue`.
### Best Practices
- Prefer `EngineContext` accessors (`getDevice()`, `getResources()`, `getSwapchain()`) for clarity and testability.
- Re-fetch pipeline/layout by key every frame if using hot reload.
- Use `currentFrame->_frameDescriptors` for transient sets; use `context->descriptors` for longer-lived sets.
- Push resource cleanup to the frame or pass deletion queues to match lifetime with usage.

141
docs/PipelineManager.md Normal file

@@ -0,0 +1,141 @@
## Pipeline Manager: Graphics + Compute
Centralizes pipeline creation and access with a clean, uniform API. Avoids duplication, enables hot reload (graphics), and makes passes/materials simpler.
### Overview
- Graphics pipelines: Owned by `PipelineManager` with per-name registry, hot-reloaded when shaders change.
- Compute pipelines: Created through `PipelineManager` but executed by `ComputeManager` under the hood.
- Access from anywhere via `EngineContext` (`context->pipelines`).
### Quick Start — Graphics
```c++
// In pass/material init
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = "../shaders/mesh.vert.spv";
info.fragmentShaderPath = "../shaders/mesh.frag.spv";
info.setLayouts = { context->getDescriptorLayouts()->gpuSceneDataLayout(), materialLayout };
info.pushConstants = { VkPushConstantRange{ VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(GPUDrawPushConstants) } };
info.configure = [context](PipelineBuilder& b){
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_blending();
b.enable_depthtest(true, VK_COMPARE_OP_GREATER_OR_EQUAL);
b.set_color_attachment_format(context->getSwapchain()->drawImage().imageFormat);
b.set_depth_format(context->getSwapchain()->depthImage().imageFormat);
};
context->pipelines->createGraphicsPipeline("mesh.opaque", info);
// Fetch for binding
MaterialPipeline mp{};
context->pipelines->getMaterialPipeline("mesh.opaque", mp);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, mp.pipeline);
// ... bind sets using mp.layout
```
Notes:
- Graphics hot-reload runs each frame. If you cache pipeline handles, re-fetch with `getGraphics()` before use to pick up changes.
### Quick Start — Compute
Define and create a compute pipeline through the same manager:
```c++
ComputePipelineCreateInfo c{};
c.shaderPath = "../shaders/blur.comp.spv";
c.descriptorTypes = {
VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, // out image
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER // in image
};
c.pushConstantSize = sizeof(MyBlurPC);
context->pipelines->createComputePipeline("blur", c);
```
Dispatch it when needed:
```c++
ComputeDispatchInfo di = ComputeManager::createDispatch2D(width, height, 16, 16);
di.bindings.push_back(ComputeBinding::storeImage(0, outView));
di.bindings.push_back(ComputeBinding::sampledImage(1, inView, context->getSamplers()->defaultLinear()));
di.pushConstants = &pc; // of type MyBlurPC
di.pushConstantSize = sizeof(MyBlurPC);
// Insert barriers as needed (optional)
// di.imageBarriers.push_back(...);
context->pipelines->dispatchCompute(cmd, "blur", di);
```
Tips:
- Use `dispatchComputeImmediate("name", di)` for one-off operations via an internal immediate command buffer.
- For complex synchronization, populate `memoryBarriers`, `bufferBarriers`, and `imageBarriers` in `ComputeDispatchInfo`.
- Compute pipelines are not hot-reloaded yet. If needed, re-create via `createComputePipeline(...)` and re-dispatch.
### When to Create vs. Use
- Create pipelines in pass/material `init()` and keep only the string keys around if you rely on hot reload.
- Re-fetch handles right before binding each frame to pick up changes:
```c++
VkPipeline p; VkPipelineLayout l;
if (context->pipelines->getGraphics("mesh.opaque", p, l)) { /* bind & draw */ }
```
### API Summary
- Graphics
- `createGraphicsPipeline(name, GraphicsPipelineCreateInfo)`
- `getGraphics(name, VkPipeline&, VkPipelineLayout&)`
- `getMaterialPipeline(name, MaterialPipeline&)`
- Hot reload: `hotReloadChanged()` is called by the engine each frame.
- Compute
- `createComputePipeline(name, ComputePipelineCreateInfo)`
- `destroyComputePipeline(name)` / `hasComputePipeline(name)`
- `dispatchCompute(cmd, name, ComputeDispatchInfo)`
- `dispatchComputeImmediate(name, ComputeDispatchInfo)`
### Persistent Compute Resources (Instances)
For long-lived compute workloads, create a compute instance that owns its descriptor set and (optionally) its resources.
```c++
// 1) Ensure the pipeline exists
ComputePipelineCreateInfo c{}; c.shaderPath = "../shaders/work.comp.spv"; c.descriptorTypes = { VK_DESCRIPTOR_TYPE_STORAGE_IMAGE };
context->pipelines->createComputePipeline("work", c);
// 2) Create an instance bound to the pipeline
// You can go either via ComputeManager or PipelineManager
context->pipelines->createComputeInstance("work.main", "work");
// 3) Allocate persistent resources and bind to instance
auto img = context->pipelines->createAndBindComputeStorageImage("work.main", 0,
VkExtent3D{width, height, 1}, VK_FORMAT_R8G8B8A8_UNORM);
// 4) Optionally add more bindings (buffers, sampled images, etc.)
auto buf = context->pipelines->createAndBindComputeStorageBuffer("work.main", 1, size);
// or reference external resources
context->pipelines->setComputeInstanceStorageImage("work.main", 2, someView);
// 5) Update and dispatch repeatedly (bindings persist)
ComputeDispatchInfo di = ComputeManager::createDispatch2D(width, height);
di.pushConstants = &myPC; di.pushConstantSize = sizeof(myPC);
context->pipelines->dispatchComputeInstance(cmd, "work.main", di);
// 6) Destroy when no longer needed
context->pipelines->destroyComputeInstance("work.main");
```
Notes:
- Instances keep their descriptor set and binding specification; you can modify bindings via `setInstance*` and call `dispatchInstance()` without respecifying them each frame.
- Owned images/buffers created via `createAndBind*` are automatically destroyed when the instance is destroyed or on engine cleanup.
- Descriptor sets are allocated from a growable pool and are freed when the compute manager is cleaned up.
### Best Practices
- Keep descriptor set layouts owned by the module that defines resource interfaces (e.g., material or pass). Pipelines/layouts created by the manager are managed by the manager.
- Prefer pipeline keys over cached handles to benefit from hot reload.
- Encapsulate fixed-function state in `GraphicsPipelineCreateInfo::configure` lambdas to keep pass code tidy.

127
docs/RenderGraph.md Normal file

@@ -0,0 +1,127 @@
## Render Graph: Per-Frame Scheduling, Barriers, and Dynamic Rendering
Lightweight render graph that builds a per-frame DAG from pass declarations, computes the necessary resource barriers/layout transitions, and records passes with dynamic rendering when attachments are declared.
### Why
- Centralize synchronization and image layout transitions across passes.
- Make passes declarative: author declares reads/writes; the graph inserts barriers and begins/ends rendering.
- Keep existing pass classes (`IRenderPass`) while migrating execution to the graph.
### High-Level Flow
- Engine creates the graph each frame and imports swapchain/GBuffer images: `src/core/vk_engine.cpp:303`.
- Each pass registers its work by calling `register_graph(graph, ...)` and declaring resources via a builder.
- The graph appends a present chain (copy HDR `drawImage` → swapchain, then transition to `PRESENT`), optionally inserting ImGui before present.
- `compile()` topologically sorts passes by data dependencies (read/write) and computes per-pass barriers.
- `execute(cmd)` emits barriers, begins dynamic rendering if attachments were declared, calls the pass record lambda, and ends rendering.
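Put together, the per-frame wiring looks roughly like the sketch below. The graph constructor signature, the engine-side member names, each pass's exact `register_graph(...)` parameters, and the type of the present-chain hook are assumptions; the real code lives in `src/core/vk_engine.cpp`.
```c++
RenderGraph graph{_context};                                   // constructor signature assumed
RGImageHandle draw  = graph.import_draw_image();
RGImageHandle depth = graph.import_depth_image();
RGImageHandle swap  = graph.import_swapchain_image(swapchainImageIndex);

_resources->register_upload_pass(graph, *_context->currentFrame); // batched uploads first
_myPass->register_graph(&graph, draw, depth);                      // each pass declares reads/writes
graph.add_present_chain(draw, swap, /*appendExtra*/ {});           // copy draw -> swapchain, then PRESENT

graph.compile();    // topo-sort + derive per-pass barriers
graph.execute(cmd); // emit barriers, begin/end rendering, call record lambdas
```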
### Core API
- `RenderGraph::add_pass(name, RGPassType type, BuildCallback build, RecordCallback record)`
- Declare image/buffer accesses and attachments inside `build` using `RGPassBuilder`.
- Do your actual rendering/copies in `record` using resolved Vulkan objects from `RGPassResources`.
- See: `src/render/rg_graph.h:36`, `src/render/rg_graph.cpp:51`.
- `RenderGraph::compile()` → builds ordering and per-pass `Vk*MemoryBarrier2` lists. See `src/render/rg_graph.cpp:83`.
- `RenderGraph::execute(cmd)` → emits barriers and dynamic rendering begin/end. See `src/render/rg_graph.cpp:592`.
- Import helpers for engine images: `import_draw_image()`, `import_depth_image()`, `import_gbuffer_*()`, `import_swapchain_image(index)`. See `src/render/rg_graph.cpp:740`.
- Present chain: `add_present_chain(draw, swapchain, appendExtra)` inserts Copy→Present passes and lets you inject extra passes (e.g., ImGui) in between. See `src/render/rg_graph.cpp:705`.
### Declaring a Pass
Use `register_graph(...)` on your pass to declare resources and record work. The graph handles transitions and dynamic rendering.
```c++
void MyPass::register_graph(RenderGraph* graph,
RGImageHandle draw,
RGImageHandle depth) {
graph->add_pass(
"MyPass",
RGPassType::Graphics,
// Build: declare resources + attachments
[draw, depth](RGPassBuilder& b, EngineContext*) {
b.read(draw, RGImageUsage::SampledFragment); // example read
b.write_color(draw); // render target
b.write_depth(depth, /*clear*/ false); // depth test
},
// Record: issue Vulkan commands (no begin/end rendering needed)
[this, draw](VkCommandBuffer cmd, const RGPassResources& res, EngineContext* ctx) {
VkPipeline p{}; VkPipelineLayout l{};
ctx->pipelines->getGraphics("my_pass", p, l); // hot-reload safe
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, p);
VkViewport vp{0,0,(float)ctx->getDrawExtent().width,(float)ctx->getDrawExtent().height,0,1};
vkCmdSetViewport(cmd, 0, 1, &vp);
VkRect2D sc{{0,0}, ctx->getDrawExtent()};
vkCmdSetScissor(cmd, 0, 1, &sc);
vkCmdDraw(cmd, 3, 1, 0, 0);
}
);
}
```
### Builder Reference (`RGPassBuilder`)
- Images
- `read(RGImageHandle, RGImageUsage)` → sample/read usage for this pass.
- `write(RGImageHandle, RGImageUsage)` → write usage (compute/storage/transfer).
- `write_color(RGImageHandle, bool clearOnLoad=false, VkClearValue clear={})` → declares a color attachment.
- `write_depth(RGImageHandle, bool clearOnLoad=false, VkClearValue clear={})` → declares a depth attachment.
- Buffers
- `read_buffer(RGBufferHandle, RGBufferUsage)` / `write_buffer(RGBufferHandle, RGBufferUsage)`.
- Convenience import: `read_buffer(VkBuffer, RGBufferUsage, size, name)` and `write_buffer(VkBuffer, ...)` dedup by raw handle.
See `src/render/rg_builder.h:39` and impl in `src/render/rg_builder.cpp:20`.
### Resource Model (`RGResourceRegistry`)
- Imported vs transient resources are tracked uniformly with lifetime indices (`firstUse/lastUse`).
- Imports are deduplicated by `VkImage`/`VkBuffer` and keep initial layout/stage/access as the starting state.
- Transients are created via `ResourceManager` and auto-destroyed at end of frame using the frame deletion queue.
- See `src/render/rg_resources.h:11` and `src/render/rg_resources.cpp:1`.
### Synchronization and Layouts
- For each pass, `compile()` compares previous state with desired usage and, if needed, adds a pre-pass barrier (a concrete example follows the usage tables below):
- Images: `VkImageMemoryBarrier2` with stage/access/layout derived from `RGImageUsage`.
- Buffers: `VkBufferMemoryBarrier2` with stage/access derived from `RGBufferUsage`.
- Initial state comes from the imported descriptor; if unknown, buffers default to `TOP_OF_PIPE`.
- Format/usage checks:
- Warns if binding a depth format as color (and vice versa).
- Warns if a transient resource is used with flags it wasn't created with.
Image usage → layout/stage examples (subset):
- `SampledFragment` → `SHADER_READ_ONLY_OPTIMAL`, `FRAGMENT_SHADER`.
- `ColorAttachment` → `COLOR_ATTACHMENT_OPTIMAL`, `COLOR_ATTACHMENT_OUTPUT` (read|write).
- `DepthAttachment` → `DEPTH_ATTACHMENT_OPTIMAL`, `EARLY|LATE_FRAGMENT_TESTS`.
- `TransferDst` → `TRANSFER_DST_OPTIMAL`, `TRANSFER`.
- `Present` → `PRESENT_SRC_KHR`, `BOTTOM_OF_PIPE`.
Buffer usage → stage/access examples:
- `IndexRead` → `INDEX_INPUT`, `INDEX_READ`.
- `VertexRead` → `VERTEX_INPUT`, `VERTEX_ATTRIBUTE_READ`.
- `UniformRead` → `ALL_GRAPHICS|COMPUTE`, `UNIFORM_READ`.
- `StorageReadWrite` → `COMPUTE|FRAGMENT`, `SHADER_STORAGE_READ|WRITE`.
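As a concrete illustration of the mappings above, the transition the graph derives when an image previously read as `SampledFragment` is next used as a `ColorAttachment` is roughly equivalent to the hand-written barrier below (a sketch, not the engine's actual code; `image` and `cmd` stand for the resolved `VkImage` and command buffer):
```c++
VkImageMemoryBarrier2 barrier{ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2 };
barrier.srcStageMask  = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
barrier.srcAccessMask = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT;
barrier.dstStageMask  = VK_PIPELINE_STAGE_2_COLOR_ATTACHMENT_OUTPUT_BIT;
barrier.dstAccessMask = VK_ACCESS_2_COLOR_ATTACHMENT_READ_BIT |
                        VK_ACCESS_2_COLOR_ATTACHMENT_WRITE_BIT;
barrier.oldLayout     = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.newLayout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.image         = image;
barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

VkDependencyInfo dep{ .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO };
dep.imageMemoryBarrierCount = 1;
dep.pImageMemoryBarriers    = &barrier;
vkCmdPipelineBarrier2(cmd, &dep);
```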
### Built-In Pass Wiring (Current)
- Resource uploads (if any) → Background (compute) → Geometry (GBuffer) → Lighting (deferred) → Transparent → CopyToSwapchain → ImGui → PreparePresent.
- See registrations: `src/core/vk_engine.cpp:321`–`src/core/vk_engine.cpp:352`.
### Notes & Limits
- No aliasing or transient pooling yet; images created via `create_*` are released at end of frame.
- Graph scheduling uses a topological order by data dependency; it does not parallelize across queues.
- Load/store control for attachments is minimal (`clearOnLoad`, `store` on `RGAttachmentInfo`).
- Render area is the min of all declared attachment extents and `EngineContext::drawExtent`.
### Debugging
- Each pass is wrapped with a debug label (`RG: <name>`).
- Compile prints warnings for suspicious usages or format mismatches.

90
docs/RenderPasses.md Normal file

@@ -0,0 +1,90 @@
## Render Passes: Background → Geometry → Lighting → Transparent → ImGui
Pass classes (`IRenderPass`) define initialization and recording logic, but execution is now driven by the Render Graph. Each pass exposes a `register_graph(...)` method to declare dependencies and render targets; the graph handles barriers, layouts, and dynamic rendering.
### Overview
- Interface: Each pass implements `IRenderPass { init(context); execute(cmd); cleanup(); getName(); }`. Today, `execute()` is unused for built-in passes; work is recorded via the Render Graph record callback.
- Manager: `RenderPassManager::init()` creates and stores built-in passes: `BackgroundPass` (compute), `GeometryPass` (G-Buffer), `LightingPass` (deferred), `TransparentPass`, plus optional `ImGuiPass`.
- Render graph: Passes call `register_graph(graph, ...)` to declare image/buffer access and attachments. The graph inserts barriers and begins/ends dynamic rendering.
- Shared targets: Passes coordinate through `SwapchainManager` images: `drawImage`, `gBufferPosition/Normal/Albedo`, `depthImage` (imported into the graph each frame).
- Hot reload: Fetch graphics pipeline/layout by key each frame through `PipelineManager` in the record callback.
### Quick Start — Add a New Pass (Render Graph)
```c++
class MyPass : public IRenderPass {
public:
void init(EngineContext* ctx) override {
_ctx = ctx;
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = _ctx->getAssets()->shaderPath("fullscreen.vert.spv");
info.fragmentShaderPath = _ctx->getAssets()->shaderPath("my_pass.frag.spv");
info.setLayouts = { _ctx->getDescriptorLayouts()->gpuSceneDataLayout() };
info.configure = [this](PipelineBuilder& b){
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none(); b.disable_depthtest();
b.set_color_attachment_format(_ctx->getSwapchain()->drawImage().imageFormat);
};
_ctx->pipelines->createGraphicsPipeline("my_pass", info);
}
void register_graph(RenderGraph* graph, RGImageHandle draw, RGImageHandle depth) {
graph->add_pass(
"MyPass",
RGPassType::Graphics,
[draw, depth](RGPassBuilder& b, EngineContext*) {
b.write_color(draw);
b.write_depth(depth, false);
},
[this](VkCommandBuffer cmd, const RGPassResources&, EngineContext* ctx){
VkPipeline p{}; VkPipelineLayout l{};
ctx->pipelines->getGraphics("my_pass", p, l);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, p);
VkViewport vp{0,0,(float)ctx->getDrawExtent().width,(float)ctx->getDrawExtent().height,0,1};
vkCmdSetViewport(cmd, 0, 1, &vp);
VkRect2D sc{{0,0}, ctx->getDrawExtent()};
vkCmdSetScissor(cmd, 0, 1, &sc);
vkCmdDraw(cmd, 3, 1, 0, 0);
}
);
}
void execute(VkCommandBuffer) override {} // unused with Render Graph
void cleanup() override {}
const char* getName() const override { return "MyPass"; }
private:
EngineContext* _ctx{};
};
// Register in RenderPassManager::init()
auto myPass = std::make_unique<MyPass>();
myPass->init(context);
addPass(std::move(myPass));
```
### Built-in Passes
- Background (compute): Declares `ComputeWrite(drawImage)` and dispatches a selected effect instance.
- Geometry (G-Buffer): Declares 3 color attachments and `DepthAttachment`, plus buffer reads for shared index/vertex buffers.
- Lighting (deferred): Reads the G-Buffer as sampled images and writes to `drawImage`.
- Transparent (forward): Writes to `drawImage` with depth test against `depthImage` after lighting.
- ImGui: Inserted just before present to draw on the swapchain image.
### API Summary
- `RenderPassManager::addPass(unique_ptr<IRenderPass>)`: Register a new pass (storage/ownership only).
- `RenderPassManager::setImGuiPass(...)`: Configure the optional ImGui pass.
- `IRenderPass::register_graph(...)` (per pass class): Declare resources and recording callbacks for the Render Graph.
### Tips
- Don't call `vkCmdBeginRendering` or add manual transitions for declared attachments; the Render Graph handles it.
- Re-fetch pipeline and layout by key each frame to pick up hot-reloaded shaders.
- Allocate transient descriptor sets from `currentFrame->_frameDescriptors`; free pass-owned layouts in `cleanup()`.
- Use `EngineContext::getDrawExtent()` for viewport/scissor.
See also: `docs/RenderGraph.md` for the builder API and synchronization details.

49
docs/ResourceManager.md Normal file

@@ -0,0 +1,49 @@
## Resource Manager: Buffers, Images, Uploads, and Lifetime
Central allocator and uploader built on VMA. Provides creation helpers, an immediate submit path, and a deferred upload queue that is converted into a Render Graph transfer pass each frame.
### Responsibilities
- Create/destroy GPU buffers and images (with mapped memory for CPU-to-GPU when requested).
- Stage and upload mesh/texture data either immediately or via a per-frame deferred path.
- Integrate with `FrameResources` deletion queues to match lifetimes to the frame.
- Expose a Render Graph pass that batches all pending uploads.
### Key APIs (src/core/vk_resource.h)
- Creation
- `AllocatedBuffer create_buffer(size, usage, memUsage)`
- `AllocatedImage create_image(extent3D, format, usage[, mipmapped])`
- `AllocatedImage create_image(data, extent3D, format, usage[, mipmapped])`
- Destroy with `destroy_buffer`, `destroy_image`.
- Uploads
- Deferred mode toggle: `set_deferred_uploads(bool)`; when true, mesh/texture uploads enqueue staging work.
- Query queues: `pending_buffer_uploads()`, `pending_image_uploads()`; clear via `clear_pending_uploads()`.
- Immediate path: `process_queued_uploads_immediate()` or `immediate_submit(lambda)` for custom commands.
- Render Graph integration: `register_upload_pass(RenderGraph&, FrameResources&)` builds a single `Transfer` pass that:
- Imports staging buffers and destination resources into the graph.
- Inserts the appropriate `TransferSrc/Dst` declarations.
- Records `vkCmdCopyBuffer` / `vkCmdCopyBufferToImage` and optional mip generation.
- Schedules deletion of staging buffers at end of frame.
- Mesh upload convenience
- `GPUMeshBuffers uploadMesh(span<uint32_t> indices, span<Vertex> vertices)` — returns device buffers and device address.
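A minimal usage sketch; `indices`/`vertices` are assumed to be vectors filled elsewhere, and the `GPUMeshBuffers` member names are assumed to mirror the `RenderObject` fields described in `docs/Scene.md`:
```c++
auto* res = context->getResources();
res->set_deferred_uploads(true); // queue staging copies instead of submitting immediately
GPUMeshBuffers mesh = res->uploadMesh(indices, vertices);
// The queued copies are recorded later by register_upload_pass() during frame build;
// mesh.vertexBufferAddress is the device address consumed by the mesh vertex shader.
```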
### Per-Frame Lifetime
- `FrameResources::_deletionQueue` owns per-frame cleanups for transient buffers/images created during rendering passes.
- The upload pass registers cleanups for staging buffers on the frame queue.
### Render Graph Interaction
- `register_upload_pass` is called during frame build before other passes (see `src/core/vk_engine.cpp:315`).
- It uses graph `import_buffer` / `import_image` to deduplicate external resources and attach initial stage/layout.
- Barriers and final layouts for uploaded images are handled in the pass recording (`generate_mipmaps` path transitions to `SHADER_READ_ONLY_OPTIMAL`).
### Guidelines
- Prefer deferred uploads (`set_deferred_uploads(true)`) for frame-coherent synchronization under the Render Graph.
- For tooling and one-off setup, use `immediate_submit(lambda)` to avoid per-frame queuing (sketch below).
- When creating transient images/buffers used only inside a pass, prefer the Render Graph's `create_*` so destruction is automatic at frame end.
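A sketch of the one-off path, assuming `immediate_submit` hands the lambda a recording `VkCommandBuffer` and blocks until the submit completes; `staging`, `gpuBuffer`, and `dataSize` are illustrative:
```c++
context->getResources()->immediate_submit([&](VkCommandBuffer cmd) {
    VkBufferCopy copy{};
    copy.srcOffset = 0;
    copy.dstOffset = 0;
    copy.size = dataSize;
    vkCmdCopyBuffer(cmd, staging.buffer, gpuBuffer.buffer, 1, &copy);
});
// The submit has completed, so the staging buffer can be released right away.
context->getResources()->destroy_buffer(staging);
```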

54
docs/Scene.md Normal file

@@ -0,0 +1,54 @@
## Scene System: Cameras, DrawContext, and Instances
Thin scene layer that produces `RenderObject`s for the renderer. It gathers opaque/transparent surfaces, maintains the main camera, and exposes simple runtime instance APIs.
### Components
- `SceneManager` (src/scene/vk_scene.h/.cpp)
- Owns the main `Camera`, `GPUSceneData`, and `DrawContext`.
- Loads GLTF scenes via `AssetManager`/`LoadedGLTF` and creates dynamic mesh/GLTF instances.
- Updates per-frame transforms, camera, and `GPUSceneData` (`view/proj/viewproj`, sun/ambient).
- `DrawContext`
- Two lists: `OpaqueSurfaces` and `TransparentSurfaces` of `RenderObject`.
- Populated by scene graph traversal and dynamic instances each frame.
- `RenderObject`
- Geometry: `indexBuffer`, `vertexBuffer` (for RG tracking), `vertexBufferAddress` (device address used by shaders).
- Material: `MaterialInstance* material` with bound set and pipeline.
- Transform and bounds for optional culling.
### Frame Flow
1. `SceneManager::update_scene()` clears the draw lists and rebuilds them by drawing all active scene/instance nodes.
2. Renderer consumes the lists (see the sketch after this list):
- Geometry pass sorts opaque by material and index buffer to improve locality.
- Transparent pass sorts back-to-front against camera and blends to the HDR target.
3. Uniforms: Passes allocate a small per-frame UBO (`GPUSceneData`) and bind it via a shared layout.
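A rough sketch of step 2 for the opaque list (not the engine's exact geometry-pass code): `cmd`, the bound pipeline layout `l`, and a `DrawContext draw` are assumed to be in scope, material set/pipeline binding is elided, and the push-constant and `RenderObject` count/offset field names are assumptions that mirror `shaders/mesh.vert`:
```c++
for (const RenderObject& ro : draw.OpaqueSurfaces) {
    vkCmdBindIndexBuffer(cmd, ro.indexBuffer, 0, VK_INDEX_TYPE_UINT32);
    GPUDrawPushConstants pc{};
    pc.worldMatrix  = ro.transform;            // field names assumed
    pc.vertexBuffer = ro.vertexBufferAddress;  // read via buffer_reference in the shader
    vkCmdPushConstants(cmd, l, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(pc), &pc);
    vkCmdDrawIndexed(cmd, ro.indexCount, 1, ro.firstIndex, 0, 0); // counts assumed on RenderObject
}
```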
### Sorting / Culling
- Opaque (geometry): stable sort by `material` then `indexBuffer` (see `src/render/vk_renderpass_geometry.cpp`).
- Transparent: sort by camera-space depth far→near (see `src/render/vk_renderpass_transparent.cpp`).
- An example frustum test exists in `vk_renderpass_geometry.cpp` (`is_visible`) and can be enabled to cull meshes.
### Dynamic Instances
- Mesh instances
- `addMeshInstance(name, mesh, transform)`, `removeMeshInstance(name)`, `clearMeshInstances()` — example after this list.
- Useful for spawning primitives or asset meshes at runtime.
- GLTF instances
- `addGLTFInstance(name, LoadedGLTF, transform)`, `removeGLTFInstance(name)`, `clearGLTFInstances()`.
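A small usage sketch using the convenience `cubeMesh` exposed on `EngineContext`; the instance name and transforms are illustrative, and re-adding under the same name is assumed to overwrite the instance (mirroring the glTF example in `docs/asset_manager.md`):
```c++
context->scene->addMeshInstance("demo.cube", context->cubeMesh,
    glm::translate(glm::mat4(1.f), glm::vec3(0.f, 1.f, -4.f)));
// Move it by re-adding under the same name (assumed overwrite semantics)
context->scene->addMeshInstance("demo.cube", context->cubeMesh,
    glm::translate(glm::mat4(1.f), glm::vec3(0.f, 2.f, -4.f)));
context->scene->removeMeshInstance("demo.cube");
```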
### GPU Scene Data
- `GPUSceneData` carries camera matrices and lighting constants for the frame.
- Passes map and fill it into a per-frame UBO, bindable with `DescriptorManager::gpuSceneDataLayout()`.
### Tips
- Treat `DrawContext` as immutable during rendering; build it fully in `update_scene()`.
- Keep `RenderObject` small; use device addresses for vertex data to avoid per-draw vertex buffer binds.
- For custom sorting/culling, modify only the scene layer; render passes stay simple.

159
docs/asset_manager.md Normal file

@@ -0,0 +1,159 @@
## Asset Manager
Centralized asset path resolution, glTF loading, and runtime mesh creation (including simple materials and primitives). Avoids scattered relative paths and duplicates by resolving roots at runtime and caching results.
### Path Resolution
- Environment root: Honors `VKG_ASSET_ROOT` (expected to contain `assets/` and/or `shaders/`).
- Upward search: If unset, searches upward from the current directory for folders named `assets` and `shaders`.
- Fallbacks: Tries `./assets`, `../assets` and `./shaders`, `../shaders`.
- Methods: `shaderPath(name)`, `assetPath(name)`, and `modelPath(name)` (alias of `assetPath`). Relative or absolute input is returned if already valid; otherwise resolution is attempted as above.
Access the manager anywhere via `EngineContext`:
```c++
auto *assets = context->getAssets();
auto spv = assets->shaderPath("mesh.vert.spv");
auto chairPath = assets->modelPath("models/chair.glb");
```
### API Summary
- Paths
- `std::string shaderPath(std::string_view)`
- `std::string assetPath(std::string_view)` / `modelPath(std::string_view)`
- glTF
- `std::optional<std::shared_ptr<LoadedGLTF>> loadGLTF(std::string_view nameOrPath)` — cached by canonical absolute path
- Meshes
- `std::shared_ptr<MeshAsset> createMesh(const MeshCreateInfo &info)`
- `std::shared_ptr<MeshAsset> createMesh(const std::string &name, std::span<Vertex> v, std::span<uint32_t> i, std::shared_ptr<GLTFMaterial> material = {})`
- `std::shared_ptr<MeshAsset> getMesh(const std::string &name) const`
- `std::shared_ptr<MeshAsset> getPrimitive(std::string_view name) const` (returns existing default primitives if created)
- `bool removeMesh(const std::string &name)`
- `void cleanup()` — releases meshes, material buffers, and any images owned by the manager
### Mesh Creation Model
Use either the convenience descriptor (`MeshCreateInfo`) or the direct overload with vertex/index spans.
```c++
struct AssetManager::MaterialOptions {
std::string albedoPath; // resolved through AssetManager
std::string metalRoughPath; // resolved through AssetManager
bool albedoSRGB = true; // VK_FORMAT_R8G8B8A8_SRGB when true
bool metalRoughSRGB = false; // VK_FORMAT_R8G8B8A8_UNORM when false
GLTFMetallic_Roughness::MaterialConstants constants{};
MaterialPass pass = MaterialPass::MainColor; // or Transparent
};
struct AssetManager::MeshGeometryDesc {
enum class Type { Provided, Cube, Sphere };
Type type = Type::Provided;
std::span<Vertex> vertices{}; // when Provided
std::span<uint32_t> indices{}; // when Provided
int sectors = 16; // for Sphere
int stacks = 16; // for Sphere
};
struct AssetManager::MeshMaterialDesc {
enum class Kind { Default, Textured };
Kind kind = Kind::Default;
MaterialOptions options{}; // used when Textured
};
struct AssetManager::MeshCreateInfo {
std::string name; // cache key; reused if already created
MeshGeometryDesc geometry; // Provided / Cube / Sphere
MeshMaterialDesc material; // Default or Textured
};
```
Behavior and lifetime:
- Default material: If no material is given, a white material is created (2× white textures, per-mesh UBO with sane defaults).
- Textured material: When `MeshMaterialDesc::Textured`, images are loaded via `stb_image` and uploaded; per-mesh UBO is allocated and filled from `constants`.
- Ownership: Material buffers and any images created by the AssetManager are tracked and destroyed on `removeMesh(name)` or `cleanup()`.
- Caching: Meshes are cached by `name`. Re-creating with the same name returns the existing mesh (no new uploads).
### Examples
Create a simple plane and render it (default material):
```c++
std::vector<Vertex> v = {
{{-0.5f, 0.0f, -0.5f}, 0.0f, {0,1,0}, 0.0f, {1,1,1,1}},
{{ 0.5f, 0.0f, -0.5f}, 1.0f, {0,1,0}, 0.0f, {1,1,1,1}},
{{-0.5f, 0.0f, 0.5f}, 0.0f, {0,1,0}, 1.0f, {1,1,1,1}},
{{ 0.5f, 0.0f, 0.5f}, 1.0f, {0,1,0}, 1.0f, {1,1,1,1}},
};
std::vector<uint32_t> i = { 0,1,2, 2,1,3 };
auto plane = ctx->getAssets()->createMesh("plane", v, i); // default white material
glm::mat4 xform = glm::scale(glm::mat4(1.f), glm::vec3(10.f, 1.f, 10.f));
ctx->scene->addMeshInstance("ground", plane, xform);
```
Generate primitives via `MeshCreateInfo`:
```c++
AssetManager::MeshCreateInfo ci{};
ci.name = "cubeA";
ci.geometry.type = AssetManager::MeshGeometryDesc::Type::Cube;
ci.material.kind = AssetManager::MeshMaterialDesc::Kind::Default;
auto cube = ctx->getAssets()->createMesh(ci);
ctx->scene->addMeshInstance("cube.instance", cube,
glm::translate(glm::mat4(1.f), glm::vec3(-2.f, 0.f, -2.f)));
AssetManager::MeshCreateInfo si{};
si.name = "sphere48x24";
si.geometry.type = AssetManager::MeshGeometryDesc::Type::Sphere;
si.geometry.sectors = 48; si.geometry.stacks = 24;
si.material.kind = AssetManager::MeshMaterialDesc::Kind::Default;
auto sphere = ctx->getAssets()->createMesh(si);
ctx->scene->addMeshInstance("sphere.instance", sphere,
glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, -2.f)));
```
Textured primitive (albedo + metal-rough):
```c++
AssetManager::MeshCreateInfo ti{};
ti.name = "ground.textured";
// provide vertices/indices for a plane (see first example)
ti.geometry.type = AssetManager::MeshGeometryDesc::Type::Provided;
ti.geometry.vertices = std::span<Vertex>(v.data(), v.size());
ti.geometry.indices = std::span<uint32_t>(i.data(), i.size());
ti.material.kind = AssetManager::MeshMaterialDesc::Kind::Textured;
ti.material.options.albedoPath = "textures/ground_albedo.png"; // sRGB
ti.material.options.metalRoughPath = "textures/ground_mr.png"; // UNORM
// ti.material.options.pass = MaterialPass::Transparent; // optional
auto texturedPlane = ctx->getAssets()->createMesh(ti);
glm::mat4 tx = glm::scale(glm::mat4(1.f), glm::vec3(10.f, 1.f, 10.f));
ctx->scene->addMeshInstance("ground.textured", texturedPlane, tx);
```
Textured cube/sphere via options is analogous — set `geometry.type` to `Cube` or `Sphere` and fill `material.options`.
Runtime glTF spawning:
```c++
auto chair = ctx->getAssets()->loadGLTF("models/chair.glb");
if (chair)
{
glm::mat4 t = glm::translate(glm::mat4(1.f), glm::vec3(0.f, 0.f, -3.f));
ctx->scene->addGLTFInstance("chair01", *chair, t);
}
// Move / overwrite
ctx->scene->addGLTFInstance("chair01", *chair,
glm::translate(glm::mat4(1.f), glm::vec3(0.f, 0.5f, -3.f)));
// Remove
ctx->scene->removeGLTFInstance("chair01");
```
### Notes
- Default primitives: The engine creates default Cube/Sphere meshes via `AssetManager` and registers them as dynamic scene instances.
- Reuse by name: `createMesh("name", ...)` returns the cached mesh if it already exists. Use a unique name or call `removeMesh(name)` to replace.
- sRGB/UNORM: Albedo is sRGB by default, metal-rough is UNORM by default. Adjust via `MaterialOptions`.
- Hot reload: Shaders are resolved via `shaderPath()`; pipeline hot reload is handled by the pipeline manager, not the AssetManager.
- Normal maps: Not wired into the default GLTF PBR material in this branch. Adding them would require descriptor and shader updates.


@@ -0,0 +1,164 @@
#version 450
#extension GL_GOOGLE_include_directive : require
#include "input_structures.glsl"
layout(location=0) in vec2 inUV;
layout(location=0) out vec4 outColor;
layout(set=1, binding=0) uniform sampler2D posTex;
layout(set=1, binding=1) uniform sampler2D normalTex;
layout(set=1, binding=2) uniform sampler2D albedoTex;
layout(set=2, binding=0) uniform sampler2D shadowTex;
const float PI = 3.14159265359;
float hash12(vec2 p)
{
vec3 p3 = fract(vec3(p.xyx) * 0.1031);
p3 += dot(p3, p3.yzx + 33.33); return fract((p3.x + p3.y) * p3.z);
}
const vec2 POISSON_16[16] = vec2[16](
vec2(0.2852, -0.1883), vec2(-0.1464, 0.2591),
vec2(-0.3651, -0.0974), vec2(0.0901, 0.3807),
vec2(0.4740, 0.0679), vec2(-0.0512, -0.4466),
vec2(-0.4497, 0.1673), vec2(0.3347, 0.3211),
vec2(0.1948, -0.4196), vec2(-0.2919, -0.3291),
vec2(-0.0763, 0.4661), vec2(0.4421, -0.2217),
vec2(0.0281, -0.2468), vec2(-0.2104, 0.0573),
vec2(0.1197, 0.0779), vec2(-0.0905, -0.1203)
);
float calcShadowVisibility(vec3 worldPos, vec3 N, vec3 L)
{
vec4 lclip = sceneData.lightViewProj * vec4(worldPos, 1.0);
vec3 ndc = lclip.xyz / lclip.w;
vec2 suv = ndc.xy * 0.5 + 0.5;
if (any(lessThan(suv, vec2(0.0))) || any(greaterThan(suv, vec2(1.0))))
return 1.0;
float current = clamp(ndc.z, 0.0, 1.0);
float NoL = max(dot(N, L), 0.0);
float slopeBias = max(0.0006 * (1.0 - NoL), 0.0001);
float dzdx = dFdx(current);
float dzdy = dFdy(current);
float ddz = max(abs(dzdx), abs(dzdy));
float bias = slopeBias + ddz * 0.75;
ivec2 dim = textureSize(shadowTex, 0);
vec2 texelSize = 1.0 / vec2(dim);
float baseRadius = 1.25;
float radius = mix(baseRadius, baseRadius * 4.0, current);
float ang = hash12(suv * 4096.0) * 6.2831853;
vec2 r = vec2(cos(ang), sin(ang));
mat2 rot = mat2(r.x, -r.y, r.y, r.x);
const int TAP_COUNT = 16;
float occluded = 0.0;
float wsum = 0.0;
for (int i = 0; i < TAP_COUNT; ++i)
{
vec2 pu = rot * POISSON_16[i];
vec2 off = pu * radius * texelSize;
float pr = length(pu);
float w = 1.0 - smoothstep(0.0, 0.65, pr);
float mapD = texture(shadowTex, suv + off).r;
float occ = step(current + bias, mapD);
occluded += occ * w;
wsum += w;
}
float shadow = (wsum > 0.0) ? (occluded / wsum) : 0.0;
return 1.0 - shadow;
}
vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
float DistributionGGX(vec3 N, vec3 H, float roughness)
{
float a = roughness * roughness;
float a2 = a * a;
float NdotH = max(dot(N, H), 0.0);
float NdotH2 = NdotH * NdotH;
float num = a2;
float denom = (NdotH2 * (a2 - 1.0) + 1.0);
denom = PI * denom * denom;
return num / max(denom, 0.001);
}
float GeometrySchlickGGX(float NdotV, float roughness)
{
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float denom = NdotV * (1.0 - k) + k;
return NdotV / max(denom, 0.001);
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
float ggx2 = GeometrySchlickGGX(max(dot(N, V), 0.0), roughness);
float ggx1 = GeometrySchlickGGX(max(dot(N, L), 0.0), roughness);
return ggx1 * ggx2;
}
void main(){
vec4 posSample = texture(posTex, inUV);
if (posSample.w == 0.0)
{
outColor = vec4(0.0);
return;
}
vec3 pos = posSample.xyz;
vec4 normalSample = texture(normalTex, inUV);
vec3 N = normalize(normalSample.xyz);
float roughness = clamp(normalSample.w, 0.04, 1.0);
vec4 albedoSample = texture(albedoTex, inUV);
vec3 albedo = albedoSample.rgb;
float metallic = clamp(albedoSample.a, 0.0, 1.0);
vec3 camPos = vec3(inverse(sceneData.view)[3]);
vec3 V = normalize(camPos - pos);
vec3 L = normalize(-sceneData.sunlightDirection.xyz);
vec3 H = normalize(V + L);
vec3 F0 = mix(vec3(0.04), albedo, metallic);
vec3 F = fresnelSchlick(max(dot(H, V), 0.0), F0);
float NDF = DistributionGGX(N, H, roughness);
float G = GeometrySmith(N, V, L, roughness);
vec3 numerator = NDF * G * F;
float denom = 4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0);
vec3 specular = numerator / max(denom, 0.001);
vec3 kS = F;
vec3 kD = (1.0 - kS) * (1.0 - metallic);
float NdotL = max(dot(N, L), 0.0);
// Shadowing (directional, reversed-Z shadow map)
float visibility = calcShadowVisibility(pos, N, L);
vec3 irradiance = sceneData.sunlightColor.rgb * sceneData.sunlightColor.a * NdotL * visibility;
vec3 color = (kD * albedo / PI + specular) * irradiance;
color += albedo * sceneData.ambientColor.rgb;
outColor = vec4(color, 1.0);
}

10
shaders/fullscreen.vert Normal file

@@ -0,0 +1,10 @@
#version 450
layout(location=0) out vec2 outUV;
void main() {
vec2 positions[3] = vec2[3](vec2(-1.0, -1.0), vec2(3.0, -1.0), vec2(-1.0, 3.0));
vec2 uvs[3] = vec2[3](vec2(0.0, 0.0), vec2(2.0, 0.0), vec2(0.0, 2.0));
gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
outUV = uvs[gl_VertexIndex];
}

26
shaders/gbuffer.frag Normal file

@@ -0,0 +1,26 @@
#version 450
#extension GL_GOOGLE_include_directive : require
#include "input_structures.glsl"
layout(location = 0) in vec3 inNormal;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inUV;
layout(location = 3) in vec3 inWorldPos;
layout(location = 0) out vec4 outPos;
layout(location = 1) out vec4 outNorm;
layout(location = 2) out vec4 outAlbedo;
void main() {
// Apply baseColor texture and baseColorFactor once
vec3 albedo = inColor * texture(colorTex, inUV).rgb * materialData.colorFactors.rgb;
// glTF metallic-roughness in G (roughness) and B (metallic)
vec2 mrTex = texture(metalRoughTex, inUV).gb;
float roughness = clamp(mrTex.x * materialData.metal_rough_factors.y, 0.04, 1.0);
float metallic = clamp(mrTex.y * materialData.metal_rough_factors.x, 0.0, 1.0);
outPos = vec4(inWorldPos, 1.0);
outNorm = vec4(normalize(inNormal), roughness);
outAlbedo = vec4(albedo, metallic);
}


@@ -0,0 +1,32 @@
#version 460
layout (local_size_x = 16, local_size_y = 16) in;
layout(rgba16f,set = 0, binding = 0) uniform image2D image;
//push constants block
layout( push_constant ) uniform constants
{
vec4 data1;
vec4 data2;
vec4 data3;
vec4 data4;
} PushConstants;
void main()
{
ivec2 texelCoord = ivec2(gl_GlobalInvocationID.xy);
ivec2 size = imageSize(image);
vec4 topColor = PushConstants.data1;
vec4 bottomColor = PushConstants.data2;
if(texelCoord.x < size.x && texelCoord.y < size.y)
{
float blend = float(texelCoord.y)/(size.y);
imageStore(image, texelCoord, mix(topColor,bottomColor, blend));
}
}


@@ -0,0 +1,20 @@
layout(set = 0, binding = 0) uniform SceneData{
mat4 view;
mat4 proj;
mat4 viewproj;
mat4 lightViewProj;
vec4 ambientColor;
vec4 sunlightDirection; //w for sun power
vec4 sunlightColor;
} sceneData;
layout(set = 1, binding = 0) uniform GLTFMaterialData{
vec4 colorFactors;
vec4 metal_rough_factors;
} materialData;
layout(set = 1, binding = 1) uniform sampler2D colorTex;
layout(set = 1, binding = 2) uniform sampler2D metalRoughTex;

88
shaders/mesh.frag Normal file

@@ -0,0 +1,88 @@
#version 450
#extension GL_GOOGLE_include_directive : require
#include "input_structures.glsl"
layout (location = 0) in vec3 inNormal;
layout (location = 1) in vec3 inColor;
layout (location = 2) in vec2 inUV;
layout (location = 3) in vec3 inWorldPos;
layout (location = 0) out vec4 outFragColor;
const float PI = 3.14159265359;
vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
float DistributionGGX(vec3 N, vec3 H, float roughness)
{
float a = roughness * roughness;
float a2 = a * a;
float NdotH = max(dot(N, H), 0.0);
float NdotH2 = NdotH * NdotH;
float num = a2;
float denom = (NdotH2 * (a2 - 1.0) + 1.0);
denom = PI * denom * denom;
return num / max(denom, 0.001);
}
float GeometrySchlickGGX(float NdotV, float roughness)
{
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float denom = NdotV * (1.0 - k) + k;
return NdotV / denom;
}
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
float ggx2 = GeometrySchlickGGX(max(dot(N, V), 0.0), roughness);
float ggx1 = GeometrySchlickGGX(max(dot(N, L), 0.0), roughness);
return ggx1 * ggx2;
}
void main()
{
// Base color with material factor and texture
vec4 baseTex = texture(colorTex, inUV);
vec3 albedo = inColor * baseTex.rgb * materialData.colorFactors.rgb;
// glTF: metallicRoughnessTexture uses G=roughness, B=metallic
vec2 mrTex = texture(metalRoughTex, inUV).gb;
float roughness = clamp(mrTex.x * materialData.metal_rough_factors.y, 0.04, 1.0);
float metallic = clamp(mrTex.y * materialData.metal_rough_factors.x, 0.0, 1.0);
vec3 N = normalize(inNormal);
vec3 camPos = vec3(inverse(sceneData.view)[3]);
vec3 V = normalize(camPos - inWorldPos);
vec3 L = normalize(-sceneData.sunlightDirection.xyz);
vec3 H = normalize(V + L);
vec3 F0 = mix(vec3(0.04), albedo, metallic);
vec3 F = fresnelSchlick(max(dot(H, V), 0.0), F0);
float NDF = DistributionGGX(N, H, roughness);
float G = GeometrySmith(N, V, L, roughness);
vec3 numerator = NDF * G * F;
float denom = 4.0 * max(dot(N, V), 0.0) * max(dot(N, L), 0.0);
vec3 specular = numerator / max(denom, 0.001);
vec3 kS = F;
vec3 kD = vec3(1.0) - kS;
kD *= 1.0 - metallic;
float NdotL = max(dot(N, L), 0.0);
vec3 irradiance = sceneData.sunlightColor.rgb * sceneData.sunlightColor.a * NdotL;
vec3 color = (kD * albedo / PI + specular) * irradiance;
color += albedo * sceneData.ambientColor.rgb;
// Alpha from baseColor texture and factor (glTF spec)
float alpha = clamp(baseTex.a * materialData.colorFactors.a, 0.0, 1.0);
outFragColor = vec4(color, alpha);
}

46
shaders/mesh.vert Normal file

@@ -0,0 +1,46 @@
#version 450
#extension GL_GOOGLE_include_directive : require
#extension GL_EXT_buffer_reference : require
#include "input_structures.glsl"
layout (location = 0) out vec3 outNormal;
layout (location = 1) out vec3 outColor;
layout (location = 2) out vec2 outUV;
layout (location = 3) out vec3 outWorldPos;
struct Vertex {
vec3 position;
float uv_x;
vec3 normal;
float uv_y;
vec4 color;
};
layout(buffer_reference, std430) readonly buffer VertexBuffer{
Vertex vertices[];
};
//push constants block
layout( push_constant ) uniform constants
{
mat4 render_matrix;
VertexBuffer vertexBuffer;
} PushConstants;
void main()
{
Vertex v = PushConstants.vertexBuffer.vertices[gl_VertexIndex];
vec4 worldPos = PushConstants.render_matrix * vec4(v.position, 1.0f);
gl_Position = sceneData.viewproj * worldPos;
outNormal = (PushConstants.render_matrix * vec4(v.normal, 0.f)).xyz;
// Pass pure vertex color; apply baseColorFactor only in fragment
outColor = v.color.xyz;
outUV.x = v.uv_x;
outUV.y = v.uv_y;
outWorldPos = worldPos.xyz;
}

4
shaders/shadow.frag Normal file

@@ -0,0 +1,4 @@
#version 450
void main() {}

29
shaders/shadow.vert Normal file

@@ -0,0 +1,29 @@
#version 450
#extension GL_GOOGLE_include_directive : require
#extension GL_EXT_buffer_reference : require
#include "input_structures.glsl"
struct Vertex {
vec3 position; float uv_x;
vec3 normal; float uv_y;
vec4 color;
};
layout(buffer_reference, std430) readonly buffer VertexBuffer{
Vertex vertices[];
};
layout(push_constant) uniform PushConsts {
mat4 render_matrix;
VertexBuffer vertexBuffer;
} PC;
void main()
{
Vertex v = PC.vertexBuffer.vertices[gl_VertexIndex];
vec4 worldPos = PC.render_matrix * vec4(v.position, 1.0);
gl_Position = sceneData.lightViewProj * worldPos;
}

83
shaders/sky.comp Normal file

@@ -0,0 +1,83 @@
#version 450
layout (local_size_x = 16, local_size_y = 16) in;
layout(rgba8,set = 0, binding = 0) uniform image2D image;
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// Return random noise in the range [0.0, 1.0], as a function of x.
float Noise2d( in vec2 x )
{
float xhash = cos( x.x * 37.0 );
float yhash = cos( x.y * 57.0 );
return fract( 415.92653 * ( xhash + yhash ) );
}
// Convert Noise2d() into a "star field" by stomping everything below fThreshhold to zero.
float NoisyStarField( in vec2 vSamplePos, float fThreshhold )
{
float StarVal = Noise2d( vSamplePos );
if ( StarVal >= fThreshhold )
StarVal = pow( (StarVal - fThreshhold)/(1.0 - fThreshhold), 6.0 );
else
StarVal = 0.0;
return StarVal;
}
// Stabilize NoisyStarField() by only sampling at integer values.
float StableStarField( in vec2 vSamplePos, float fThreshhold )
{
// Linear interpolation between four samples.
// Note: This approach has some visual artifacts.
// There must be a better way to "anti alias" the star field.
float fractX = fract( vSamplePos.x );
float fractY = fract( vSamplePos.y );
vec2 floorSample = floor( vSamplePos );
float v1 = NoisyStarField( floorSample, fThreshhold );
float v2 = NoisyStarField( floorSample + vec2( 0.0, 1.0 ), fThreshhold );
float v3 = NoisyStarField( floorSample + vec2( 1.0, 0.0 ), fThreshhold );
float v4 = NoisyStarField( floorSample + vec2( 1.0, 1.0 ), fThreshhold );
float StarVal = v1 * ( 1.0 - fractX ) * ( 1.0 - fractY )
+ v2 * ( 1.0 - fractX ) * fractY
+ v3 * fractX * ( 1.0 - fractY )
+ v4 * fractX * fractY;
return StarVal;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 iResolution = imageSize(image);
// Sky Background Color
vec3 vColor = vec3( 0.1, 0.2, 0.4 ) * fragCoord.y / iResolution.y;
// Note: Choose fThreshhold in the range [0.99, 0.9999].
// Higher values (i.e., closer to one) yield a sparser starfield.
float StarFieldThreshhold = 0.97;
// Stars with a slow crawl.
float xRate = 0.2;
float yRate = -0.06;
vec2 vSamplePos = fragCoord.xy + vec2( xRate * float( 1 ), yRate * float( 1 ) );
float StarVal = StableStarField( vSamplePos, StarFieldThreshhold );
vColor += vec3( StarVal );
fragColor = vec4(vColor, 1.0);
}
void main()
{
vec4 value = vec4(0.0, 0.0, 0.0, 1.0);
ivec2 texelCoord = ivec2(gl_GlobalInvocationID.xy);
ivec2 size = imageSize(image);
if(texelCoord.x < size.x && texelCoord.y < size.y)
{
vec4 color;
mainImage(color,texelCoord);
imageStore(image, texelCoord, color);
}
}

49
shaders/tonemap.frag Normal file
View File

@@ -0,0 +1,49 @@
#version 450
layout(location=0) in vec2 inUV;
layout(location=0) out vec4 outColor;
layout(set=0, binding=0) uniform sampler2D uHdr;
layout(push_constant) uniform Push
{
float exposure;
int mode;
} pc;
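// exposure scales the linear HDR input before the curve; mode == 1 selects ACES, anything else falls back to Reinhard.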
vec3 reinhard(vec3 x)
{
return x / (1.0 + x);
}
// Narkowicz ACES approximation
vec3 aces_tonemap(vec3 x)
{
// https://64.github.io/tonemapping/
const float a = 2.51;
const float b = 0.03;
const float c = 2.43;
const float d = 0.59;
const float e = 0.14;
return clamp((x*(a*x+b))/(x*(c*x+d)+e), 0.0, 1.0);
}
void main()
{
vec3 hdr = texture(uHdr, inUV).rgb;
// Simple exposure
float exposure = max(pc.exposure, 0.0001);
vec3 mapped = hdr * exposure;
if (pc.mode == 1)
mapped = aces_tonemap(mapped);
else
mapped = reinhard(mapped);
const float gamma = 2.2;
mapped = pow(mapped, vec3(1.0 / gamma));
outColor = vec4(mapped, 1.0);
}

102
src/CMakeLists.txt Normal file
View File

@@ -0,0 +1,102 @@
# Add source to this project's executable.
add_executable (vulkan_engine
main.cpp
# core
core/vk_types.h
core/vk_initializers.cpp
core/vk_initializers.h
core/vk_images.h
core/vk_images.cpp
core/vk_debug.h
core/vk_debug.cpp
core/vk_descriptors.h
core/vk_descriptors.cpp
core/vk_device.h
core/vk_device.cpp
core/vk_swapchain.h
core/vk_swapchain.cpp
core/vk_resource.h
core/vk_resource.cpp
core/engine_context.h
core/engine_context.cpp
core/vk_descriptor_manager.h
core/vk_descriptor_manager.cpp
core/vk_sampler_manager.h
core/vk_sampler_manager.cpp
core/asset_locator.h
core/asset_locator.cpp
core/asset_manager.h
core/asset_manager.cpp
core/vk_pipeline_manager.h
core/vk_pipeline_manager.cpp
core/frame_resources.h
core/frame_resources.cpp
core/config.h
core/vk_engine.h
core/vk_engine.cpp
# render
render/vk_pipelines.h
render/vk_pipelines.cpp
render/vk_renderpass.h
render/vk_renderpass.cpp
render/vk_renderpass_background.h
render/vk_renderpass_background.cpp
render/vk_renderpass_geometry.h
render/vk_renderpass_geometry.cpp
render/vk_renderpass_lighting.h
render/vk_renderpass_lighting.cpp
render/vk_renderpass_shadow.h
render/vk_renderpass_shadow.cpp
render/vk_renderpass_transparent.h
render/vk_renderpass_transparent.cpp
render/vk_renderpass_imgui.h
render/vk_renderpass_imgui.cpp
render/vk_renderpass_tonemap.h
render/vk_renderpass_tonemap.cpp
# render graph (initial skeleton)
render/rg_types.h
render/rg_graph.h
render/rg_graph.cpp
render/rg_builder.h
render/rg_builder.cpp
render/rg_resources.h
render/rg_resources.cpp
render/vk_materials.h
render/vk_materials.cpp
render/primitives.h
# scene
scene/vk_scene.h
scene/vk_scene.cpp
scene/vk_loader.h
scene/vk_loader.cpp
scene/camera.h
scene/camera.cpp
# compute
compute/vk_compute.h
compute/vk_compute.cpp
)
set_property(TARGET vulkan_engine PROPERTY CXX_STANDARD 20)
target_compile_definitions(vulkan_engine PUBLIC GLM_FORCE_DEPTH_ZERO_TO_ONE)
target_include_directories(vulkan_engine PUBLIC
"${CMAKE_CURRENT_SOURCE_DIR}"
"${CMAKE_CURRENT_SOURCE_DIR}/core"
"${CMAKE_CURRENT_SOURCE_DIR}/render"
"${CMAKE_CURRENT_SOURCE_DIR}/scene"
"${CMAKE_CURRENT_SOURCE_DIR}/compute"
)
target_link_libraries(vulkan_engine PUBLIC vma glm Vulkan::Vulkan fmt::fmt stb_image SDL2::SDL2 vkbootstrap imgui fastgltf::fastgltf)
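# vma_impl compiles the VulkanMemoryAllocator implementation (vma_impl.cpp presumably defines VMA_IMPLEMENTATION) in its own translation unit; its objects are linked into the engine below.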
add_library(vma_impl OBJECT vma_impl.cpp)
target_include_directories(vma_impl PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../third_party/vma")
target_link_libraries(vma_impl PRIVATE Vulkan::Vulkan)
target_link_libraries(vulkan_engine PUBLIC $<TARGET_OBJECTS:vma_impl>)
target_precompile_headers(vulkan_engine PUBLIC <optional> <vector> <memory> <string> <unordered_map> <glm/mat4x4.hpp> <glm/vec4.hpp> <vulkan/vulkan.h>)
add_custom_command(TARGET vulkan_engine POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy $<TARGET_RUNTIME_DLLS:vulkan_engine> $<TARGET_FILE_DIR:vulkan_engine>
COMMAND_EXPAND_LISTS
)

585
src/compute/vk_compute.cpp Normal file
View File

@@ -0,0 +1,585 @@
#include <compute/vk_compute.h>
#include <core/engine_context.h>
#include <render/vk_pipelines.h>
#include <core/vk_initializers.h>
#include <iostream>
#include "vk_device.h"
#include "core/vk_resource.h"
ComputeBinding ComputeBinding::uniformBuffer(uint32_t binding, VkBuffer buffer, VkDeviceSize size, VkDeviceSize offset)
{
ComputeBinding result;
result.binding = binding;
result.type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
result.buffer.buffer = buffer;
result.buffer.offset = offset;
result.buffer.size = size;
return result;
}
ComputeBinding ComputeBinding::storageBuffer(uint32_t binding, VkBuffer buffer, VkDeviceSize size, VkDeviceSize offset)
{
ComputeBinding result;
result.binding = binding;
result.type = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
result.buffer.buffer = buffer;
result.buffer.offset = offset;
result.buffer.size = size;
return result;
}
ComputeBinding ComputeBinding::sampledImage(uint32_t binding, VkImageView imageView, VkSampler sampler,
VkImageLayout layout)
{
ComputeBinding result;
result.binding = binding;
result.type = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
result.storageImage.imageView = imageView;
result.image.sampler = sampler;
result.storageImage.layout = layout;
return result;
}
ComputeBinding ComputeBinding::storeImage(uint32_t binding, VkImageView imageView, VkImageLayout layout)
{
ComputeBinding result;
result.binding = binding;
result.type = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
result.storageImage.imageView = imageView;
result.storageImage.layout = layout;
return result;
}
ComputePipeline::~ComputePipeline()
{
cleanup();
}
ComputePipeline::ComputePipeline(ComputePipeline &&other) noexcept
: device(other.device)
, pipeline(other.pipeline)
, layout(other.layout)
, descriptorLayout(other.descriptorLayout)
{
other.device = VK_NULL_HANDLE;
other.pipeline = VK_NULL_HANDLE;
other.layout = VK_NULL_HANDLE;
other.descriptorLayout = VK_NULL_HANDLE;
}
ComputePipeline &ComputePipeline::operator=(ComputePipeline &&other) noexcept
{
if (this != &other)
{
cleanup();
device = other.device;
pipeline = other.pipeline;
layout = other.layout;
descriptorLayout = other.descriptorLayout;
other.device = VK_NULL_HANDLE;
other.pipeline = VK_NULL_HANDLE;
other.layout = VK_NULL_HANDLE;
other.descriptorLayout = VK_NULL_HANDLE;
}
return *this;
}
void ComputePipeline::cleanup()
{
if (device != VK_NULL_HANDLE)
{
if (pipeline != VK_NULL_HANDLE)
{
vkDestroyPipeline(device, pipeline, nullptr);
}
if (layout != VK_NULL_HANDLE)
{
vkDestroyPipelineLayout(device, layout, nullptr);
}
if (descriptorLayout != VK_NULL_HANDLE)
{
vkDestroyDescriptorSetLayout(device, descriptorLayout, nullptr);
}
}
device = VK_NULL_HANDLE;
pipeline = VK_NULL_HANDLE;
layout = VK_NULL_HANDLE;
descriptorLayout = VK_NULL_HANDLE;
}
ComputeManager::~ComputeManager()
{
cleanup();
}
void ComputeManager::init(EngineContext *context)
{
this->context = context;
std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> poolSizes = {
{VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, 4},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 4},
{VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 4},
{VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4}
};
descriptorAllocator.init(context->getDevice()->device(), 100, poolSizes);
}
void ComputeManager::cleanup()
{
pipelines.clear();
// Destroy instances and their owned resources
if (context)
{
for (auto &kv : instances)
{
for (auto &img : kv.second.ownedImages)
{
context->getResources()->destroy_image(img);
}
for (auto &buf : kv.second.ownedBuffers)
{
context->getResources()->destroy_buffer(buf);
}
}
instances.clear();
}
if (context)
{
descriptorAllocator.destroy_pools(context->getDevice()->device());
}
context = nullptr;
}
bool ComputeManager::registerPipeline(const std::string &name, const ComputePipelineCreateInfo &createInfo)
{
if (pipelines.find(name) != pipelines.end())
{
std::cerr << "Pipeline '" << name << "' already exists!" << std::endl;
return false;
}
return createPipeline(name, createInfo);
}
void ComputeManager::unregisterPipeline(const std::string &name)
{
pipelines.erase(name);
}
bool ComputeManager::hasPipeline(const std::string &name) const
{
return pipelines.find(name) != pipelines.end();
}
void ComputeManager::dispatch(VkCommandBuffer cmd, const std::string &pipelineName,
const ComputeDispatchInfo &dispatchInfo)
{
auto it = pipelines.find(pipelineName);
if (it == pipelines.end())
{
std::cerr << "Pipeline '" << pipelineName << "' not found!" << std::endl;
return;
}
const ComputePipeline &pipeline = it->second;
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline.getPipeline());
if (!dispatchInfo.bindings.empty())
{
VkDescriptorSet descriptorSet = allocateDescriptorSet(pipeline, dispatchInfo.bindings);
updateDescriptorSet(descriptorSet, dispatchInfo.bindings);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline.getLayout(),
0, 1, &descriptorSet, 0, nullptr);
}
if (dispatchInfo.pushConstants && dispatchInfo.pushConstantSize > 0)
{
vkCmdPushConstants(cmd, pipeline.getLayout(), VK_SHADER_STAGE_COMPUTE_BIT,
0, dispatchInfo.pushConstantSize, dispatchInfo.pushConstants);
}
insertBarriers(cmd, dispatchInfo);
vkCmdDispatch(cmd, dispatchInfo.groupCountX, dispatchInfo.groupCountY, dispatchInfo.groupCountZ);
}
void ComputeManager::dispatchImmediate(const std::string &pipelineName, const ComputeDispatchInfo &dispatchInfo)
{
context->getResources()->immediate_submit([this, pipelineName, dispatchInfo](VkCommandBuffer cmd) {
dispatch(cmd, pipelineName, dispatchInfo);
});
}
bool ComputeManager::createInstance(const std::string &instanceName, const std::string &pipelineName)
{
if (instances.find(instanceName) != instances.end())
{
std::cerr << "Compute instance '" << instanceName << "' already exists!" << std::endl;
return false;
}
auto it = pipelines.find(pipelineName);
if (it == pipelines.end())
{
std::cerr << "Pipeline '" << pipelineName << "' not found for instance!" << std::endl;
return false;
}
ComputeInstance inst{};
inst.pipelineName = pipelineName;
inst.descriptorSet = descriptorAllocator.allocate(context->getDevice()->device(), it->second.descriptorLayout);
instances.emplace(instanceName, std::move(inst));
return true;
}
void ComputeManager::destroyInstance(const std::string &instanceName)
{
auto it = instances.find(instanceName);
if (it == instances.end()) return;
for (auto &img : it->second.ownedImages)
context->getResources()->destroy_image(img);
for (auto &buf : it->second.ownedBuffers)
context->getResources()->destroy_buffer(buf);
instances.erase(it);
}
static void upsert_binding(std::vector<ComputeBinding> &bindings, const ComputeBinding &b)
{
for (auto &x : bindings)
{
if (x.binding == b.binding)
{
x = b;
return;
}
}
bindings.push_back(b);
}
bool ComputeManager::setInstanceBinding(const std::string &instanceName, const ComputeBinding &binding)
{
auto it = instances.find(instanceName);
if (it == instances.end()) return false;
upsert_binding(it->second.bindings, binding);
return true;
}
bool ComputeManager::setInstanceStorageImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkImageLayout layout)
{
return setInstanceBinding(instanceName, ComputeBinding::storeImage(binding, view, layout));
}
bool ComputeManager::setInstanceSampledImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkSampler sampler, VkImageLayout layout)
{
return setInstanceBinding(instanceName, ComputeBinding::sampledImage(binding, view, sampler, layout));
}
bool ComputeManager::setInstanceBuffer(const std::string &instanceName, uint32_t binding, VkBuffer buffer,
VkDeviceSize size, VkDescriptorType type, VkDeviceSize offset)
{
ComputeBinding b{};
b.binding = binding;
b.type = type;
b.buffer.buffer = buffer;
b.buffer.size = size;
b.buffer.offset = offset;
return setInstanceBinding(instanceName, b);
}
AllocatedImage ComputeManager::createAndBindStorageImage(const std::string &instanceName, uint32_t binding,
VkExtent3D extent, VkFormat format, VkImageLayout layout,
VkImageUsageFlags usage)
{
auto it = instances.find(instanceName);
if (it == instances.end()) return {};
AllocatedImage img = context->getResources()->create_image(extent, format, usage);
it->second.ownedImages.push_back(img);
setInstanceStorageImage(instanceName, binding, img.imageView, layout);
return img;
}
AllocatedBuffer ComputeManager::createAndBindStorageBuffer(const std::string &instanceName, uint32_t binding,
VkDeviceSize size, VkBufferUsageFlags usage,
VmaMemoryUsage memUsage)
{
auto it = instances.find(instanceName);
if (it == instances.end()) return {};
AllocatedBuffer buf = context->getResources()->create_buffer(size, usage, memUsage);
it->second.ownedBuffers.push_back(buf);
setInstanceBuffer(instanceName, binding, buf.buffer, size, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 0);
return buf;
}
bool ComputeManager::updateInstanceDescriptorSet(const std::string &instanceName)
{
auto it = instances.find(instanceName);
if (it == instances.end()) return false;
updateDescriptorSet(it->second.descriptorSet, it->second.bindings);
return true;
}
void ComputeManager::dispatchInstance(VkCommandBuffer cmd, const std::string &instanceName,
const ComputeDispatchInfo &dispatchInfo)
{
auto it = instances.find(instanceName);
if (it == instances.end())
{
std::cerr << "Compute instance '" << instanceName << "' not found!" << std::endl;
return;
}
auto pit = pipelines.find(it->second.pipelineName);
if (pit == pipelines.end())
{
std::cerr << "Pipeline '" << it->second.pipelineName << "' not found for instance dispatch!" << std::endl;
return;
}
const ComputePipeline &pipeline = pit->second;
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline.getPipeline());
updateDescriptorSet(it->second.descriptorSet, it->second.bindings);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline.getLayout(), 0, 1, &it->second.descriptorSet,
0, nullptr);
if (dispatchInfo.pushConstants && dispatchInfo.pushConstantSize > 0)
{
vkCmdPushConstants(cmd, pipeline.getLayout(), VK_SHADER_STAGE_COMPUTE_BIT, 0, dispatchInfo.pushConstantSize,
dispatchInfo.pushConstants);
}
insertBarriers(cmd, dispatchInfo);
vkCmdDispatch(cmd, dispatchInfo.groupCountX, dispatchInfo.groupCountY, dispatchInfo.groupCountZ);
}
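// Integer ceiling division: round the work-item count up to a whole number of workgroups.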
uint32_t ComputeManager::calculateGroupCount(uint32_t workItems, uint32_t localSize)
{
return (workItems + localSize - 1) / localSize;
}
ComputeDispatchInfo ComputeManager::createDispatch2D(uint32_t width, uint32_t height, uint32_t localSizeX,
uint32_t localSizeY)
{
ComputeDispatchInfo info;
info.groupCountX = calculateGroupCount(width, localSizeX);
info.groupCountY = calculateGroupCount(height, localSizeY);
info.groupCountZ = 1;
return info;
}
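// Example: a 1920x1080 target with the default 16x16 local size yields 120 x 68 x 1 groups.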
ComputeDispatchInfo ComputeManager::createDispatch3D(uint32_t width, uint32_t height, uint32_t depth,
uint32_t localSizeX, uint32_t localSizeY, uint32_t localSizeZ)
{
ComputeDispatchInfo info;
info.groupCountX = calculateGroupCount(width, localSizeX);
info.groupCountY = calculateGroupCount(height, localSizeY);
info.groupCountZ = calculateGroupCount(depth, localSizeZ);
return info;
}
void ComputeManager::clearImage(VkCommandBuffer cmd, VkImageView imageView, const glm::vec4 &clearColor)
{
if (!hasPipeline("clear_image"))
{
ComputePipelineCreateInfo createInfo;
createInfo.shaderPath = "../shaders/clear_image.comp.spv";
createInfo.descriptorTypes = {VK_DESCRIPTOR_TYPE_STORAGE_IMAGE};
createInfo.pushConstantSize = sizeof(glm::vec4);
registerPipeline("clear_image", createInfo);
}
ComputeDispatchInfo dispatchInfo;
dispatchInfo.bindings.push_back(ComputeBinding::storeImage(0, imageView));
dispatchInfo.pushConstants = &clearColor;
dispatchInfo.pushConstantSize = sizeof(glm::vec4);
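// Group counts are fixed at 64x64 rather than derived from the image extent; this assumes clear_image.comp's local size is large enough to cover the target.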
dispatchInfo.groupCountX = 64;
dispatchInfo.groupCountY = 64;
dispatchInfo.groupCountZ = 1;
dispatch(cmd, "clear_image", dispatchInfo);
}
void ComputeManager::copyBuffer(VkCommandBuffer cmd, VkBuffer src, VkBuffer dst, VkDeviceSize size,
VkDeviceSize srcOffset, VkDeviceSize dstOffset)
{
if (!hasPipeline("copy_buffer"))
{
ComputePipelineCreateInfo createInfo;
createInfo.shaderPath = "../shaders/copy_buffer.comp.spv";
createInfo.descriptorTypes = {VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER};
createInfo.pushConstantSize = sizeof(uint32_t) * 3;
registerPipeline("copy_buffer", createInfo);
}
ComputeDispatchInfo dispatchInfo;
dispatchInfo.bindings.push_back(ComputeBinding::storageBuffer(0, src, size, srcOffset));
dispatchInfo.bindings.push_back(ComputeBinding::storageBuffer(1, dst, size, dstOffset));
uint32_t pushData[3] = {(uint32_t) size, (uint32_t) srcOffset, (uint32_t) dstOffset};
dispatchInfo.pushConstants = pushData;
dispatchInfo.pushConstantSize = sizeof(pushData);
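// Assumes copy_buffer.comp copies one 32-bit word per invocation with a local size of 256, hence size / 4 work items below.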
dispatchInfo.groupCountX = calculateGroupCount(size / 4, 256);
dispatchInfo.groupCountY = 1;
dispatchInfo.groupCountZ = 1;
dispatch(cmd, "copy_buffer", dispatchInfo);
}
bool ComputeManager::createPipeline(const std::string &name, const ComputePipelineCreateInfo &createInfo)
{
ComputePipeline computePipeline;
computePipeline.device = context->getDevice()->device();
VkShaderModule shaderModule;
if (!vkutil::load_shader_module(createInfo.shaderPath.c_str(), context->getDevice()->device(), &shaderModule))
{
std::cerr << "Failed to load compute shader: " << createInfo.shaderPath << std::endl;
return false;
}
if (!createInfo.descriptorTypes.empty())
{
DescriptorLayoutBuilder layoutBuilder;
for (size_t i = 0; i < createInfo.descriptorTypes.size(); ++i)
{
layoutBuilder.add_binding(i, createInfo.descriptorTypes[i]);
}
computePipeline.descriptorLayout = layoutBuilder.build(context->getDevice()->device(), VK_SHADER_STAGE_COMPUTE_BIT);
}
VkPipelineLayoutCreateInfo layoutInfo = vkinit::pipeline_layout_create_info();
if (computePipeline.descriptorLayout != VK_NULL_HANDLE)
{
layoutInfo.setLayoutCount = 1;
layoutInfo.pSetLayouts = &computePipeline.descriptorLayout;
}
VkPushConstantRange pushConstantRange = {};
if (createInfo.pushConstantSize > 0)
{
pushConstantRange.offset = 0;
pushConstantRange.size = createInfo.pushConstantSize;
pushConstantRange.stageFlags = createInfo.pushConstantStages;
layoutInfo.pushConstantRangeCount = 1;
layoutInfo.pPushConstantRanges = &pushConstantRange;
}
VK_CHECK(vkCreatePipelineLayout(context->getDevice()->device(), &layoutInfo, nullptr, &computePipeline.layout));
VkPipelineShaderStageCreateInfo stageInfo = vkinit::pipeline_shader_stage_create_info(
VK_SHADER_STAGE_COMPUTE_BIT, shaderModule);
VkSpecializationInfo specializationInfo = {};
if (!createInfo.specializationEntries.empty())
{
specializationInfo.mapEntryCount = createInfo.specializationEntries.size();
specializationInfo.pMapEntries = createInfo.specializationEntries.data();
specializationInfo.dataSize = createInfo.specializationData.size() * sizeof(uint32_t);
specializationInfo.pData = createInfo.specializationData.data();
stageInfo.pSpecializationInfo = &specializationInfo;
}
VkComputePipelineCreateInfo pipelineInfo = {};
pipelineInfo.sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
pipelineInfo.stage = stageInfo;
pipelineInfo.layout = computePipeline.layout;
VK_CHECK(vkCreateComputePipelines(context->getDevice()->device(), VK_NULL_HANDLE, 1, &pipelineInfo, nullptr, &computePipeline.pipeline));
vkDestroyShaderModule(context->getDevice()->device(), shaderModule, nullptr);
pipelines[name] = std::move(computePipeline);
return true;
}
VkDescriptorSet ComputeManager::allocateDescriptorSet(const ComputePipeline &pipeline,
const std::vector<ComputeBinding> &bindings)
{
if (pipeline.descriptorLayout == VK_NULL_HANDLE)
{
return VK_NULL_HANDLE;
}
return descriptorAllocator.allocate(context->getDevice()->device(), pipeline.descriptorLayout);
}
void ComputeManager::updateDescriptorSet(VkDescriptorSet descriptorSet, const std::vector<ComputeBinding> &bindings)
{
if (descriptorSet == VK_NULL_HANDLE)
{
return;
}
DescriptorWriter writer;
for (const auto &binding: bindings)
{
switch (binding.type)
{
case VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER:
case VK_DESCRIPTOR_TYPE_STORAGE_BUFFER:
writer.write_buffer(binding.binding, binding.buffer.buffer, binding.buffer.size,
binding.buffer.offset, binding.type);
break;
case VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER:
writer.write_image(binding.binding, binding.storageImage.imageView, binding.image.sampler,
binding.storageImage.layout, binding.type);
break;
case VK_DESCRIPTOR_TYPE_STORAGE_IMAGE:
writer.write_image(binding.binding, binding.storageImage.imageView, VK_NULL_HANDLE,
binding.storageImage.layout, binding.type);
break;
default:
std::cerr << "Unsupported descriptor type: " << binding.type << std::endl;
break;
}
}
writer.update_set(context->getDevice()->device(), descriptorSet);
}
void ComputeManager::insertBarriers(VkCommandBuffer cmd, const ComputeDispatchInfo &dispatchInfo)
{
if (dispatchInfo.memoryBarriers.empty() &&
dispatchInfo.bufferBarriers.empty() &&
dispatchInfo.imageBarriers.empty())
{
return;
}
VkDependencyInfo dependencyInfo = {};
dependencyInfo.sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO;
dependencyInfo.memoryBarrierCount = dispatchInfo.memoryBarriers.size();
dependencyInfo.pMemoryBarriers = dispatchInfo.memoryBarriers.data();
dependencyInfo.bufferMemoryBarrierCount = dispatchInfo.bufferBarriers.size();
dependencyInfo.pBufferMemoryBarriers = dispatchInfo.bufferBarriers.data();
dependencyInfo.imageMemoryBarrierCount = dispatchInfo.imageBarriers.size();
dependencyInfo.pImageMemoryBarriers = dispatchInfo.imageBarriers.data();
vkCmdPipelineBarrier2(cmd, &dependencyInfo);
}

201
src/compute/vk_compute.h Normal file
View File

@@ -0,0 +1,201 @@
#pragma once
#include <core/vk_types.h>
#include <core/vk_descriptors.h>
#include <functional>
#include <unordered_map>
#include <glm/glm.hpp>
// Common compute data structures used across passes
struct ComputePushConstants
{
glm::vec4 data1;
glm::vec4 data2;
glm::vec4 data3;
glm::vec4 data4;
};
struct ComputeEffect
{
const char *name;
ComputePushConstants data;
};
class EngineContext;
struct ComputeBinding
{
uint32_t binding;
VkDescriptorType type;
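// Only the union member matching `type` is consumed by ComputeManager::updateDescriptorSet.
// Note: sampledImage() stores the view/layout in storageImage and the sampler in image; the writer reads them back the same way.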
union
{
struct
{
VkBuffer buffer;
VkDeviceSize offset;
VkDeviceSize size;
} buffer;
struct
{
VkImage image;
VkImageLayout imageLayout;
VkSampler sampler;
} image;
struct
{
VkImageView imageView;
VkImageLayout layout;
} storageImage;
};
static ComputeBinding uniformBuffer(uint32_t binding, VkBuffer buffer, VkDeviceSize size, VkDeviceSize offset = 0);
static ComputeBinding storageBuffer(uint32_t binding, VkBuffer buffer, VkDeviceSize size, VkDeviceSize offset = 0);
static ComputeBinding sampledImage(uint32_t binding, VkImageView imageView, VkSampler sampler,
VkImageLayout layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
static ComputeBinding storeImage(uint32_t binding, VkImageView imageView,
VkImageLayout layout = VK_IMAGE_LAYOUT_GENERAL);
};
struct ComputePipelineCreateInfo
{
std::string shaderPath;
std::vector<VkDescriptorType> descriptorTypes;
uint32_t pushConstantSize = 0;
VkShaderStageFlags pushConstantStages = VK_SHADER_STAGE_COMPUTE_BIT;
std::vector<VkSpecializationMapEntry> specializationEntries;
std::vector<uint32_t> specializationData;
};
struct ComputeDispatchInfo
{
uint32_t groupCountX = 1;
uint32_t groupCountY = 1;
uint32_t groupCountZ = 1;
std::vector<ComputeBinding> bindings;
const void *pushConstants = nullptr;
uint32_t pushConstantSize = 0;
std::vector<VkMemoryBarrier2> memoryBarriers;
std::vector<VkBufferMemoryBarrier2> bufferBarriers;
std::vector<VkImageMemoryBarrier2> imageBarriers;
};
class ComputePipeline
{
public:
ComputePipeline() = default;
~ComputePipeline();
ComputePipeline(ComputePipeline &&other) noexcept;
ComputePipeline &operator=(ComputePipeline &&other) noexcept;
ComputePipeline(const ComputePipeline &) = delete;
ComputePipeline &operator=(const ComputePipeline &) = delete;
bool isValid() const { return pipeline != VK_NULL_HANDLE; }
VkPipeline getPipeline() const { return pipeline; }
VkPipelineLayout getLayout() const { return layout; }
private:
friend class ComputeManager;
VkDevice device = VK_NULL_HANDLE;
VkPipeline pipeline = VK_NULL_HANDLE;
VkPipelineLayout layout = VK_NULL_HANDLE;
VkDescriptorSetLayout descriptorLayout = VK_NULL_HANDLE;
void cleanup();
};
class ComputeManager
{
public:
ComputeManager() = default;
~ComputeManager();
void init(EngineContext *context);
void cleanup();
bool registerPipeline(const std::string &name, const ComputePipelineCreateInfo &createInfo);
bool createComputePipeline(const std::string &name, const ComputePipelineCreateInfo &createInfo) {
return registerPipeline(name, createInfo);
}
void unregisterPipeline(const std::string &name);
bool hasPipeline(const std::string &name) const;
void dispatch(VkCommandBuffer cmd, const std::string &pipelineName, const ComputeDispatchInfo &dispatchInfo);
void dispatchImmediate(const std::string &pipelineName, const ComputeDispatchInfo &dispatchInfo);
static uint32_t calculateGroupCount(uint32_t workItems, uint32_t localSize);
static ComputeDispatchInfo createDispatch2D(uint32_t width, uint32_t height, uint32_t localSizeX = 16,
uint32_t localSizeY = 16);
static ComputeDispatchInfo createDispatch3D(uint32_t width, uint32_t height, uint32_t depth,
uint32_t localSizeX = 8, uint32_t localSizeY = 8,
uint32_t localSizeZ = 8);
void clearImage(VkCommandBuffer cmd, VkImageView imageView, const glm::vec4 &clearColor = {0, 0, 0, 0});
void copyBuffer(VkCommandBuffer cmd, VkBuffer src, VkBuffer dst, VkDeviceSize size, VkDeviceSize srcOffset = 0,
VkDeviceSize dstOffset = 0);
struct ComputeInstance
{
std::string pipelineName;
VkDescriptorSet descriptorSet = VK_NULL_HANDLE;
std::vector<ComputeBinding> bindings;
std::vector<AllocatedImage> ownedImages;
std::vector<AllocatedBuffer> ownedBuffers;
};
bool createInstance(const std::string &instanceName, const std::string &pipelineName);
void destroyInstance(const std::string &instanceName);
bool hasInstance(const std::string &instanceName) const { return instances.find(instanceName) != instances.end(); }
bool setInstanceBinding(const std::string &instanceName, const ComputeBinding &binding);
bool setInstanceStorageImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkImageLayout layout = VK_IMAGE_LAYOUT_GENERAL);
bool setInstanceSampledImage(const std::string &instanceName, uint32_t binding, VkImageView view, VkSampler sampler,
VkImageLayout layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
bool setInstanceBuffer(const std::string &instanceName, uint32_t binding, VkBuffer buffer, VkDeviceSize size,
VkDescriptorType type, VkDeviceSize offset = 0);
AllocatedImage createAndBindStorageImage(const std::string &instanceName, uint32_t binding, VkExtent3D extent,
VkFormat format,
VkImageLayout layout = VK_IMAGE_LAYOUT_GENERAL,
VkImageUsageFlags usage = VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
AllocatedBuffer createAndBindStorageBuffer(const std::string &instanceName, uint32_t binding, VkDeviceSize size,
VkBufferUsageFlags usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT,
VmaMemoryUsage memUsage = VMA_MEMORY_USAGE_GPU_ONLY);
bool updateInstanceDescriptorSet(const std::string &instanceName);
void dispatchInstance(VkCommandBuffer cmd, const std::string &instanceName, const ComputeDispatchInfo &dispatchInfo);
private:
EngineContext *context = nullptr;
std::unordered_map<std::string, ComputePipeline> pipelines;
DescriptorAllocatorGrowable descriptorAllocator;
std::unordered_map<std::string, ComputeInstance> instances;
bool createPipeline(const std::string &name, const ComputePipelineCreateInfo &createInfo);
VkDescriptorSet allocateDescriptorSet(const ComputePipeline &pipeline, const std::vector<ComputeBinding> &bindings);
void updateDescriptorSet(VkDescriptorSet descriptorSet, const std::vector<ComputeBinding> &bindings);
void insertBarriers(VkCommandBuffer cmd, const ComputeDispatchInfo &dispatchInfo);
};

125
src/core/asset_locator.cpp Normal file
View File

@@ -0,0 +1,125 @@
#include "asset_locator.h"
#include <cstdlib>
using std::filesystem::path;
static path get_env_path(const char *name)
{
const char *v = std::getenv(name);
if (!v || !*v) return {};
path p = v;
if (std::filesystem::exists(p)) return std::filesystem::canonical(p);
return {};
}
static path find_upwards_containing(path start, const std::string &subdir, int maxDepth = 6)
{
path cur = std::filesystem::weakly_canonical(start);
for (int i = 0; i <= maxDepth; i++)
{
path candidate = cur / subdir;
if (std::filesystem::exists(candidate)) return cur;
if (!cur.has_parent_path()) break;
cur = cur.parent_path();
}
return {};
}
AssetPaths AssetPaths::detect(const path &startDir)
{
AssetPaths out{};
if (auto root = get_env_path("VKG_ASSET_ROOT"); !root.empty())
{
out.root = root;
if (std::filesystem::exists(root / "assets")) out.assets = root / "assets";
if (std::filesystem::exists(root / "shaders")) out.shaders = root / "shaders";
return out;
}
if (auto aroot = find_upwards_containing(startDir, "assets"); !aroot.empty())
{
out.assets = aroot / "assets";
out.root = aroot;
}
if (auto sroot = find_upwards_containing(startDir, "shaders"); !sroot.empty())
{
out.shaders = sroot / "shaders";
if (out.root.empty()) out.root = sroot;
}
if (out.assets.empty())
{
path p1 = startDir / "assets";
path p2 = startDir / ".." / "assets";
if (std::filesystem::exists(p1)) out.assets = p1;
else if (std::filesystem::exists(p2)) out.assets = std::filesystem::weakly_canonical(p2);
}
if (out.shaders.empty())
{
path p1 = startDir / "shaders";
path p2 = startDir / ".." / "shaders";
if (std::filesystem::exists(p1)) out.shaders = p1;
else if (std::filesystem::exists(p2)) out.shaders = std::filesystem::weakly_canonical(p2);
}
return out;
}
void AssetLocator::init()
{
_paths = AssetPaths::detect();
}
bool AssetLocator::file_exists(const path &p)
{
std::error_code ec;
return !p.empty() && std::filesystem::exists(p, ec) && std::filesystem::is_regular_file(p, ec);
}
std::string AssetLocator::resolve_in(const path &base, std::string_view name)
{
if (name.empty()) return {};
path in = base / std::string(name);
if (file_exists(in)) return in.string();
return {};
}
std::string AssetLocator::shaderPath(std::string_view name) const
{
if (name.empty()) return {};
path np = std::string(name);
if (np.is_absolute() && file_exists(np)) return np.string();
if (file_exists(np)) return np.string();
if (!_paths.shaders.empty())
{
if (auto r = resolve_in(_paths.shaders, name); !r.empty()) return r;
}
if (auto r = resolve_in(std::filesystem::current_path() / "shaders", name); !r.empty()) return r;
if (auto r = resolve_in(std::filesystem::current_path() / ".." / "shaders", name); !r.empty()) return r;
return np.string();
}
std::string AssetLocator::assetPath(std::string_view name) const
{
if (name.empty()) return {};
path np = std::string(name);
if (np.is_absolute() && file_exists(np)) return np.string();
if (file_exists(np)) return np.string();
if (!_paths.assets.empty())
{
if (auto r = resolve_in(_paths.assets, name); !r.empty()) return r;
}
if (auto r = resolve_in(std::filesystem::current_path() / "assets", name); !r.empty()) return r;
if (auto r = resolve_in(std::filesystem::current_path() / ".." / "assets", name); !r.empty()) return r;
return np.string();
}

44
src/core/asset_locator.h Normal file
View File

@@ -0,0 +1,44 @@
#pragma once
#include <filesystem>
#include <optional>
#include <string>
#include <string_view>
struct AssetPaths
{
std::filesystem::path root;
std::filesystem::path assets;
std::filesystem::path shaders;
bool valid() const
{
return (!assets.empty() && std::filesystem::exists(assets)) ||
(!shaders.empty() && std::filesystem::exists(shaders));
}
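// Resolution order (see asset_locator.cpp): the VKG_ASSET_ROOT environment variable, then an upward search from startDir for assets/ and shaders/, then ./assets and ../assets (and their shaders equivalents) relative to the working directory.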
static AssetPaths detect(const std::filesystem::path &startDir = std::filesystem::current_path());
};
class AssetLocator
{
public:
void init();
const AssetPaths &paths() const { return _paths; }
void setPaths(const AssetPaths &p) { _paths = p; }
std::string shaderPath(std::string_view name) const;
std::string assetPath(std::string_view name) const;
std::string modelPath(std::string_view name) const { return assetPath(name); }
private:
static bool file_exists(const std::filesystem::path &p);
static std::string resolve_in(const std::filesystem::path &base, std::string_view name);
AssetPaths _paths{};
};

335
src/core/asset_manager.cpp Normal file
View File

@@ -0,0 +1,335 @@
#include "asset_manager.h"
#include <cstdlib>
#include <iostream>
#include <core/vk_engine.h>
#include <core/vk_resource.h>
#include <render/vk_materials.h>
#include <render/primitives.h>
#include <stb_image.h>
#include "asset_locator.h"
using std::filesystem::path;
void AssetManager::init(VulkanEngine *engine)
{
_engine = engine;
_locator.init();
}
void AssetManager::cleanup()
{
if (_engine && _engine->_resourceManager)
{
for (auto &kv: _meshCache)
{
if (kv.second)
{
_engine->_resourceManager->destroy_buffer(kv.second->meshBuffers.indexBuffer);
_engine->_resourceManager->destroy_buffer(kv.second->meshBuffers.vertexBuffer);
}
}
for (auto &kv: _meshMaterialBuffers)
{
_engine->_resourceManager->destroy_buffer(kv.second);
}
for (auto &kv: _meshOwnedImages)
{
for (const auto &img: kv.second)
{
_engine->_resourceManager->destroy_image(img);
}
}
}
_meshCache.clear();
_meshMaterialBuffers.clear();
_meshOwnedImages.clear();
_gltfCacheByPath.clear();
}
std::string AssetManager::shaderPath(std::string_view name) const
{
return _locator.shaderPath(name);
}
std::string AssetManager::assetPath(std::string_view name) const
{
return _locator.assetPath(name);
}
std::string AssetManager::modelPath(std::string_view name) const
{
return _locator.modelPath(name);
}
std::optional<std::shared_ptr<LoadedGLTF> > AssetManager::loadGLTF(std::string_view nameOrPath)
{
if (!_engine) return {};
if (nameOrPath.empty()) return {};
std::string resolved = assetPath(nameOrPath);
path keyPath = resolved;
std::error_code ec;
keyPath = std::filesystem::weakly_canonical(keyPath, ec);
std::string key = (ec ? resolved : keyPath.string());
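// Cache by canonical path with weak_ptr so the entry expires once the last owner releases the LoadedGLTF.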
if (auto it = _gltfCacheByPath.find(key); it != _gltfCacheByPath.end())
{
if (auto sp = it->second.lock()) return sp;
}
auto loaded = loadGltf(_engine, resolved);
if (!loaded.has_value()) return {};
_gltfCacheByPath[key] = loaded.value();
return loaded;
}
std::shared_ptr<MeshAsset> AssetManager::getPrimitive(std::string_view name) const
{
if (name.empty()) return {};
auto findBy = [&](const std::string &key) -> std::shared_ptr<MeshAsset> {
auto it = _meshCache.find(key);
return (it != _meshCache.end()) ? it->second : nullptr;
};
if (name == std::string_view("cube") || name == std::string_view("Cube"))
{
if (auto m = findBy("cube")) return m;
if (auto m = findBy("Cube")) return m;
return {};
}
if (name == std::string_view("sphere") || name == std::string_view("Sphere"))
{
if (auto m = findBy("sphere")) return m;
if (auto m = findBy("Sphere")) return m;
return {};
}
return {};
}
std::shared_ptr<MeshAsset> AssetManager::createMesh(const MeshCreateInfo &info)
{
if (!_engine || !_engine->_resourceManager) return {};
if (info.name.empty()) return {};
if (auto it = _meshCache.find(info.name); it != _meshCache.end())
{
return it->second;
}
std::vector<Vertex> tmpVerts;
std::vector<uint32_t> tmpInds;
std::span<Vertex> vertsSpan{};
std::span<uint32_t> indsSpan{};
switch (info.geometry.type)
{
case MeshGeometryDesc::Type::Provided:
vertsSpan = info.geometry.vertices;
indsSpan = info.geometry.indices;
break;
case MeshGeometryDesc::Type::Cube:
primitives::buildCube(tmpVerts, tmpInds);
vertsSpan = tmpVerts;
indsSpan = tmpInds;
break;
case MeshGeometryDesc::Type::Sphere:
primitives::buildSphere(tmpVerts, tmpInds, info.geometry.sectors, info.geometry.stacks);
vertsSpan = tmpVerts;
indsSpan = tmpInds;
break;
}
if (info.material.kind == MeshMaterialDesc::Kind::Default)
{
return createMesh(info.name, vertsSpan, indsSpan, {});
}
const auto &opt = info.material.options;
auto [albedo, createdAlbedo] = loadImageFromAsset(opt.albedoPath, opt.albedoSRGB);
auto [mr, createdMR] = loadImageFromAsset(opt.metalRoughPath, opt.metalRoughSRGB);
const AllocatedImage &albedoRef = createdAlbedo ? albedo : _engine->_errorCheckerboardImage;
const AllocatedImage &mrRef = createdMR ? mr : _engine->_whiteImage;
AllocatedBuffer matBuffer = createMaterialBufferWithConstants(opt.constants);
GLTFMetallic_Roughness::MaterialResources res{};
res.colorImage = albedoRef;
res.colorSampler = _engine->_samplerManager->defaultLinear();
res.metalRoughImage = mrRef;
res.metalRoughSampler = _engine->_samplerManager->defaultLinear();
res.dataBuffer = matBuffer.buffer;
res.dataBufferOffset = 0;
auto mat = createMaterial(opt.pass, res);
auto mesh = createMesh(info.name, vertsSpan, indsSpan, mat);
_meshMaterialBuffers.emplace(info.name, matBuffer);
if (createdAlbedo) _meshOwnedImages[info.name].push_back(albedo);
if (createdMR) _meshOwnedImages[info.name].push_back(mr);
return mesh;
}
static Bounds compute_bounds(std::span<Vertex> vertices)
{
Bounds b{};
if (vertices.empty())
{
b.origin = glm::vec3(0.0f);
b.extents = glm::vec3(0.5f);
b.sphereRadius = glm::length(b.extents);
return b;
}
glm::vec3 minpos = vertices[0].position;
glm::vec3 maxpos = vertices[0].position;
for (const auto &v: vertices)
{
minpos = glm::min(minpos, v.position);
maxpos = glm::max(maxpos, v.position);
}
b.origin = (maxpos + minpos) / 2.f;
b.extents = (maxpos - minpos) / 2.f;
b.sphereRadius = glm::length(b.extents);
return b;
}
AllocatedBuffer AssetManager::createMaterialBufferWithConstants(
const GLTFMetallic_Roughness::MaterialConstants &constants) const
{
AllocatedBuffer matBuffer = _engine->_resourceManager->create_buffer(
sizeof(GLTFMetallic_Roughness::MaterialConstants),
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VMA_MEMORY_USAGE_CPU_TO_GPU);
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(_engine->_deviceManager->allocator(), matBuffer.allocation, &allocInfo);
auto *matConstants = (GLTFMetallic_Roughness::MaterialConstants *) allocInfo.pMappedData;
*matConstants = constants;
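// Treat an all-zero color factor as "unset" and default it to white.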
if (matConstants->colorFactors == glm::vec4(0))
{
matConstants->colorFactors = glm::vec4(1.0f);
}
// Ensure writes are visible on non-coherent memory
vmaFlushAllocation(_engine->_deviceManager->allocator(), matBuffer.allocation, 0,
sizeof(GLTFMetallic_Roughness::MaterialConstants));
return matBuffer;
}
std::shared_ptr<GLTFMaterial> AssetManager::createMaterial(
MaterialPass pass, const GLTFMetallic_Roughness::MaterialResources &res) const
{
auto mat = std::make_shared<GLTFMaterial>();
mat->data = _engine->metalRoughMaterial.write_material(
_engine->_deviceManager->device(), pass, res, *_engine->_context->descriptors);
return mat;
}
std::pair<AllocatedImage, bool> AssetManager::loadImageFromAsset(std::string_view imgPath, bool srgb) const
{
AllocatedImage out{};
bool created = false;
if (!imgPath.empty())
{
std::string resolved = assetPath(imgPath);
int w = 0, h = 0, comp = 0;
stbi_uc *pixels = stbi_load(resolved.c_str(), &w, &h, &comp, 4);
if (pixels && w > 0 && h > 0)
{
VkFormat fmt = srgb ? VK_FORMAT_R8G8B8A8_SRGB : VK_FORMAT_R8G8B8A8_UNORM;
out = _engine->_resourceManager->create_image(pixels,
VkExtent3D{static_cast<uint32_t>(w), static_cast<uint32_t>(h), 1},
fmt,
VK_IMAGE_USAGE_SAMPLED_BIT,
false);
created = true;
}
if (pixels) stbi_image_free(pixels);
}
return {out, created};
}
std::shared_ptr<MeshAsset> AssetManager::createMesh(const std::string &name,
std::span<Vertex> vertices,
std::span<uint32_t> indices,
std::shared_ptr<GLTFMaterial> material)
{
if (!_engine || !_engine->_resourceManager) return {};
if (name.empty()) return {};
auto it = _meshCache.find(name);
if (it != _meshCache.end()) return it->second;
if (!material)
{
GLTFMetallic_Roughness::MaterialResources matResources{};
matResources.colorImage = _engine->_whiteImage;
matResources.colorSampler = _engine->_samplerManager->defaultLinear();
matResources.metalRoughImage = _engine->_whiteImage;
matResources.metalRoughSampler = _engine->_samplerManager->defaultLinear();
AllocatedBuffer matBuffer = createMaterialBufferWithConstants({});
matResources.dataBuffer = matBuffer.buffer;
matResources.dataBufferOffset = 0;
material = createMaterial(MaterialPass::MainColor, matResources);
_meshMaterialBuffers.emplace(name, matBuffer);
}
auto mesh = std::make_shared<MeshAsset>();
mesh->name = name;
mesh->meshBuffers = _engine->_resourceManager->uploadMesh(indices, vertices);
GeoSurface surf{};
surf.startIndex = 0;
surf.count = (uint32_t) indices.size();
surf.material = material;
surf.bounds = compute_bounds(vertices);
mesh->surfaces.push_back(surf);
_meshCache.emplace(name, mesh);
return mesh;
}
std::shared_ptr<MeshAsset> AssetManager::getMesh(const std::string &name) const
{
auto it = _meshCache.find(name);
return (it != _meshCache.end()) ? it->second : nullptr;
}
bool AssetManager::removeMesh(const std::string &name)
{
auto it = _meshCache.find(name);
if (it == _meshCache.end()) return false;
if (_engine && _engine->_resourceManager)
{
_engine->_resourceManager->destroy_buffer(it->second->meshBuffers.indexBuffer);
_engine->_resourceManager->destroy_buffer(it->second->meshBuffers.vertexBuffer);
}
_meshCache.erase(it);
auto itb = _meshMaterialBuffers.find(name);
if (itb != _meshMaterialBuffers.end())
{
if (_engine && _engine->_resourceManager)
{
_engine->_resourceManager->destroy_buffer(itb->second);
}
_meshMaterialBuffers.erase(itb);
}
auto iti = _meshOwnedImages.find(name);
if (iti != _meshOwnedImages.end())
{
if (_engine && _engine->_resourceManager)
{
for (const auto &img: iti->second)
{
_engine->_resourceManager->destroy_image(img);
}
}
_meshOwnedImages.erase(iti);
}
return true;
}

106
src/core/asset_manager.h Normal file
View File

@@ -0,0 +1,106 @@
#pragma once
#include <memory>
#include <optional>
#include <string>
#include <string_view>
#include <unordered_map>
#include <filesystem>
#include <vector>
#include <utility>
#include <scene/vk_loader.h>
#include <core/vk_types.h>
#include "vk_materials.h"
#include "asset_locator.h"
class VulkanEngine;
class MeshAsset;
class AssetManager
{
public:
struct MaterialOptions
{
std::string albedoPath;
std::string metalRoughPath;
bool albedoSRGB = true;
bool metalRoughSRGB = false;
GLTFMetallic_Roughness::MaterialConstants constants{};
MaterialPass pass = MaterialPass::MainColor;
};
struct MeshGeometryDesc
{
enum class Type { Provided, Cube, Sphere };
Type type = Type::Provided;
std::span<Vertex> vertices{};
std::span<uint32_t> indices{};
int sectors = 16;
int stacks = 16;
};
struct MeshMaterialDesc
{
enum class Kind { Default, Textured };
Kind kind = Kind::Default;
MaterialOptions options{};
};
struct MeshCreateInfo
{
std::string name;
MeshGeometryDesc geometry;
MeshMaterialDesc material;
};
void init(VulkanEngine *engine);
void cleanup();
std::string shaderPath(std::string_view name) const;
std::string modelPath(std::string_view name) const;
std::string assetPath(std::string_view name) const;
std::optional<std::shared_ptr<LoadedGLTF> > loadGLTF(std::string_view nameOrPath);
std::shared_ptr<MeshAsset> createMesh(const MeshCreateInfo &info);
std::shared_ptr<MeshAsset> getPrimitive(std::string_view name) const;
std::shared_ptr<MeshAsset> createMesh(const std::string &name,
std::span<Vertex> vertices,
std::span<uint32_t> indices,
std::shared_ptr<GLTFMaterial> material = {});
std::shared_ptr<MeshAsset> getMesh(const std::string &name) const;
bool removeMesh(const std::string &name);
const AssetPaths &paths() const { return _locator.paths(); }
void setPaths(const AssetPaths &p) { _locator.setPaths(p); }
private:
VulkanEngine *_engine = nullptr;
AssetLocator _locator;
std::unordered_map<std::string, std::weak_ptr<LoadedGLTF> > _gltfCacheByPath;
std::unordered_map<std::string, std::shared_ptr<MeshAsset> > _meshCache;
std::unordered_map<std::string, AllocatedBuffer> _meshMaterialBuffers;
std::unordered_map<std::string, std::vector<AllocatedImage> > _meshOwnedImages;
AllocatedBuffer createMaterialBufferWithConstants(const GLTFMetallic_Roughness::MaterialConstants &constants) const;
std::shared_ptr<GLTFMaterial> createMaterial(MaterialPass pass,
const GLTFMetallic_Roughness::MaterialResources &res) const;
std::pair<AllocatedImage, bool> loadImageFromAsset(std::string_view path, bool srgb) const;
};

8
src/core/config.h Normal file
View File

@@ -0,0 +1,8 @@
#pragma once
// Centralized engine configuration flags
#ifdef NDEBUG
inline constexpr bool kUseValidationLayers = false;
#else
inline constexpr bool kUseValidationLayers = true;
#endif

13
src/core/engine_context.cpp Normal file
View File

@@ -0,0 +1,13 @@
#include "engine_context.h"
#include "scene/vk_scene.h"
const GPUSceneData &EngineContext::getSceneData() const
{
return scene->getSceneData();
}
const DrawContext &EngineContext::getMainDrawContext() const
{
return const_cast<SceneManager *>(scene)->getMainDrawContext();
}

77
src/core/engine_context.h Normal file
View File

@@ -0,0 +1,77 @@
#pragma once
#include <memory>
#include <core/vk_types.h>
#include <core/vk_descriptors.h>
// Avoid including vk_scene.h here to prevent cycles
struct EngineStats
{
float frametime;
int triangle_count;
int drawcall_count;
float scene_update_time;
float mesh_draw_time;
};
class DeviceManager;
class ResourceManager;
class SwapchainManager;
class DescriptorManager;
class SamplerManager;
class SceneManager;
class MeshAsset;
struct DrawContext;
struct GPUSceneData;
class ComputeManager;
class PipelineManager;
struct FrameResources;
struct SDL_Window;
class AssetManager;
class RenderGraph;
class EngineContext
{
public:
// Owned shared resources
std::shared_ptr<DeviceManager> device;
std::shared_ptr<ResourceManager> resources;
std::shared_ptr<DescriptorAllocatorGrowable> descriptors;
// Non-owning pointers to global managers owned by VulkanEngine
SwapchainManager* swapchain = nullptr;
DescriptorManager* descriptorLayouts = nullptr;
SamplerManager* samplers = nullptr;
SceneManager* scene = nullptr;
// Per-frame and subsystem pointers for modules to use without VulkanEngine
FrameResources* currentFrame = nullptr; // set by engine each frame
EngineStats* stats = nullptr; // points to engine stats
ComputeManager* compute = nullptr; // compute subsystem
PipelineManager* pipelines = nullptr; // graphics pipeline manager
RenderGraph* renderGraph = nullptr; // render graph (built per-frame)
SDL_Window* window = nullptr; // SDL window handle
// Frequently used values
VkExtent2D drawExtent{};
// Optional convenience content pointers (moved to AssetManager for meshes)
// Assets
AssetManager* assets = nullptr; // non-owning pointer to central AssetManager
// Accessors
DeviceManager *getDevice() const { return device.get(); }
ResourceManager *getResources() const { return resources.get(); }
DescriptorAllocatorGrowable *getDescriptors() const { return descriptors.get(); }
SwapchainManager* getSwapchain() const { return swapchain; }
DescriptorManager* getDescriptorLayouts() const { return descriptorLayouts; }
SamplerManager* getSamplers() const { return samplers; }
const GPUSceneData& getSceneData() const;
const DrawContext& getMainDrawContext() const;
VkExtent2D getDrawExtent() const { return drawExtent; }
AssetManager* getAssets() const { return assets; }
// Convenience alias (singular form)
AssetManager* getAsset() const { return assets; }
RenderGraph* getRenderGraph() const { return renderGraph; }
};

53
src/core/frame_resources.cpp Normal file
View File

@@ -0,0 +1,53 @@
#include "frame_resources.h"
#include <span>
#include "vk_descriptors.h"
#include "vk_device.h"
#include "vk_initializers.h"
#include "vk_types.h"
void FrameResources::init(DeviceManager *deviceManager,
std::span<DescriptorAllocatorGrowable::PoolSizeRatio> framePoolSizes)
{
VkCommandPoolCreateInfo commandPoolInfo = vkinit::command_pool_create_info(
deviceManager->graphicsQueueFamily(), VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT);
VK_CHECK(vkCreateCommandPool(deviceManager->device(), &commandPoolInfo, nullptr, &_commandPool));
VkCommandBufferAllocateInfo cmdAllocInfo = vkinit::command_buffer_allocate_info(_commandPool, 1);
VK_CHECK(vkAllocateCommandBuffers(deviceManager->device(), &cmdAllocInfo, &_mainCommandBuffer));
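// The render fence starts signaled so the first frame's wait on it returns immediately.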
VkFenceCreateInfo fenceCreateInfo = vkinit::fence_create_info(VK_FENCE_CREATE_SIGNALED_BIT);
VkSemaphoreCreateInfo semaphoreCreateInfo = vkinit::semaphore_create_info();
VK_CHECK(vkCreateFence(deviceManager->device(), &fenceCreateInfo, nullptr, &_renderFence));
VK_CHECK(vkCreateSemaphore(deviceManager->device(), &semaphoreCreateInfo, nullptr, &_swapchainSemaphore));
VK_CHECK(vkCreateSemaphore(deviceManager->device(), &semaphoreCreateInfo, nullptr, &_renderSemaphore));
_frameDescriptors.init(deviceManager->device(), 1000, framePoolSizes);
}
void FrameResources::cleanup(DeviceManager *deviceManager)
{
_frameDescriptors.destroy_pools(deviceManager->device());
if (_commandPool)
{
vkDestroyCommandPool(deviceManager->device(), _commandPool, nullptr);
_commandPool = VK_NULL_HANDLE;
}
if (_renderFence)
{
vkDestroyFence(deviceManager->device(), _renderFence, nullptr);
_renderFence = VK_NULL_HANDLE;
}
if (_renderSemaphore)
{
vkDestroySemaphore(deviceManager->device(), _renderSemaphore, nullptr);
_renderSemaphore = VK_NULL_HANDLE;
}
if (_swapchainSemaphore)
{
vkDestroySemaphore(deviceManager->device(), _swapchainSemaphore, nullptr);
_swapchainSemaphore = VK_NULL_HANDLE;
}
}

24
src/core/frame_resources.h Normal file
View File

@@ -0,0 +1,24 @@
#pragma once
#include <core/vk_types.h>
#include <core/vk_descriptors.h>
class DeviceManager;
struct FrameResources
{
VkSemaphore _swapchainSemaphore = VK_NULL_HANDLE;
VkSemaphore _renderSemaphore = VK_NULL_HANDLE;
VkFence _renderFence = VK_NULL_HANDLE;
VkCommandPool _commandPool = VK_NULL_HANDLE;
VkCommandBuffer _mainCommandBuffer = VK_NULL_HANDLE;
DeletionQueue _deletionQueue;
DescriptorAllocatorGrowable _frameDescriptors;
void init(DeviceManager *deviceManager,
std::span<DescriptorAllocatorGrowable::PoolSizeRatio> framePoolSizes);
void cleanup(DeviceManager *deviceManager);
};

53
src/core/vk_debug.cpp Normal file
View File

@@ -0,0 +1,53 @@
#include <core/vk_debug.h>
#include <cstring>
namespace vkdebug {
static inline PFN_vkCmdBeginDebugUtilsLabelEXT get_begin_fn(VkDevice device)
{
static PFN_vkCmdBeginDebugUtilsLabelEXT fn = nullptr;
static VkDevice cached = VK_NULL_HANDLE;
if (device != cached)
{
cached = device;
fn = reinterpret_cast<PFN_vkCmdBeginDebugUtilsLabelEXT>(
vkGetDeviceProcAddr(device, "vkCmdBeginDebugUtilsLabelEXT"));
}
return fn;
}
static inline PFN_vkCmdEndDebugUtilsLabelEXT get_end_fn(VkDevice device)
{
static PFN_vkCmdEndDebugUtilsLabelEXT fn = nullptr;
static VkDevice cached = VK_NULL_HANDLE;
if (device != cached)
{
cached = device;
fn = reinterpret_cast<PFN_vkCmdEndDebugUtilsLabelEXT>(
vkGetDeviceProcAddr(device, "vkCmdEndDebugUtilsLabelEXT"));
}
return fn;
}
void cmd_begin_label(VkDevice device, VkCommandBuffer cmd, const char* name,
float r, float g, float b, float a)
{
auto fn = get_begin_fn(device);
if (!fn) return;
VkDebugUtilsLabelEXT label{};
label.sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_LABEL_EXT;
label.pLabelName = name;
label.color[0] = r; label.color[1] = g; label.color[2] = b; label.color[3] = a;
fn(cmd, &label);
}
void cmd_end_label(VkDevice device, VkCommandBuffer cmd)
{
auto fn = get_end_fn(device);
if (!fn) return;
fn(cmd);
}
} // namespace vkdebug

14
src/core/vk_debug.h Normal file
View File

@@ -0,0 +1,14 @@
#pragma once
#include <core/vk_types.h>
namespace vkdebug
{
// Begin a debug label on a command buffer if VK_EXT_debug_utils is available.
void cmd_begin_label(VkDevice device, VkCommandBuffer cmd, const char *name,
float r = 0.2f, float g = 0.6f, float b = 0.9f, float a = 1.0f);
// End a debug label on a command buffer if VK_EXT_debug_utils is available.
void cmd_end_label(VkDevice device, VkCommandBuffer cmd);
}

35
src/core/vk_descriptor_manager.cpp Normal file
View File

@@ -0,0 +1,35 @@
#include "vk_descriptor_manager.h"
#include "vk_device.h"
#include "vk_descriptors.h"
void DescriptorManager::init(DeviceManager *deviceManager)
{
_deviceManager = deviceManager;
{
DescriptorLayoutBuilder builder;
builder.add_binding(0, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
_singleImageDescriptorLayout = builder.build(_deviceManager->device(), VK_SHADER_STAGE_FRAGMENT_BIT);
}
{
DescriptorLayoutBuilder builder;
builder.add_binding(0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
_gpuSceneDataDescriptorLayout = builder.build(
_deviceManager->device(), VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT);
}
}
void DescriptorManager::cleanup()
{
if (!_deviceManager) return;
if (_singleImageDescriptorLayout)
{
vkDestroyDescriptorSetLayout(_deviceManager->device(), _singleImageDescriptorLayout, nullptr);
_singleImageDescriptorLayout = VK_NULL_HANDLE;
}
if (_gpuSceneDataDescriptorLayout)
{
vkDestroyDescriptorSetLayout(_deviceManager->device(), _gpuSceneDataDescriptorLayout, nullptr);
_gpuSceneDataDescriptorLayout = VK_NULL_HANDLE;
}
}

24
src/core/vk_descriptor_manager.h Normal file
View File

@@ -0,0 +1,24 @@
#pragma once
#include <core/vk_types.h>
#include <core/vk_descriptors.h>
#include "vk_device.h"
class DeviceManager;
class DescriptorManager
{
public:
void init(DeviceManager *deviceManager);
void cleanup();
VkDescriptorSetLayout gpuSceneDataLayout() const { return _gpuSceneDataDescriptorLayout; }
VkDescriptorSetLayout singleImageLayout() const { return _singleImageDescriptorLayout; }
private:
DeviceManager *_deviceManager = nullptr;
VkDescriptorSetLayout _singleImageDescriptorLayout = VK_NULL_HANDLE;
VkDescriptorSetLayout _gpuSceneDataDescriptorLayout = VK_NULL_HANDLE;
};

257
src/core/vk_descriptors.cpp Normal file
View File

@@ -0,0 +1,257 @@
#include <core/vk_descriptors.h>
void DescriptorLayoutBuilder::add_binding(uint32_t binding, VkDescriptorType type)
{
VkDescriptorSetLayoutBinding newbind{};
newbind.binding = binding;
newbind.descriptorCount = 1;
newbind.descriptorType = type;
bindings.push_back(newbind);
}
void DescriptorLayoutBuilder::clear()
{
bindings.clear();
}
VkDescriptorSetLayout DescriptorLayoutBuilder::build(VkDevice device, VkShaderStageFlags shaderStages, void *pNext,
VkDescriptorSetLayoutCreateFlags flags)
{
for (auto &b: bindings)
{
b.stageFlags |= shaderStages;
}
VkDescriptorSetLayoutCreateInfo info = {.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO};
info.pNext = pNext;
info.pBindings = bindings.data();
info.bindingCount = (uint32_t) bindings.size();
info.flags = flags;
VkDescriptorSetLayout set;
VK_CHECK(vkCreateDescriptorSetLayout(device, &info, nullptr, &set));
return set;
}
void DescriptorWriter::write_buffer(int binding, VkBuffer buffer, size_t size, size_t offset, VkDescriptorType type)
{
VkDescriptorBufferInfo &info = bufferInfos.emplace_back(VkDescriptorBufferInfo{
.buffer = buffer,
.offset = offset,
.range = size
});
VkWriteDescriptorSet write = {.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET};
write.dstBinding = binding;
write.dstSet = VK_NULL_HANDLE; //left empty for now until we need to write it
write.descriptorCount = 1;
write.descriptorType = type;
write.pBufferInfo = &info;
writes.push_back(write);
}
void DescriptorWriter::write_image(int binding, VkImageView image, VkSampler sampler, VkImageLayout layout,
VkDescriptorType type)
{
VkDescriptorImageInfo &info = imageInfos.emplace_back(VkDescriptorImageInfo{
.sampler = sampler,
.imageView = image,
.imageLayout = layout
});
VkWriteDescriptorSet write = {.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET};
write.dstBinding = binding;
write.dstSet = VK_NULL_HANDLE; //left empty for now until we need to write it
write.descriptorCount = 1;
write.descriptorType = type;
write.pImageInfo = &info;
writes.push_back(write);
}
void DescriptorWriter::clear()
{
imageInfos.clear();
writes.clear();
bufferInfos.clear();
}
void DescriptorWriter::update_set(VkDevice device, VkDescriptorSet set)
{
for (VkWriteDescriptorSet& write : writes) {
write.dstSet = set;
}
vkUpdateDescriptorSets(device, (uint32_t)writes.size(), writes.data(), 0, nullptr);
}
void DescriptorAllocator::init_pool(VkDevice device, uint32_t maxSets, std::span<PoolSizeRatio> poolRatios)
{
std::vector<VkDescriptorPoolSize> poolSizes;
for (PoolSizeRatio ratio: poolRatios)
{
poolSizes.push_back(VkDescriptorPoolSize{
.type = ratio.type,
.descriptorCount = uint32_t(ratio.ratio * maxSets)
});
}
VkDescriptorPoolCreateInfo pool_info = {.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO};
pool_info.flags = 0;
pool_info.maxSets = maxSets;
pool_info.poolSizeCount = (uint32_t) poolSizes.size();
pool_info.pPoolSizes = poolSizes.data();
vkCreateDescriptorPool(device, &pool_info, nullptr, &pool);
}
void DescriptorAllocator::clear_descriptors(VkDevice device)
{
vkResetDescriptorPool(device, pool, 0);
}
void DescriptorAllocator::destroy_pool(VkDevice device)
{
vkDestroyDescriptorPool(device, pool, nullptr);
}
VkDescriptorSet DescriptorAllocator::allocate(VkDevice device, VkDescriptorSetLayout layout)
{
VkDescriptorSetAllocateInfo allocInfo = {.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO};
allocInfo.pNext = nullptr;
allocInfo.descriptorPool = pool;
allocInfo.descriptorSetCount = 1;
allocInfo.pSetLayouts = &layout;
VkDescriptorSet ds;
VK_CHECK(vkAllocateDescriptorSets(device, &allocInfo, &ds));
return ds;
}
VkDescriptorPool DescriptorAllocatorGrowable::get_pool(VkDevice device)
{
VkDescriptorPool newPool;
if (readyPools.size() != 0)
{
newPool = readyPools.back();
readyPools.pop_back();
}
else
{
//need to create a new pool
newPool = create_pool(device, setsPerPool, ratios);
setsPerPool = setsPerPool * 1.5;
if (setsPerPool > 4092)
{
setsPerPool = 4092;
}
}
return newPool;
}
VkDescriptorPool DescriptorAllocatorGrowable::create_pool(VkDevice device, uint32_t setCount,
std::span<PoolSizeRatio> poolRatios)
{
std::vector<VkDescriptorPoolSize> poolSizes;
for (PoolSizeRatio ratio: poolRatios)
{
poolSizes.push_back(VkDescriptorPoolSize{
.type = ratio.type,
.descriptorCount = uint32_t(ratio.ratio * setCount)
});
}
VkDescriptorPoolCreateInfo pool_info = {};
pool_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
pool_info.flags = 0;
pool_info.maxSets = setCount;
pool_info.poolSizeCount = (uint32_t) poolSizes.size();
pool_info.pPoolSizes = poolSizes.data();
VkDescriptorPool newPool;
vkCreateDescriptorPool(device, &pool_info, nullptr, &newPool);
return newPool;
}
void DescriptorAllocatorGrowable::init(VkDevice device, uint32_t maxSets, std::span<PoolSizeRatio> poolRatios)
{
ratios.clear();
for (auto r: poolRatios)
{
ratios.push_back(r);
}
VkDescriptorPool newPool = create_pool(device, maxSets, poolRatios);
setsPerPool = maxSets * 1.5; //grow it next allocation
readyPools.push_back(newPool);
}
void DescriptorAllocatorGrowable::clear_pools(VkDevice device)
{
for (auto p: readyPools)
{
vkResetDescriptorPool(device, p, 0);
}
for (auto p: fullPools)
{
vkResetDescriptorPool(device, p, 0);
readyPools.push_back(p);
}
fullPools.clear();
}
void DescriptorAllocatorGrowable::destroy_pools(VkDevice device)
{
for (auto p: readyPools)
{
vkDestroyDescriptorPool(device, p, nullptr);
}
readyPools.clear();
for (auto p: fullPools)
{
vkDestroyDescriptorPool(device, p, nullptr);
}
fullPools.clear();
}
VkDescriptorSet DescriptorAllocatorGrowable::allocate(VkDevice device, VkDescriptorSetLayout layout, void *pNext)
{
//get or create a pool to allocate from
VkDescriptorPool poolToUse = get_pool(device);
VkDescriptorSetAllocateInfo allocInfo = {};
allocInfo.pNext = pNext;
allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
allocInfo.descriptorPool = poolToUse;
allocInfo.descriptorSetCount = 1;
allocInfo.pSetLayouts = &layout;
VkDescriptorSet ds;
VkResult result = vkAllocateDescriptorSets(device, &allocInfo, &ds);
//allocation failed. Try again
if (result == VK_ERROR_OUT_OF_POOL_MEMORY || result == VK_ERROR_FRAGMENTED_POOL)
{
fullPools.push_back(poolToUse);
poolToUse = get_pool(device);
allocInfo.descriptorPool = poolToUse;
VK_CHECK(vkAllocateDescriptorSets(device, &allocInfo, &ds));
}
readyPools.push_back(poolToUse);
return ds;
}
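// Example (sketch, not part of the original file): per-frame use of the growable
// allocator. Ratios and counts are illustrative; `device` and `layout` are placeholders.
//
//   DescriptorAllocatorGrowable frameDescriptors;
//   std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> ratios = {
//       { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 3 },
//       { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4 },
//   };
//   frameDescriptors.init(device, 1000, ratios);
//   VkDescriptorSet set = frameDescriptors.allocate(device, layout);
//   // ... record and submit work that uses `set` ...
//   frameDescriptors.clear_pools(device);   // once this frame's fence has signaled
//   frameDescriptors.destroy_pools(device); // at shutdown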

79
src/core/vk_descriptors.h Normal file

@@ -0,0 +1,79 @@
#pragma once
#include <core/vk_types.h>
struct DescriptorLayoutBuilder
{
std::vector<VkDescriptorSetLayoutBinding> bindings;
void add_binding(uint32_t binding, VkDescriptorType type);
void clear();
VkDescriptorSetLayout build(VkDevice device, VkShaderStageFlags shaderStages, void *pNext = nullptr,
VkDescriptorSetLayoutCreateFlags flags = 0);
};
struct DescriptorWriter
{
std::deque<VkDescriptorImageInfo> imageInfos;
std::deque<VkDescriptorBufferInfo> bufferInfos;
std::vector<VkWriteDescriptorSet> writes;
void write_image(int binding, VkImageView image, VkSampler sampler, VkImageLayout layout, VkDescriptorType type);
void write_buffer(int binding, VkBuffer buffer, size_t size, size_t offset, VkDescriptorType type);
void clear();
void update_set(VkDevice device, VkDescriptorSet set);
};
struct DescriptorAllocator
{
struct PoolSizeRatio
{
VkDescriptorType type;
float ratio;
};
VkDescriptorPool pool;
void init_pool(VkDevice device, uint32_t maxSets, std::span<PoolSizeRatio> poolRatios);
void clear_descriptors(VkDevice device);
void destroy_pool(VkDevice device);
VkDescriptorSet allocate(VkDevice device, VkDescriptorSetLayout layout);
};
struct DescriptorAllocatorGrowable
{
public:
struct PoolSizeRatio
{
VkDescriptorType type;
float ratio;
};
void init(VkDevice device, uint32_t initialSets, std::span<PoolSizeRatio> poolRatios);
void clear_pools(VkDevice device);
void destroy_pools(VkDevice device);
VkDescriptorSet allocate(VkDevice device, VkDescriptorSetLayout layout, void *pNext = nullptr);
private:
VkDescriptorPool get_pool(VkDevice device);
VkDescriptorPool create_pool(VkDevice device, uint32_t setCount, std::span<PoolSizeRatio> poolRatios);
std::vector<PoolSizeRatio> ratios;
std::vector<VkDescriptorPool> fullPools;
std::vector<VkDescriptorPool> readyPools;
uint32_t setsPerPool;
};

83
src/core/vk_device.cpp Normal file

@@ -0,0 +1,83 @@
#include "vk_device.h"
#include "config.h"
#include "SDL2/SDL.h"
#include "SDL2/SDL_vulkan.h"
void DeviceManager::init_vulkan(SDL_Window *window)
{
vkb::InstanceBuilder builder;
//make the vulkan instance, with basic debug features
auto inst_ret = builder.set_app_name("Example Vulkan Application")
.request_validation_layers(kUseValidationLayers)
.use_default_debug_messenger()
.require_api_version(1, 3, 0)
.build();
vkb::Instance vkb_inst = inst_ret.value();
//grab the instance
_instance = vkb_inst.instance;
_debug_messenger = vkb_inst.debug_messenger;
SDL_Vulkan_CreateSurface(window, _instance, &_surface);
VkPhysicalDeviceVulkan13Features features{.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_3_FEATURES};
features.dynamicRendering = true;
features.synchronization2 = true;
VkPhysicalDeviceVulkan12Features features12{.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES};
features12.bufferDeviceAddress = true;
features12.descriptorIndexing = true;
//use vkbootstrap to select a gpu.
//We want a gpu that can write to the SDL surface and supports vulkan 1.3
vkb::PhysicalDeviceSelector selector{vkb_inst};
vkb::PhysicalDevice physicalDevice = selector
.set_minimum_version(1, 3)
.set_required_features_13(features)
.set_required_features_12(features12)
.set_surface(_surface)
.select()
.value();
//physicalDevice.features.
//create the final vulkan device
vkb::DeviceBuilder deviceBuilder{physicalDevice};
vkb::Device vkbDevice = deviceBuilder.build().value();
// Get the VkDevice handle used in the rest of a vulkan application
_device = vkbDevice.device;
_chosenGPU = physicalDevice.physical_device;
// use vkbootstrap to get a Graphics queue
_graphicsQueue = vkbDevice.get_queue(vkb::QueueType::graphics).value();
_graphicsQueueFamily = vkbDevice.get_queue_index(vkb::QueueType::graphics).value();
//> vma_init
//initialize the memory allocator
VmaAllocatorCreateInfo allocatorInfo = {};
allocatorInfo.physicalDevice = _chosenGPU;
allocatorInfo.device = _device;
allocatorInfo.instance = _instance;
allocatorInfo.flags = VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT;
vmaCreateAllocator(&allocatorInfo, &_allocator);
_deletionQueue.push_function([&]() {
vmaDestroyAllocator(_allocator);
});
//< vma_init
}
void DeviceManager::cleanup()
{
vkDestroySurfaceKHR(_instance, _surface, nullptr);
_deletionQueue.flush();
vkDestroyDevice(_device, nullptr);
vkb::destroy_debug_utils_messenger(_instance, _debug_messenger);
vkDestroyInstance(_instance, nullptr);
fmt::print("DeviceManager::cleanup()\n");
}

33
src/core/vk_device.h Normal file

@@ -0,0 +1,33 @@
#pragma once
#include <core/vk_types.h>
#include "VkBootstrap.h"
class DeviceManager
{
public:
void init_vulkan(struct SDL_Window *window);
void cleanup();
VkDevice device() const { return _device; }
VkInstance instance() const { return _instance; }
VkPhysicalDevice physicalDevice() const { return _chosenGPU; }
VkSurfaceKHR surface() const { return _surface; }
VkQueue graphicsQueue() const { return _graphicsQueue; }
uint32_t graphicsQueueFamily() const { return _graphicsQueueFamily; }
VmaAllocator allocator() const { return _allocator; }
VkDebugUtilsMessengerEXT debugMessenger() { return _debug_messenger; }
private:
VkInstance _instance = nullptr;
VkDebugUtilsMessengerEXT _debug_messenger = nullptr;
VkPhysicalDevice _chosenGPU = nullptr;
VkDevice _device = nullptr;
VkSurfaceKHR _surface = nullptr;
VkQueue _graphicsQueue = nullptr;
uint32_t _graphicsQueueFamily = 0;
VmaAllocator _allocator = nullptr;
DeletionQueue _deletionQueue;
};

741
src/core/vk_engine.cpp Normal file

@@ -0,0 +1,741 @@
//> includes
#include "vk_engine.h"
#include <core/vk_images.h>
#include "SDL2/SDL.h"
#include "SDL2/SDL_vulkan.h"
#include <core/vk_initializers.h>
#include <core/vk_types.h>
#include "VkBootstrap.h"
#include <chrono>
#include <thread>
#include "render/vk_pipelines.h"
#include <iostream>
#include <glm/gtx/transform.hpp>
#include "render/primitives.h"
#include "vk_mem_alloc.h"
#include "imgui.h"
#include "imgui_impl_sdl2.h"
#include "imgui_impl_vulkan.h"
#include "render/vk_renderpass_geometry.h"
#include "render/vk_renderpass_imgui.h"
#include "render/vk_renderpass_lighting.h"
#include "render/vk_renderpass_transparent.h"
#include "render/vk_renderpass_tonemap.h"
#include "render/vk_renderpass_shadow.h"
#include "vk_resource.h"
#include "engine_context.h"
#include "core/vk_pipeline_manager.h"
VulkanEngine *loadedEngine = nullptr;
void VulkanEngine::init()
{
// We initialize SDL and create a window with it.
SDL_Init(SDL_INIT_VIDEO);
constexpr auto window_flags = static_cast<SDL_WindowFlags>(SDL_WINDOW_VULKAN | SDL_WINDOW_RESIZABLE);
_swapchainManager = std::make_unique<SwapchainManager>();
_window = SDL_CreateWindow(
"Vulkan Engine",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
_swapchainManager->windowExtent().width,
_swapchainManager->windowExtent().height,
window_flags
);
_deviceManager = std::make_shared<DeviceManager>();
_deviceManager->init_vulkan(_window);
_resourceManager = std::make_shared<ResourceManager>();
_resourceManager->init(_deviceManager.get());
_descriptorManager = std::make_unique<DescriptorManager>();
_descriptorManager->init(_deviceManager.get());
_samplerManager = std::make_unique<SamplerManager>();
_samplerManager->init(_deviceManager.get());
// Build dependency-injection context
_context = std::make_shared<EngineContext>();
_context->device = _deviceManager;
_context->resources = _resourceManager;
_context->descriptors = std::make_shared<DescriptorAllocatorGrowable>(); {
std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> sizes = {
{VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, 1},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1},
{VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4},
};
_context->descriptors->init(_deviceManager->device(), 10, sizes);
}
_swapchainManager->init(_deviceManager.get(), _resourceManager.get());
_swapchainManager->init_swapchain();
// Fill remaining context pointers now that managers exist
_context->descriptorLayouts = _descriptorManager.get();
_context->samplers = _samplerManager.get();
_context->swapchain = _swapchainManager.get();
// Create graphics pipeline manager (after swapchain is ready)
_pipelineManager = std::make_unique<PipelineManager>();
_pipelineManager->init(_context.get());
_context->pipelines = _pipelineManager.get();
// Create central AssetManager for paths and asset caching
_assetManager = std::make_unique<AssetManager>();
_assetManager->init(this);
_context->assets = _assetManager.get();
_sceneManager = std::make_unique<SceneManager>();
_sceneManager->init(_context.get());
_context->scene = _sceneManager.get();
compute.init(_context.get());
// Publish engine-owned subsystems into context for modules
_context->compute = &compute;
_context->window = _window;
_context->stats = &stats;
// Render graph skeleton
_renderGraph = std::make_unique<RenderGraph>();
_renderGraph->init(_context.get());
_context->renderGraph = _renderGraph.get();
init_frame_resources();
// Build material pipelines early so materials can be created
metalRoughMaterial.build_pipelines(this);
init_default_data();
_renderPassManager = std::make_unique<RenderPassManager>();
_renderPassManager->init(_context.get());
auto imguiPass = std::make_unique<ImGuiPass>();
_renderPassManager->setImGuiPass(std::move(imguiPass));
const std::string structurePath = _assetManager->modelPath("seoul_high.glb");
const auto structureFile = _assetManager->loadGLTF(structurePath);
assert(structureFile.has_value());
_sceneManager->loadScene("structure", *structureFile);
_resourceManager->set_deferred_uploads(true);
//everything went fine
_isInitialized = true;
}
void VulkanEngine::init_default_data()
{
//> default_img
//3 default textures, white, grey, black. 1 pixel each
uint32_t white = glm::packUnorm4x8(glm::vec4(1, 1, 1, 1));
_whiteImage = _resourceManager->create_image((void *) &white, VkExtent3D{1, 1, 1}, VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_SAMPLED_BIT);
uint32_t grey = glm::packUnorm4x8(glm::vec4(0.66f, 0.66f, 0.66f, 1));
_greyImage = _resourceManager->create_image((void *) &grey, VkExtent3D{1, 1, 1}, VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_SAMPLED_BIT);
uint32_t black = glm::packUnorm4x8(glm::vec4(0, 0, 0, 0));
_blackImage = _resourceManager->create_image((void *) &black, VkExtent3D{1, 1, 1}, VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_SAMPLED_BIT);
//checkerboard image
uint32_t magenta = glm::packUnorm4x8(glm::vec4(1, 0, 1, 1));
std::array<uint32_t, 16 * 16> pixels{}; //for 16x16 checkerboard texture
for (int x = 0; x < 16; x++)
{
for (int y = 0; y < 16; y++)
{
pixels[y * 16 + x] = ((x % 2) ^ (y % 2)) ? magenta : black;
}
}
_errorCheckerboardImage = _resourceManager->create_image(pixels.data(), VkExtent3D{16, 16, 1},
VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_SAMPLED_BIT);
// build default primitive meshes via generic AssetManager API
{
AssetManager::MeshCreateInfo ci{};
ci.name = "Cube";
ci.geometry.type = AssetManager::MeshGeometryDesc::Type::Cube;
ci.material.kind = AssetManager::MeshMaterialDesc::Kind::Default;
cubeMesh = _assetManager->createMesh(ci);
}
{
AssetManager::MeshCreateInfo ci{};
ci.name = "Sphere";
ci.geometry.type = AssetManager::MeshGeometryDesc::Type::Sphere;
ci.geometry.sectors = 16;
ci.geometry.stacks = 16;
ci.material.kind = AssetManager::MeshMaterialDesc::Kind::Default;
sphereMesh = _assetManager->createMesh(ci);
}
// Register default primitives as dynamic scene instances
if (_sceneManager)
{
_sceneManager->addMeshInstance("default.cube", cubeMesh,
glm::translate(glm::mat4(1.f), glm::vec3(-2.f, 0.f, -2.f)));
_sceneManager->addMeshInstance("default.sphere", sphereMesh,
glm::translate(glm::mat4(1.f), glm::vec3(2.f, 0.f, -2.f)));
}
_mainDeletionQueue.push_function([&]() {
_resourceManager->destroy_image(_whiteImage);
_resourceManager->destroy_image(_greyImage);
_resourceManager->destroy_image(_blackImage);
_resourceManager->destroy_image(_errorCheckerboardImage);
});
//< default_img
}
void VulkanEngine::cleanup()
{
vkDeviceWaitIdle(_deviceManager->device());
_sceneManager->cleanup();
if (_isInitialized)
{
//make sure the gpu has stopped doing its things
vkDeviceWaitIdle(_deviceManager->device());
// Flush all frame deletion queues first while VMA allocator is still alive
for (int i = 0; i < FRAME_OVERLAP; i++)
{
_frames[i]._deletionQueue.flush();
}
for (int i = 0; i < FRAME_OVERLAP; i++)
{
_frames[i].cleanup(_deviceManager.get());
}
metalRoughMaterial.clear_resources(_deviceManager->device());
_mainDeletionQueue.flush();
_renderPassManager->cleanup();
_pipelineManager->cleanup();
compute.cleanup();
_swapchainManager->cleanup();
if (_assetManager) _assetManager->cleanup();
_resourceManager->cleanup();
_samplerManager->cleanup();
_descriptorManager->cleanup();
_context->descriptors->destroy_pools(_deviceManager->device());
_deviceManager->cleanup();
SDL_DestroyWindow(_window);
}
}
void VulkanEngine::draw()
{
_sceneManager->update_scene();
//> frame_clear
//wait until the gpu has finished rendering the last frame. Timeout of 1 second
VK_CHECK(vkWaitForFences(_deviceManager->device(), 1, &get_current_frame()._renderFence, true, 1000000000));
get_current_frame()._deletionQueue.flush();
get_current_frame()._frameDescriptors.clear_pools(_deviceManager->device());
//< frame_clear
uint32_t swapchainImageIndex;
VkResult e = vkAcquireNextImageKHR(_deviceManager->device(), _swapchainManager->swapchain(), 1000000000,
get_current_frame()._swapchainSemaphore,
nullptr, &swapchainImageIndex);
if (e == VK_ERROR_OUT_OF_DATE_KHR)
{
resize_requested = true;
return;
}
_drawExtent.height = std::min(_swapchainManager->swapchainExtent().height,
_swapchainManager->drawImage().imageExtent.height) * renderScale;
_drawExtent.width = std::min(_swapchainManager->swapchainExtent().width,
_swapchainManager->drawImage().imageExtent.width) * renderScale;
VK_CHECK(vkResetFences(_deviceManager->device(), 1, &get_current_frame()._renderFence));
//now that we are sure that the commands finished executing, we can safely reset the command buffer to begin recording again.
VK_CHECK(vkResetCommandBuffer(get_current_frame()._mainCommandBuffer, 0));
//naming it cmd for shorter writing
VkCommandBuffer cmd = get_current_frame()._mainCommandBuffer;
//begin the command buffer recording. We will use this command buffer exactly once, so we want to let vulkan know that
VkCommandBufferBeginInfo cmdBeginInfo = vkinit::command_buffer_begin_info(
VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT);
//---------------------------
VK_CHECK(vkBeginCommandBuffer(cmd, &cmdBeginInfo));
// publish per-frame pointers and draw extent to context for passes
_context->currentFrame = &get_current_frame();
_context->drawExtent = _drawExtent;
// Optional: check for shader changes and hot-reload pipelines
if (_pipelineManager)
{
_pipelineManager->hotReloadChanged();
}
// --- RenderGraph frame build ---
if (_renderGraph)
{
_renderGraph->clear();
RGImageHandle hDraw = _renderGraph->import_draw_image();
RGImageHandle hDepth = _renderGraph->import_depth_image();
RGImageHandle hGBufferPosition = _renderGraph->import_gbuffer_position();
RGImageHandle hGBufferNormal = _renderGraph->import_gbuffer_normal();
RGImageHandle hGBufferAlbedo = _renderGraph->import_gbuffer_albedo();
RGImageHandle hSwapchain = _renderGraph->import_swapchain_image(swapchainImageIndex);
// Create a transient shadow depth target (fixed resolution for now)
const VkExtent2D shadowExtent{2048, 2048};
RGImageHandle hShadow = _renderGraph->create_depth_image("shadow.depth", shadowExtent, VK_FORMAT_D32_SFLOAT);
_resourceManager->register_upload_pass(*_renderGraph, get_current_frame());
ImGuiPass *imguiPass = nullptr;
RGImageHandle finalColor = hDraw; // by default, present HDR draw directly (copy)
if (_renderPassManager)
{
if (auto *background = _renderPassManager->getPass<BackgroundPass>())
{
background->register_graph(_renderGraph.get(), hDraw, hDepth);
}
if (auto *shadow = _renderPassManager->getPass<ShadowPass>())
{
shadow->register_graph(_renderGraph.get(), hShadow, shadowExtent);
}
if (auto *geometry = _renderPassManager->getPass<GeometryPass>())
{
geometry->register_graph(_renderGraph.get(), hGBufferPosition, hGBufferNormal, hGBufferAlbedo, hDepth);
}
if (auto *lighting = _renderPassManager->getPass<LightingPass>())
{
lighting->register_graph(_renderGraph.get(), hDraw, hGBufferPosition, hGBufferNormal, hGBufferAlbedo, hShadow);
}
if (auto *transparent = _renderPassManager->getPass<TransparentPass>())
{
transparent->register_graph(_renderGraph.get(), hDraw, hDepth);
}
imguiPass = _renderPassManager->getImGuiPass();
// Optional Tonemap pass: sample HDR draw -> LDR intermediate
if (auto *tonemap = _renderPassManager->getPass<TonemapPass>())
{
finalColor = tonemap->register_graph(_renderGraph.get(), hDraw);
}
}
auto appendPresentExtras = [imguiPass, hSwapchain](RenderGraph &graph)
{
if (imguiPass)
{
imguiPass->register_graph(&graph, hSwapchain);
}
};
_renderGraph->add_present_chain(finalColor, hSwapchain, appendPresentExtras);
// Apply persistent pass enable overrides
for (size_t i = 0; i < _renderGraph->pass_count(); ++i)
{
const char* name = _renderGraph->pass_name(i);
auto it = _rgPassToggles.find(name);
if (it != _rgPassToggles.end())
{
_renderGraph->set_pass_enabled(i, it->second);
}
}
if (_renderGraph->compile())
{
_renderGraph->execute(cmd);
}
}
VK_CHECK(vkEndCommandBuffer(cmd));
VkCommandBufferSubmitInfo cmdinfo = vkinit::command_buffer_submit_info(cmd);
VkSemaphoreSubmitInfo waitInfo = vkinit::semaphore_submit_info(VK_PIPELINE_STAGE_2_COLOR_ATTACHMENT_OUTPUT_BIT_KHR,
get_current_frame()._swapchainSemaphore);
VkSemaphoreSubmitInfo signalInfo = vkinit::semaphore_submit_info(VK_PIPELINE_STAGE_2_ALL_GRAPHICS_BIT,
get_current_frame()._renderSemaphore);
VkSubmitInfo2 submit = vkinit::submit_info(&cmdinfo, &signalInfo, &waitInfo);
VK_CHECK(vkQueueSubmit2(_deviceManager->graphicsQueue(), 1, &submit, get_current_frame()._renderFence));
VkPresentInfoKHR presentInfo = vkinit::present_info();
VkSwapchainKHR swapchain = _swapchainManager->swapchain();
presentInfo.pSwapchains = &swapchain;
presentInfo.swapchainCount = 1;
presentInfo.pWaitSemaphores = &get_current_frame()._renderSemaphore;
presentInfo.waitSemaphoreCount = 1;
presentInfo.pImageIndices = &swapchainImageIndex;
VkResult presentResult = vkQueuePresentKHR(_deviceManager->graphicsQueue(), &presentInfo);
if (presentResult == VK_ERROR_OUT_OF_DATE_KHR)
{
resize_requested = true;
}
_frameNumber++;
}
void VulkanEngine::run()
{
SDL_Event e;
bool bQuit = false;
//main loop
while (!bQuit)
{
auto start = std::chrono::system_clock::now();
//Handle events on queue
while (SDL_PollEvent(&e) != 0)
{
//close the window when user alt-f4s or clicks the X button
if (e.type == SDL_QUIT) bQuit = true;
if (e.type == SDL_WINDOWEVENT)
{
if (e.window.event == SDL_WINDOWEVENT_MINIMIZED)
{
freeze_rendering = true;
}
if (e.window.event == SDL_WINDOWEVENT_RESTORED)
{
freeze_rendering = false;
}
}
_sceneManager->getMainCamera().processSDLEvent(e);
ImGui_ImplSDL2_ProcessEvent(&e);
}
if (freeze_rendering)
{
//throttle the speed to avoid the endless spinning
std::this_thread::sleep_for(std::chrono::milliseconds(100));
continue;
}
if (resize_requested)
{
_swapchainManager->resize_swapchain(_window);
resize_requested = false;
}
// imgui new frame
ImGui_ImplVulkan_NewFrame();
ImGui_ImplSDL2_NewFrame();
ImGui::NewFrame();
if (ImGui::Begin("background"))
{
auto background_pass = _renderPassManager->getPass<BackgroundPass>();
ComputeEffect &selected = background_pass->_backgroundEffects[background_pass->_currentEffect];
ImGui::Text("Selected effect: %s", selected.name);
ImGui::SliderInt("Effect Index", &background_pass->_currentEffect, 0,
background_pass->_backgroundEffects.size() - 1);
ImGui::InputFloat4("data1", reinterpret_cast<float *>(&selected.data.data1));
ImGui::InputFloat4("data2", reinterpret_cast<float *>(&selected.data.data2));
ImGui::InputFloat4("data3", reinterpret_cast<float *>(&selected.data.data3));
ImGui::InputFloat4("data4", reinterpret_cast<float *>(&selected.data.data4));
ImGui::SliderFloat("Render Scale", &renderScale, 0.3f, 1.f);
}
// ImGui::End() must be called even when Begin() returns false (e.g. window collapsed)
ImGui::End();
if (ImGui::Begin("Stats"))
{
ImGui::Text("frametime %f ms", stats.frametime);
ImGui::Text("draw time %f ms", stats.mesh_draw_time);
ImGui::Text("update time %f ms", _sceneManager->stats.scene_update_time);
ImGui::Text("triangles %i", stats.triangle_count);
ImGui::Text("draws %i", stats.drawcall_count);
}
ImGui::End();
// Render Graph debug window
if (ImGui::Begin("Render Graph"))
{
if (_renderGraph)
{
auto &graph = *_renderGraph;
std::vector<RenderGraph::RGDebugPassInfo> passInfos;
graph.debug_get_passes(passInfos);
if (ImGui::Button("Reload Pipelines")) { _pipelineManager->hotReloadChanged(); }
ImGui::SameLine();
ImGui::Text("%zu passes", passInfos.size());
if (ImGui::BeginTable("passes", 6, ImGuiTableFlags_RowBg | ImGuiTableFlags_SizingStretchProp))
{
ImGui::TableSetupColumn("Enable", ImGuiTableColumnFlags_WidthFixed, 70);
ImGui::TableSetupColumn("Name");
ImGui::TableSetupColumn("Type", ImGuiTableColumnFlags_WidthFixed, 90);
ImGui::TableSetupColumn("Imgs", ImGuiTableColumnFlags_WidthFixed, 60);
ImGui::TableSetupColumn("Bufs", ImGuiTableColumnFlags_WidthFixed, 60);
ImGui::TableSetupColumn("Attachments", ImGuiTableColumnFlags_WidthFixed, 100);
ImGui::TableHeadersRow();
auto typeName = [](RGPassType t){
switch (t) {
case RGPassType::Graphics: return "Graphics";
case RGPassType::Compute: return "Compute";
case RGPassType::Transfer: return "Transfer";
default: return "?";
}
};
for (size_t i = 0; i < passInfos.size(); ++i)
{
auto &pi = passInfos[i];
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0);
bool enabled = true;
if (auto it = _rgPassToggles.find(pi.name); it != _rgPassToggles.end()) enabled = it->second;
std::string chkId = std::string("##en") + std::to_string(i);
if (ImGui::Checkbox(chkId.c_str(), &enabled))
{
_rgPassToggles[pi.name] = enabled;
}
ImGui::TableSetColumnIndex(1);
ImGui::TextUnformatted(pi.name.c_str());
ImGui::TableSetColumnIndex(2);
ImGui::TextUnformatted(typeName(pi.type));
ImGui::TableSetColumnIndex(3);
ImGui::Text("%u/%u", pi.imageReads, pi.imageWrites);
ImGui::TableSetColumnIndex(4);
ImGui::Text("%u/%u", pi.bufferReads, pi.bufferWrites);
ImGui::TableSetColumnIndex(5);
ImGui::Text("%u%s", pi.colorAttachmentCount, pi.hasDepth ? "+D" : "");
}
ImGui::EndTable();
}
if (ImGui::CollapsingHeader("Images", ImGuiTreeNodeFlags_DefaultOpen))
{
std::vector<RenderGraph::RGDebugImageInfo> imgs;
graph.debug_get_images(imgs);
if (ImGui::BeginTable("images", 7, ImGuiTableFlags_RowBg | ImGuiTableFlags_SizingStretchProp))
{
ImGui::TableSetupColumn("Id", ImGuiTableColumnFlags_WidthFixed, 40);
ImGui::TableSetupColumn("Name");
ImGui::TableSetupColumn("Fmt", ImGuiTableColumnFlags_WidthFixed, 120);
ImGui::TableSetupColumn("Extent", ImGuiTableColumnFlags_WidthFixed, 120);
ImGui::TableSetupColumn("Imported", ImGuiTableColumnFlags_WidthFixed, 70);
ImGui::TableSetupColumn("Usage", ImGuiTableColumnFlags_WidthFixed, 80);
ImGui::TableSetupColumn("Life", ImGuiTableColumnFlags_WidthFixed, 80);
ImGui::TableHeadersRow();
for (const auto &im : imgs)
{
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0); ImGui::Text("%u", im.id);
ImGui::TableSetColumnIndex(1); ImGui::TextUnformatted(im.name.c_str());
ImGui::TableSetColumnIndex(2); ImGui::TextUnformatted(string_VkFormat(im.format));
ImGui::TableSetColumnIndex(3); ImGui::Text("%ux%u", im.extent.width, im.extent.height);
ImGui::TableSetColumnIndex(4); ImGui::TextUnformatted(im.imported ? "yes" : "no");
ImGui::TableSetColumnIndex(5); ImGui::Text("0x%x", (unsigned)im.creationUsage);
ImGui::TableSetColumnIndex(6); ImGui::Text("%d..%d", im.firstUse, im.lastUse);
}
ImGui::EndTable();
}
}
if (ImGui::CollapsingHeader("Buffers"))
{
std::vector<RenderGraph::RGDebugBufferInfo> bufs;
graph.debug_get_buffers(bufs);
if (ImGui::BeginTable("buffers", 6, ImGuiTableFlags_RowBg | ImGuiTableFlags_SizingStretchProp))
{
ImGui::TableSetupColumn("Id", ImGuiTableColumnFlags_WidthFixed, 40);
ImGui::TableSetupColumn("Name");
ImGui::TableSetupColumn("Size", ImGuiTableColumnFlags_WidthFixed, 100);
ImGui::TableSetupColumn("Imported", ImGuiTableColumnFlags_WidthFixed, 70);
ImGui::TableSetupColumn("Usage", ImGuiTableColumnFlags_WidthFixed, 100);
ImGui::TableSetupColumn("Life", ImGuiTableColumnFlags_WidthFixed, 80);
ImGui::TableHeadersRow();
for (const auto &bf : bufs)
{
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0); ImGui::Text("%u", bf.id);
ImGui::TableSetColumnIndex(1); ImGui::TextUnformatted(bf.name.c_str());
ImGui::TableSetColumnIndex(2); ImGui::Text("%zu", (size_t)bf.size);
ImGui::TableSetColumnIndex(3); ImGui::TextUnformatted(bf.imported ? "yes" : "no");
ImGui::TableSetColumnIndex(4); ImGui::Text("0x%x", (unsigned)bf.usage);
ImGui::TableSetColumnIndex(5); ImGui::Text("%d..%d", bf.firstUse, bf.lastUse);
}
ImGui::EndTable();
}
}
}
}
ImGui::End();
// Pipelines debug window (graphics)
if (ImGui::Begin("Pipelines"))
{
if (_pipelineManager)
{
std::vector<PipelineManager::GraphicsPipelineDebugInfo> pipes;
_pipelineManager->debug_get_graphics(pipes);
if (ImGui::Button("Reload Changed")) { _pipelineManager->hotReloadChanged(); }
ImGui::SameLine(); ImGui::Text("%zu graphics pipelines", pipes.size());
if (ImGui::BeginTable("gfxpipes", 5, ImGuiTableFlags_RowBg | ImGuiTableFlags_SizingStretchProp))
{
ImGui::TableSetupColumn("Name");
ImGui::TableSetupColumn("VS");
ImGui::TableSetupColumn("FS");
ImGui::TableSetupColumn("Valid", ImGuiTableColumnFlags_WidthFixed, 60);
ImGui::TableHeadersRow();
for (const auto &p : pipes)
{
ImGui::TableNextRow();
ImGui::TableSetColumnIndex(0); ImGui::TextUnformatted(p.name.c_str());
ImGui::TableSetColumnIndex(1); ImGui::TextUnformatted(p.vertexShaderPath.c_str());
ImGui::TableSetColumnIndex(2); ImGui::TextUnformatted(p.fragmentShaderPath.c_str());
ImGui::TableSetColumnIndex(3); ImGui::TextUnformatted(p.valid ? "yes" : "no");
}
ImGui::EndTable();
}
}
}
ImGui::End();
// Draw targets window
if (ImGui::Begin("Targets"))
{
ImGui::Text("Draw extent: %ux%u", _drawExtent.width, _drawExtent.height);
auto scExt = _swapchainManager->swapchainExtent();
ImGui::Text("Swapchain: %ux%u", scExt.width, scExt.height);
ImGui::Text("Draw fmt: %s", string_VkFormat(_swapchainManager->drawImage().imageFormat));
ImGui::Text("Swap fmt: %s", string_VkFormat(_swapchainManager->swapchainImageFormat()));
}
ImGui::End();
// PostFX window
if (ImGui::Begin("PostFX"))
{
if (auto *tm = _renderPassManager->getPass<TonemapPass>())
{
float exp = tm->exposure();
int mode = tm->mode();
if (ImGui::SliderFloat("Exposure", &exp, 0.05f, 8.0f)) { tm->setExposure(exp); }
ImGui::TextUnformatted("Operator");
ImGui::SameLine();
if (ImGui::RadioButton("Reinhard", mode == 0)) { mode = 0; tm->setMode(mode); }
ImGui::SameLine();
if (ImGui::RadioButton("ACES", mode == 1)) { mode = 1; tm->setMode(mode); }
}
else
{
ImGui::TextUnformatted("Tonemap pass not available");
}
}
ImGui::End();
// Scene window
if (ImGui::Begin("Scene"))
{
const DrawContext &dc = _context->getMainDrawContext();
ImGui::Text("Opaque draws: %zu", dc.OpaqueSurfaces.size());
ImGui::Text("Transp draws: %zu", dc.TransparentSurfaces.size());
}
ImGui::End();
ImGui::Render();
draw();
auto end = std::chrono::system_clock::now();
//convert to microseconds (integer), and then come back to milliseconds
auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
stats.frametime = elapsed.count() / 1000.f;
}
}
void VulkanEngine::init_frame_resources()
{
// descriptor pool sizes per-frame
std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> frame_sizes = {
{VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, 3},
{VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 3},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 3},
{VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 4},
};
for (int i = 0; i < FRAME_OVERLAP; i++)
{
_frames[i].init(_deviceManager.get(), frame_sizes);
}
}
void VulkanEngine::init_pipelines()
{
metalRoughMaterial.build_pipelines(this);
}
void MeshNode::Draw(const glm::mat4 &topMatrix, DrawContext &ctx)
{
glm::mat4 nodeMatrix = topMatrix * worldTransform;
for (auto &s: mesh->surfaces)
{
RenderObject def{};
def.indexCount = s.count;
def.firstIndex = s.startIndex;
def.indexBuffer = mesh->meshBuffers.indexBuffer.buffer;
def.vertexBuffer = mesh->meshBuffers.vertexBuffer.buffer;
def.bounds = s.bounds; // ensure culling uses correct mesh-local AABB
def.material = &s.material->data;
def.transform = nodeMatrix;
def.vertexBufferAddress = mesh->meshBuffers.vertexBufferAddress;
if (s.material->data.passType == MaterialPass::Transparent)
{
ctx.TransparentSurfaces.push_back(def);
}
else
{
ctx.OpaqueSurfaces.push_back(def);
}
}
// recurse down
Node::Draw(topMatrix, ctx);
}

133
src/core/vk_engine.h Normal file

@@ -0,0 +1,133 @@
// vk_engine.h : Include file for standard system include files,
// or project specific include files.
#pragma once
#include <core/vk_types.h>
#include <vector>
#include <string>
#include <unordered_map>
#include "vk_mem_alloc.h"
#include <deque>
#include <functional>
#include "vk_descriptors.h"
#include "scene/vk_loader.h"
#include "compute/vk_compute.h"
#include <scene/camera.h>
#include "vk_device.h"
#include "render/vk_renderpass.h"
#include "render/vk_renderpass_background.h"
#include "vk_resource.h"
#include "vk_swapchain.h"
#include "scene/vk_scene.h"
#include "render/vk_materials.h"
#include "frame_resources.h"
#include "vk_descriptor_manager.h"
#include "vk_sampler_manager.h"
#include "core/engine_context.h"
#include "core/vk_pipeline_manager.h"
#include "core/asset_manager.h"
#include "render/rg_graph.h"
constexpr unsigned int FRAME_OVERLAP = 2;
// Compute push constants and effects are declared in compute/vk_compute.h now.
struct RenderPass
{
std::string name;
std::function<void(VkCommandBuffer)> execute;
};
struct MeshNode : public Node
{
std::shared_ptr<MeshAsset> mesh;
virtual void Draw(const glm::mat4 &topMatrix, DrawContext &ctx) override;
};
class VulkanEngine
{
public:
bool _isInitialized{false};
int _frameNumber{0};
std::shared_ptr<DeviceManager> _deviceManager;
std::unique_ptr<SwapchainManager> _swapchainManager;
std::shared_ptr<ResourceManager> _resourceManager;
std::unique_ptr<RenderPassManager> _renderPassManager;
std::unique_ptr<SceneManager> _sceneManager;
std::unique_ptr<PipelineManager> _pipelineManager;
std::unique_ptr<AssetManager> _assetManager;
std::unique_ptr<RenderGraph> _renderGraph;
struct SDL_Window *_window{nullptr};
FrameResources _frames[FRAME_OVERLAP];
FrameResources &get_current_frame() { return _frames[_frameNumber % FRAME_OVERLAP]; };
VkExtent2D _drawExtent;
float renderScale = 1.f;
std::unique_ptr<DescriptorManager> _descriptorManager;
std::unique_ptr<SamplerManager> _samplerManager;
ComputeManager compute;
std::shared_ptr<EngineContext> _context;
std::vector<VkFramebuffer> _framebuffers;
DeletionQueue _mainDeletionQueue;
VkPipelineLayout _meshPipelineLayout;
VkPipeline _meshPipeline;
GPUMeshBuffers rectangle;
std::shared_ptr<MeshAsset> cubeMesh;
std::shared_ptr<MeshAsset> sphereMesh;
AllocatedImage _whiteImage;
AllocatedImage _blackImage;
AllocatedImage _greyImage;
AllocatedImage _errorCheckerboardImage;
MaterialInstance defaultData;
GLTFMetallic_Roughness metalRoughMaterial;
EngineStats stats;
std::vector<RenderPass> renderPasses;
// Debug: persistent pass enable overrides (by pass name)
std::unordered_map<std::string, bool> _rgPassToggles;
//initializes everything in the engine
void init();
//shuts down the engine
void cleanup();
//draw loop
void draw();
//run main loop
void run();
bool resize_requested{false};
bool freeze_rendering{false};
private:
void init_frame_resources();
void init_pipelines();
void init_mesh_pipeline();
void init_default_data();
};

212
src/core/vk_images.cpp Normal file

@@ -0,0 +1,212 @@
#include <core/vk_images.h>
#include <core/vk_initializers.h>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
//> transition
#include <core/vk_initializers.h>
void vkutil::transition_image(VkCommandBuffer cmd, VkImage image, VkImageLayout currentLayout, VkImageLayout newLayout)
{
VkImageMemoryBarrier2 imageBarrier{.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2};
imageBarrier.pNext = nullptr;
// Choose aspect from the destination layout (depth vs color)
const VkImageAspectFlags aspectMask =
(newLayout == VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL) ? VK_IMAGE_ASPECT_DEPTH_BIT : VK_IMAGE_ASPECT_COLOR_BIT;
// Reasoned pipeline stages + accesses per transition. This avoids over-broad
// ALL_COMMANDS barriers that can be ignored by stricter drivers (NVIDIA).
VkPipelineStageFlags2 srcStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
VkAccessFlags2 srcAccess = 0;
VkPipelineStageFlags2 dstStage = VK_PIPELINE_STAGE_2_BOTTOM_OF_PIPE_BIT;
VkAccessFlags2 dstAccess = 0;
switch (currentLayout)
{
case VK_IMAGE_LAYOUT_UNDEFINED:
srcStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
srcAccess = 0;
break;
case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL:
srcStage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
srcAccess = VK_ACCESS_2_TRANSFER_WRITE_BIT;
break;
case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL:
srcStage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
srcAccess = VK_ACCESS_2_TRANSFER_READ_BIT;
break;
case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
srcStage = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
srcAccess = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT;
break;
case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
srcStage = VK_PIPELINE_STAGE_2_COLOR_ATTACHMENT_OUTPUT_BIT;
srcAccess = VK_ACCESS_2_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_2_COLOR_ATTACHMENT_READ_BIT;
break;
case VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL:
srcStage = VK_PIPELINE_STAGE_2_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_2_LATE_FRAGMENT_TESTS_BIT;
srcAccess = VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_READ_BIT;
break;
default:
// Fallback to a safe superset
srcStage = VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT;
srcAccess = VK_ACCESS_2_MEMORY_WRITE_BIT | VK_ACCESS_2_MEMORY_READ_BIT;
break;
}
switch (newLayout)
{
case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL:
dstStage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
dstAccess = VK_ACCESS_2_TRANSFER_WRITE_BIT;
break;
case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL:
dstStage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
dstAccess = VK_ACCESS_2_TRANSFER_READ_BIT;
break;
case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
// If you sample in other stages, extend this mask accordingly.
dstStage = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
dstAccess = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT;
break;
case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
dstStage = VK_PIPELINE_STAGE_2_COLOR_ATTACHMENT_OUTPUT_BIT;
dstAccess = VK_ACCESS_2_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_2_COLOR_ATTACHMENT_READ_BIT;
break;
case VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL:
dstStage = VK_PIPELINE_STAGE_2_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_2_LATE_FRAGMENT_TESTS_BIT;
dstAccess = VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_READ_BIT;
break;
default:
dstStage = VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT;
dstAccess = VK_ACCESS_2_MEMORY_WRITE_BIT | VK_ACCESS_2_MEMORY_READ_BIT;
break;
}
imageBarrier.srcStageMask = srcStage;
imageBarrier.srcAccessMask = srcAccess;
imageBarrier.dstStageMask = dstStage;
imageBarrier.dstAccessMask = dstAccess;
imageBarrier.oldLayout = currentLayout;
imageBarrier.newLayout = newLayout;
imageBarrier.subresourceRange = vkinit::image_subresource_range(aspectMask);
imageBarrier.image = image;
VkDependencyInfo depInfo{.sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO};
depInfo.pImageMemoryBarriers = &imageBarrier;
depInfo.imageMemoryBarrierCount = 1;
vkCmdPipelineBarrier2(cmd, &depInfo);
}
//< transition
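// Example (sketch, not part of the original file): typical transitions around a compute
// pass that writes a storage image. `drawImage` is a placeholder. Note that
// VK_IMAGE_LAYOUT_GENERAL is handled by the conservative default branch above.
//
//   vkutil::transition_image(cmd, drawImage.image,
//                            VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_GENERAL);
//   // ... vkCmdDispatch that writes drawImage ...
//   vkutil::transition_image(cmd, drawImage.image,
//                            VK_IMAGE_LAYOUT_GENERAL, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL);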
//> copyimg
void vkutil::copy_image_to_image(VkCommandBuffer cmd, VkImage source, VkImage destination, VkExtent2D srcSize, VkExtent2D dstSize)
{
VkImageBlit2 blitRegion{ .sType = VK_STRUCTURE_TYPE_IMAGE_BLIT_2, .pNext = nullptr };
blitRegion.srcOffsets[1].x = srcSize.width;
blitRegion.srcOffsets[1].y = srcSize.height;
blitRegion.srcOffsets[1].z = 1;
blitRegion.dstOffsets[1].x = dstSize.width;
blitRegion.dstOffsets[1].y = dstSize.height;
blitRegion.dstOffsets[1].z = 1;
blitRegion.srcSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
blitRegion.srcSubresource.baseArrayLayer = 0;
blitRegion.srcSubresource.layerCount = 1;
blitRegion.srcSubresource.mipLevel = 0;
blitRegion.dstSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
blitRegion.dstSubresource.baseArrayLayer = 0;
blitRegion.dstSubresource.layerCount = 1;
blitRegion.dstSubresource.mipLevel = 0;
VkBlitImageInfo2 blitInfo{ .sType = VK_STRUCTURE_TYPE_BLIT_IMAGE_INFO_2, .pNext = nullptr };
blitInfo.dstImage = destination;
blitInfo.dstImageLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
blitInfo.srcImage = source;
blitInfo.srcImageLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
blitInfo.filter = VK_FILTER_LINEAR;
blitInfo.regionCount = 1;
blitInfo.pRegions = &blitRegion;
vkCmdBlitImage2(cmd, &blitInfo);
}
//< copyimg
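// Example (sketch, not part of the original file): presenting by blitting the draw
// target into the swapchain image. Both images must already be in the layouts the blit
// expects; `drawImage`, `swapchainImage`, `drawExtent` and `swapchainExtent` are placeholders.
//
//   vkutil::transition_image(cmd, drawImage.image,
//                            VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
//                            VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL);
//   vkutil::transition_image(cmd, swapchainImage,
//                            VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL);
//   vkutil::copy_image_to_image(cmd, drawImage.image, swapchainImage,
//                               drawExtent, swapchainExtent);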
//> mipgen
void vkutil::generate_mipmaps(VkCommandBuffer cmd, VkImage image, VkExtent2D imageSize)
{
int mipLevels = int(std::floor(std::log2(std::max(imageSize.width, imageSize.height)))) + 1;
for (int mip = 0; mip < mipLevels; mip++) {
VkExtent2D halfSize = imageSize;
halfSize.width /= 2;
halfSize.height /= 2;
VkImageMemoryBarrier2 imageBarrier{ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2, .pNext = nullptr };
// Prepare source level for blit: DST -> SRC
imageBarrier.srcStageMask = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
imageBarrier.srcAccessMask = VK_ACCESS_2_TRANSFER_WRITE_BIT;
imageBarrier.dstStageMask = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
imageBarrier.dstAccessMask = VK_ACCESS_2_TRANSFER_READ_BIT;
imageBarrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
imageBarrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
VkImageAspectFlags aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
imageBarrier.subresourceRange = vkinit::image_subresource_range(aspectMask);
imageBarrier.subresourceRange.levelCount = 1;
imageBarrier.subresourceRange.baseMipLevel = mip;
imageBarrier.image = image;
VkDependencyInfo depInfo{ .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO, .pNext = nullptr };
depInfo.imageMemoryBarrierCount = 1;
depInfo.pImageMemoryBarriers = &imageBarrier;
vkCmdPipelineBarrier2(cmd, &depInfo);
if (mip < mipLevels - 1) {
VkImageBlit2 blitRegion { .sType = VK_STRUCTURE_TYPE_IMAGE_BLIT_2, .pNext = nullptr };
blitRegion.srcOffsets[1].x = imageSize.width;
blitRegion.srcOffsets[1].y = imageSize.height;
blitRegion.srcOffsets[1].z = 1;
blitRegion.dstOffsets[1].x = halfSize.width;
blitRegion.dstOffsets[1].y = halfSize.height;
blitRegion.dstOffsets[1].z = 1;
blitRegion.srcSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
blitRegion.srcSubresource.baseArrayLayer = 0;
blitRegion.srcSubresource.layerCount = 1;
blitRegion.srcSubresource.mipLevel = mip;
blitRegion.dstSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
blitRegion.dstSubresource.baseArrayLayer = 0;
blitRegion.dstSubresource.layerCount = 1;
blitRegion.dstSubresource.mipLevel = mip + 1;
VkBlitImageInfo2 blitInfo {.sType = VK_STRUCTURE_TYPE_BLIT_IMAGE_INFO_2, .pNext = nullptr};
blitInfo.dstImage = image;
blitInfo.dstImageLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
blitInfo.srcImage = image;
blitInfo.srcImageLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
blitInfo.filter = VK_FILTER_LINEAR;
blitInfo.regionCount = 1;
blitInfo.pRegions = &blitRegion;
vkCmdBlitImage2(cmd, &blitInfo);
imageSize = halfSize;
}
}
// transition all mip levels into the final read_only layout
transition_image(cmd, image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
}
//< mipgen
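// Example (sketch, not part of the original file): generating mips for a freshly
// uploaded texture. All levels are expected to be in TRANSFER_DST_OPTIMAL with level 0
// filled; afterwards every level ends up in SHADER_READ_ONLY_OPTIMAL. `newImage` is a placeholder.
//
//   vkutil::generate_mipmaps(cmd, newImage.image,
//                            VkExtent2D{ newImage.imageExtent.width,
//                                        newImage.imageExtent.height });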

10
src/core/vk_images.h Normal file

@@ -0,0 +1,10 @@
#pragma once
#include <core/vk_types.h>
namespace vkutil {
void transition_image(VkCommandBuffer cmd, VkImage image, VkImageLayout currentLayout, VkImageLayout newLayout);
void copy_image_to_image(VkCommandBuffer cmd, VkImage source, VkImage destination, VkExtent2D srcSize, VkExtent2D dstSize);
void generate_mipmaps(VkCommandBuffer cmd, VkImage image, VkExtent2D imageSize);
};

365
src/core/vk_initializers.cpp Normal file

@@ -0,0 +1,365 @@
#include <core/vk_initializers.h>
//> init_cmd
VkCommandPoolCreateInfo vkinit::command_pool_create_info(uint32_t queueFamilyIndex,
VkCommandPoolCreateFlags flags /*= 0*/)
{
VkCommandPoolCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
info.pNext = nullptr;
info.queueFamilyIndex = queueFamilyIndex;
info.flags = flags;
return info;
}
VkCommandBufferAllocateInfo vkinit::command_buffer_allocate_info(
VkCommandPool pool, uint32_t count /*= 1*/)
{
VkCommandBufferAllocateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
info.pNext = nullptr;
info.commandPool = pool;
info.commandBufferCount = count;
info.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
return info;
}
//< init_cmd
//
//> init_cmd_draw
VkCommandBufferBeginInfo vkinit::command_buffer_begin_info(VkCommandBufferUsageFlags flags /*= 0*/)
{
VkCommandBufferBeginInfo info = {};
info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
info.pNext = nullptr;
info.pInheritanceInfo = nullptr;
info.flags = flags;
return info;
}
//< init_cmd_draw
//> init_sync
VkFenceCreateInfo vkinit::fence_create_info(VkFenceCreateFlags flags /*= 0*/)
{
VkFenceCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
info.pNext = nullptr;
info.flags = flags;
return info;
}
VkSemaphoreCreateInfo vkinit::semaphore_create_info(VkSemaphoreCreateFlags flags /*= 0*/)
{
VkSemaphoreCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
info.pNext = nullptr;
info.flags = flags;
return info;
}
//< init_sync
//> init_submit
VkSemaphoreSubmitInfo vkinit::semaphore_submit_info(VkPipelineStageFlags2 stageMask, VkSemaphore semaphore)
{
VkSemaphoreSubmitInfo submitInfo{};
submitInfo.sType = VK_STRUCTURE_TYPE_SEMAPHORE_SUBMIT_INFO;
submitInfo.pNext = nullptr;
submitInfo.semaphore = semaphore;
submitInfo.stageMask = stageMask;
submitInfo.deviceIndex = 0;
submitInfo.value = 1;
return submitInfo;
}
VkCommandBufferSubmitInfo vkinit::command_buffer_submit_info(VkCommandBuffer cmd)
{
VkCommandBufferSubmitInfo info{};
info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_SUBMIT_INFO;
info.pNext = nullptr;
info.commandBuffer = cmd;
info.deviceMask = 0;
return info;
}
VkSubmitInfo2 vkinit::submit_info(VkCommandBufferSubmitInfo *cmd, VkSemaphoreSubmitInfo *signalSemaphoreInfo,
VkSemaphoreSubmitInfo *waitSemaphoreInfo)
{
VkSubmitInfo2 info = {};
info.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO_2;
info.pNext = nullptr;
info.waitSemaphoreInfoCount = waitSemaphoreInfo == nullptr ? 0 : 1;
info.pWaitSemaphoreInfos = waitSemaphoreInfo;
info.signalSemaphoreInfoCount = signalSemaphoreInfo == nullptr ? 0 : 1;
info.pSignalSemaphoreInfos = signalSemaphoreInfo;
info.commandBufferInfoCount = 1;
info.pCommandBufferInfos = cmd;
return info;
}
//< init_submit
VkPresentInfoKHR vkinit::present_info()
{
VkPresentInfoKHR info = {};
info.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
info.pNext = nullptr;
info.swapchainCount = 0;
info.pSwapchains = nullptr;
info.pWaitSemaphores = nullptr;
info.waitSemaphoreCount = 0;
info.pImageIndices = nullptr;
return info;
}
//> color_info
VkRenderingAttachmentInfo vkinit::attachment_info(
VkImageView view, VkClearValue *clear, VkImageLayout layout /*= VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL*/)
{
VkRenderingAttachmentInfo colorAttachment{};
colorAttachment.sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
colorAttachment.pNext = nullptr;
colorAttachment.imageView = view;
colorAttachment.imageLayout = layout;
colorAttachment.loadOp = clear ? VK_ATTACHMENT_LOAD_OP_CLEAR : VK_ATTACHMENT_LOAD_OP_LOAD;
colorAttachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
if (clear)
{
colorAttachment.clearValue = *clear;
}
return colorAttachment;
}
//< color_info
//> depth_info
VkRenderingAttachmentInfo vkinit::depth_attachment_info(
VkImageView view, VkImageLayout layout /*= VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL*/)
{
VkRenderingAttachmentInfo depthAttachment{};
depthAttachment.sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
depthAttachment.pNext = nullptr;
depthAttachment.imageView = view;
depthAttachment.imageLayout = layout;
depthAttachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
depthAttachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
// Reverse-Z path clears to 0.0
depthAttachment.clearValue.depthStencil.depth = 0.f;
return depthAttachment;
}
//< depth_info
//> render_info
VkRenderingInfo vkinit::rendering_info(VkExtent2D renderExtent, VkRenderingAttachmentInfo *colorAttachment,
VkRenderingAttachmentInfo *depthAttachment)
{
VkRenderingInfo renderInfo{};
renderInfo.sType = VK_STRUCTURE_TYPE_RENDERING_INFO;
renderInfo.pNext = nullptr;
renderInfo.renderArea = VkRect2D{VkOffset2D{0, 0}, renderExtent};
renderInfo.layerCount = 1;
renderInfo.colorAttachmentCount = 1;
renderInfo.pColorAttachments = colorAttachment;
renderInfo.pDepthAttachment = depthAttachment;
renderInfo.pStencilAttachment = nullptr;
return renderInfo;
}
VkRenderingInfo vkinit::rendering_info_multi(VkExtent2D renderExtent, uint32_t colorCount,
VkRenderingAttachmentInfo *colorAttachments,
VkRenderingAttachmentInfo *depthAttachment)
{
VkRenderingInfo renderInfo{};
renderInfo.sType = VK_STRUCTURE_TYPE_RENDERING_INFO;
renderInfo.pNext = nullptr;
renderInfo.renderArea = VkRect2D{VkOffset2D{0, 0}, renderExtent};
renderInfo.layerCount = 1;
renderInfo.colorAttachmentCount = colorCount;
renderInfo.pColorAttachments = colorAttachments;
renderInfo.pDepthAttachment = depthAttachment;
renderInfo.pStencilAttachment = nullptr;
return renderInfo;
}
//< render_info
//> subresource
VkImageSubresourceRange vkinit::image_subresource_range(VkImageAspectFlags aspectMask)
{
VkImageSubresourceRange subImage{};
subImage.aspectMask = aspectMask;
subImage.baseMipLevel = 0;
subImage.levelCount = VK_REMAINING_MIP_LEVELS;
subImage.baseArrayLayer = 0;
subImage.layerCount = VK_REMAINING_ARRAY_LAYERS;
return subImage;
}
//< subresource
VkDescriptorSetLayoutBinding vkinit::descriptorset_layout_binding(VkDescriptorType type, VkShaderStageFlags stageFlags,
uint32_t binding)
{
VkDescriptorSetLayoutBinding setbind = {};
setbind.binding = binding;
setbind.descriptorCount = 1;
setbind.descriptorType = type;
setbind.pImmutableSamplers = nullptr;
setbind.stageFlags = stageFlags;
return setbind;
}
VkDescriptorSetLayoutCreateInfo vkinit::descriptorset_layout_create_info(VkDescriptorSetLayoutBinding *bindings,
uint32_t bindingCount)
{
VkDescriptorSetLayoutCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
info.pNext = nullptr;
info.pBindings = bindings;
info.bindingCount = bindingCount;
info.flags = 0;
return info;
}
VkWriteDescriptorSet vkinit::write_descriptor_image(VkDescriptorType type, VkDescriptorSet dstSet,
VkDescriptorImageInfo *imageInfo, uint32_t binding)
{
VkWriteDescriptorSet write = {};
write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.pNext = nullptr;
write.dstBinding = binding;
write.dstSet = dstSet;
write.descriptorCount = 1;
write.descriptorType = type;
write.pImageInfo = imageInfo;
return write;
}
VkWriteDescriptorSet vkinit::write_descriptor_buffer(VkDescriptorType type, VkDescriptorSet dstSet,
VkDescriptorBufferInfo *bufferInfo, uint32_t binding)
{
VkWriteDescriptorSet write = {};
write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.pNext = nullptr;
write.dstBinding = binding;
write.dstSet = dstSet;
write.descriptorCount = 1;
write.descriptorType = type;
write.pBufferInfo = bufferInfo;
return write;
}
VkDescriptorBufferInfo vkinit::buffer_info(VkBuffer buffer, VkDeviceSize offset, VkDeviceSize range)
{
VkDescriptorBufferInfo binfo{};
binfo.buffer = buffer;
binfo.offset = offset;
binfo.range = range;
return binfo;
}
//> image_set
VkImageCreateInfo vkinit::image_create_info(VkFormat format, VkImageUsageFlags usageFlags, VkExtent3D extent)
{
VkImageCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
info.pNext = nullptr;
info.imageType = VK_IMAGE_TYPE_2D;
info.format = format;
info.extent = extent;
info.mipLevels = 1;
info.arrayLayers = 1;
//for MSAA. we will not be using it by default, so default it to 1 sample per pixel.
info.samples = VK_SAMPLE_COUNT_1_BIT;
//optimal tiling, which means the image is stored in whatever layout the gpu accesses fastest
info.tiling = VK_IMAGE_TILING_OPTIMAL;
info.usage = usageFlags;
return info;
}
VkImageViewCreateInfo vkinit::imageview_create_info(VkFormat format, VkImage image, VkImageAspectFlags aspectFlags)
{
// build an image-view for the given image (single mip level, single array layer)
VkImageViewCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
info.pNext = nullptr;
info.viewType = VK_IMAGE_VIEW_TYPE_2D;
info.image = image;
info.format = format;
info.subresourceRange.baseMipLevel = 0;
info.subresourceRange.levelCount = 1;
info.subresourceRange.baseArrayLayer = 0;
info.subresourceRange.layerCount = 1;
info.subresourceRange.aspectMask = aspectFlags;
return info;
}
//< image_set
VkPipelineLayoutCreateInfo vkinit::pipeline_layout_create_info()
{
VkPipelineLayoutCreateInfo info{};
info.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
info.pNext = nullptr;
// empty defaults
info.flags = 0;
info.setLayoutCount = 0;
info.pSetLayouts = nullptr;
info.pushConstantRangeCount = 0;
info.pPushConstantRanges = nullptr;
return info;
}
VkPipelineShaderStageCreateInfo vkinit::pipeline_shader_stage_create_info(VkShaderStageFlagBits stage,
VkShaderModule shaderModule,
const char *entry)
{
VkPipelineShaderStageCreateInfo info{};
info.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
info.pNext = nullptr;
// shader stage
info.stage = stage;
// module containing the code for this shader stage
info.module = shaderModule;
// the entry point of the shader
info.pName = entry;
return info;
}
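// Example (sketch, not part of the original file): beginning dynamic rendering with the
// attachment helpers above. `drawImage`, `depthImage` and `drawExtent` are placeholders.
//
//   VkRenderingAttachmentInfo color = vkinit::attachment_info(
//       drawImage.imageView, nullptr, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
//   VkRenderingAttachmentInfo depth = vkinit::depth_attachment_info(
//       depthImage.imageView, VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL);
//   VkRenderingInfo ri = vkinit::rendering_info(drawExtent, &color, &depth);
//   vkCmdBeginRendering(cmd, &ri);
//   // ... vkCmdDraw* calls ...
//   vkCmdEndRendering(cmd);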

71
src/core/vk_initializers.h Normal file

@@ -0,0 +1,71 @@
// vk_initializers.h : Include file for standard system include files,
// or project specific include files.
#pragma once
#include <core/vk_types.h>
namespace vkinit
{
//> init_cmd
VkCommandPoolCreateInfo command_pool_create_info(uint32_t queueFamilyIndex, VkCommandPoolCreateFlags flags = 0);
VkCommandBufferAllocateInfo command_buffer_allocate_info(VkCommandPool pool, uint32_t count = 1);
//< init_cmd
VkCommandBufferBeginInfo command_buffer_begin_info(VkCommandBufferUsageFlags flags = 0);
VkCommandBufferSubmitInfo command_buffer_submit_info(VkCommandBuffer cmd);
VkFenceCreateInfo fence_create_info(VkFenceCreateFlags flags = 0);
VkSemaphoreCreateInfo semaphore_create_info(VkSemaphoreCreateFlags flags = 0);
VkSubmitInfo2 submit_info(VkCommandBufferSubmitInfo *cmd, VkSemaphoreSubmitInfo *signalSemaphoreInfo,
VkSemaphoreSubmitInfo *waitSemaphoreInfo);
VkPresentInfoKHR present_info();
VkRenderingAttachmentInfo attachment_info(VkImageView view, VkClearValue *clear,
VkImageLayout layout /*= VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL*/);
VkRenderingAttachmentInfo depth_attachment_info(VkImageView view,
VkImageLayout layout
/*= VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL*/);
VkRenderingInfo rendering_info(VkExtent2D renderExtent, VkRenderingAttachmentInfo *colorAttachment,
VkRenderingAttachmentInfo *depthAttachment);
VkRenderingInfo rendering_info_multi(VkExtent2D renderExtent, uint32_t colorCount,
VkRenderingAttachmentInfo *colorAttachments,
VkRenderingAttachmentInfo *depthAttachment);
VkImageSubresourceRange image_subresource_range(VkImageAspectFlags aspectMask);
VkSemaphoreSubmitInfo semaphore_submit_info(VkPipelineStageFlags2 stageMask, VkSemaphore semaphore);
VkDescriptorSetLayoutBinding descriptorset_layout_binding(VkDescriptorType type, VkShaderStageFlags stageFlags,
uint32_t binding);
VkDescriptorSetLayoutCreateInfo descriptorset_layout_create_info(VkDescriptorSetLayoutBinding *bindings,
uint32_t bindingCount);
VkWriteDescriptorSet write_descriptor_image(VkDescriptorType type, VkDescriptorSet dstSet,
VkDescriptorImageInfo *imageInfo, uint32_t binding);
VkWriteDescriptorSet write_descriptor_buffer(VkDescriptorType type, VkDescriptorSet dstSet,
VkDescriptorBufferInfo *bufferInfo, uint32_t binding);
VkDescriptorBufferInfo buffer_info(VkBuffer buffer, VkDeviceSize offset, VkDeviceSize range);
VkImageCreateInfo image_create_info(VkFormat format, VkImageUsageFlags usageFlags, VkExtent3D extent);
VkImageViewCreateInfo imageview_create_info(VkFormat format, VkImage image, VkImageAspectFlags aspectFlags);
VkPipelineLayoutCreateInfo pipeline_layout_create_info();
VkPipelineShaderStageCreateInfo pipeline_shader_stage_create_info(VkShaderStageFlagBits stage,
VkShaderModule shaderModule,
const char *entry = "main");
} // namespace vkinit

308
src/core/vk_pipeline_manager.cpp Normal file

@@ -0,0 +1,308 @@
#include <core/vk_pipeline_manager.h>
#include <core/engine_context.h>
#include <core/vk_initializers.h>
#include <render/vk_pipelines.h>
#include <vk_device.h>
#include <filesystem>
PipelineManager::~PipelineManager()
{
cleanup();
}
void PipelineManager::init(EngineContext *ctx)
{
_context = ctx;
}
void PipelineManager::cleanup()
{
for (auto &kv: _graphicsPipelines)
{
destroyGraphics(kv.second);
}
_graphicsPipelines.clear();
_context = nullptr;
}
bool PipelineManager::registerGraphics(const std::string &name, const GraphicsPipelineCreateInfo &info)
{
if (! _context || !_context->getDevice()) return false;
auto it = _graphicsPipelines.find(name);
if (it != _graphicsPipelines.end())
{
fmt::println("Graphics pipeline '{}' already exists", name);
return false;
}
GraphicsPipelineRecord rec{};
rec.spec = info;
if (!buildGraphics(rec))
{
destroyGraphics(rec);
return false;
}
_graphicsPipelines.emplace(name, std::move(rec));
return true;
}
void PipelineManager::unregisterGraphics(const std::string &name)
{
auto it = _graphicsPipelines.find(name);
if (it == _graphicsPipelines.end()) return;
destroyGraphics(it->second);
_graphicsPipelines.erase(it);
}
bool PipelineManager::getGraphics(const std::string &name, VkPipeline &pipeline, VkPipelineLayout &layout) const
{
auto it = _graphicsPipelines.find(name);
if (it == _graphicsPipelines.end()) return false;
pipeline = it->second.pipeline;
layout = it->second.layout;
return pipeline != VK_NULL_HANDLE && layout != VK_NULL_HANDLE;
}
bool PipelineManager::getMaterialPipeline(const std::string &name, MaterialPipeline &out) const
{
VkPipeline p{}; VkPipelineLayout l{};
if (!getGraphics(name, p, l)) return false;
out.pipeline = p;
out.layout = l;
return true;
}
void PipelineManager::hotReloadChanged()
{
if (!_context || !_context->getDevice()) return;
for (auto &kv: _graphicsPipelines)
{
auto &rec = kv.second;
try
{
bool needReload = false;
if (!rec.spec.vertexShaderPath.empty())
{
auto t = std::filesystem::last_write_time(rec.spec.vertexShaderPath);
if (rec.vertTime != std::filesystem::file_time_type{} && t != rec.vertTime) needReload = true;
}
if (!rec.spec.fragmentShaderPath.empty())
{
auto t = std::filesystem::last_write_time(rec.spec.fragmentShaderPath);
if (rec.fragTime != std::filesystem::file_time_type{} && t != rec.fragTime) needReload = true;
}
if (needReload)
{
GraphicsPipelineRecord fresh = rec;
fresh.pipeline = VK_NULL_HANDLE;
fresh.layout = VK_NULL_HANDLE;
if (buildGraphics(fresh))
{
destroyGraphics(rec);
rec = std::move(fresh);
fmt::println("Reloaded graphics pipeline '{}'", kv.first);
}
}
}
catch (const std::exception &)
{
// ignore hot-reload errors to avoid spamming
}
}
}
void PipelineManager::debug_get_graphics(std::vector<GraphicsPipelineDebugInfo> &out) const
{
out.clear();
out.reserve(_graphicsPipelines.size());
for (const auto &kv : _graphicsPipelines)
{
const auto &rec = kv.second;
GraphicsPipelineDebugInfo info{};
info.name = kv.first;
info.vertexShaderPath = rec.spec.vertexShaderPath;
info.fragmentShaderPath = rec.spec.fragmentShaderPath;
info.valid = (rec.pipeline != VK_NULL_HANDLE) && (rec.layout != VK_NULL_HANDLE);
out.push_back(std::move(info));
}
}
bool PipelineManager::buildGraphics(GraphicsPipelineRecord &rec) const
{
VkShaderModule vert = VK_NULL_HANDLE;
VkShaderModule frag = VK_NULL_HANDLE;
if (!rec.spec.vertexShaderPath.empty())
{
if (!vkutil::load_shader_module(rec.spec.vertexShaderPath.c_str(), _context->getDevice()->device(), &vert))
{
fmt::println("Failed to load vertex shader: {}", rec.spec.vertexShaderPath);
return false;
}
}
if (!rec.spec.fragmentShaderPath.empty())
{
if (!vkutil::load_shader_module(rec.spec.fragmentShaderPath.c_str(), _context->getDevice()->device(), &frag))
{
if (vert != VK_NULL_HANDLE) vkDestroyShaderModule(_context->getDevice()->device(), vert, nullptr);
fmt::println("Failed to load fragment shader: {}", rec.spec.fragmentShaderPath);
return false;
}
}
VkPipelineLayoutCreateInfo layoutInfo = vkinit::pipeline_layout_create_info();
layoutInfo.setLayoutCount = static_cast<uint32_t>(rec.spec.setLayouts.size());
layoutInfo.pSetLayouts = rec.spec.setLayouts.empty() ? nullptr : rec.spec.setLayouts.data();
layoutInfo.pushConstantRangeCount = static_cast<uint32_t>(rec.spec.pushConstants.size());
layoutInfo.pPushConstantRanges = rec.spec.pushConstants.empty() ? nullptr : rec.spec.pushConstants.data();
VK_CHECK(vkCreatePipelineLayout(_context->getDevice()->device(), &layoutInfo, nullptr, &rec.layout));
PipelineBuilder builder;
if (vert != VK_NULL_HANDLE || frag != VK_NULL_HANDLE)
{
builder.set_shaders(vert, frag);
}
if (rec.spec.configure) rec.spec.configure(builder);
builder._pipelineLayout = rec.layout;
rec.pipeline = builder.build_pipeline(_context->getDevice()->device());
if (vert != VK_NULL_HANDLE)
vkDestroyShaderModule(_context->getDevice()->device(), vert, nullptr);
if (frag != VK_NULL_HANDLE)
vkDestroyShaderModule(_context->getDevice()->device(), frag, nullptr);
if (rec.pipeline == VK_NULL_HANDLE)
{
vkDestroyPipelineLayout(_context->getDevice()->device(), rec.layout, nullptr);
rec.layout = VK_NULL_HANDLE;
return false;
}
// Record timestamps for hot reload
try
{
if (!rec.spec.vertexShaderPath.empty())
rec.vertTime = std::filesystem::last_write_time(rec.spec.vertexShaderPath);
if (!rec.spec.fragmentShaderPath.empty())
rec.fragTime = std::filesystem::last_write_time(rec.spec.fragmentShaderPath);
}
catch (const std::exception &)
{
// ignore timestamp errors
}
return true;
}
void PipelineManager::destroyGraphics(GraphicsPipelineRecord &rec)
{
if (!_context || !_context->getDevice()) return;
if (rec.pipeline != VK_NULL_HANDLE)
{
vkDestroyPipeline(_context->getDevice()->device(), rec.pipeline, nullptr);
rec.pipeline = VK_NULL_HANDLE;
}
if (rec.layout != VK_NULL_HANDLE)
{
vkDestroyPipelineLayout(_context->getDevice()->device(), rec.layout, nullptr);
rec.layout = VK_NULL_HANDLE;
}
}
// --- Compute forwarding API ---
bool PipelineManager::createComputePipeline(const std::string &name, const ComputePipelineCreateInfo &info)
{
if (!_context || !_context->compute) return false;
return _context->compute->registerPipeline(name, info);
}
void PipelineManager::destroyComputePipeline(const std::string &name)
{
if (!_context || !_context->compute) return;
_context->compute->unregisterPipeline(name);
}
bool PipelineManager::hasComputePipeline(const std::string &name) const
{
if (!_context || !_context->compute) return false;
return _context->compute->hasPipeline(name);
}
void PipelineManager::dispatchCompute(VkCommandBuffer cmd, const std::string &name, const ComputeDispatchInfo &info)
{
if (!_context || !_context->compute) return;
_context->compute->dispatch(cmd, name, info);
}
void PipelineManager::dispatchComputeImmediate(const std::string &name, const ComputeDispatchInfo &info)
{
if (!_context || !_context->compute) return;
_context->compute->dispatchImmediate(name, info);
}
bool PipelineManager::createComputeInstance(const std::string &instanceName, const std::string &pipelineName)
{
if (!_context || !_context->compute) return false;
return _context->compute->createInstance(instanceName, pipelineName);
}
void PipelineManager::destroyComputeInstance(const std::string &instanceName)
{
if (!_context || !_context->compute) return;
_context->compute->destroyInstance(instanceName);
}
bool PipelineManager::setComputeInstanceStorageImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkImageLayout layout)
{
if (!_context || !_context->compute) return false;
return _context->compute->setInstanceStorageImage(instanceName, binding, view, layout);
}
bool PipelineManager::setComputeInstanceSampledImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkSampler sampler, VkImageLayout layout)
{
if (!_context || !_context->compute) return false;
return _context->compute->setInstanceSampledImage(instanceName, binding, view, sampler, layout);
}
bool PipelineManager::setComputeInstanceBuffer(const std::string &instanceName, uint32_t binding, VkBuffer buffer,
VkDeviceSize size, VkDescriptorType type, VkDeviceSize offset)
{
if (!_context || !_context->compute) return false;
return _context->compute->setInstanceBuffer(instanceName, binding, buffer, size, type, offset);
}
AllocatedImage PipelineManager::createAndBindComputeStorageImage(const std::string &instanceName, uint32_t binding,
VkExtent3D extent, VkFormat format,
VkImageLayout layout, VkImageUsageFlags usage)
{
if (!_context || !_context->compute) return {};
return _context->compute->createAndBindStorageImage(instanceName, binding, extent, format, layout, usage);
}
AllocatedBuffer PipelineManager::createAndBindComputeStorageBuffer(const std::string &instanceName, uint32_t binding,
VkDeviceSize size, VkBufferUsageFlags usage,
VmaMemoryUsage memUsage)
{
if (!_context || !_context->compute) return {};
return _context->compute->createAndBindStorageBuffer(instanceName, binding, size, usage, memUsage);
}
void PipelineManager::dispatchComputeInstance(VkCommandBuffer cmd, const std::string &instanceName,
const ComputeDispatchInfo &info)
{
if (!_context || !_context->compute) return;
_context->compute->dispatchInstance(cmd, instanceName, info);
}


@@ -0,0 +1,128 @@
#pragma once
#include <core/vk_types.h>
#include <render/vk_pipelines.h>
#include <compute/vk_compute.h>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>
#include <filesystem>
class EngineContext;
struct GraphicsPipelineCreateInfo
{
std::string vertexShaderPath;
std::string fragmentShaderPath;
std::vector<VkDescriptorSetLayout> setLayouts;
std::vector<VkPushConstantRange> pushConstants;
// This callback MUST configure topology, rasterization, depth/blend state,
// and the color/depth attachment formats on the builder; the shaders and the
// pipeline layout are set by PipelineManager itself.
std::function<void(PipelineBuilder &)> configure;
};
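// Usage sketch for the struct above. A minimal example assuming a PipelineManager
// instance named `pipelines`; the shader paths and the descriptor layout handle
// are illustrative only:
//
//   GraphicsPipelineCreateInfo info{};
//   info.vertexShaderPath   = "shaders/mesh.vert.spv";
//   info.fragmentShaderPath = "shaders/mesh.frag.spv";
//   info.setLayouts         = { sceneDescriptorLayout };
//   info.configure = [](PipelineBuilder &b) {
//       // set topology, rasterization, blend/depth state and the
//       // color/depth attachment formats on `b` here
//   };
//   pipelines.createGraphicsPipeline("mesh.opaque", info);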
class PipelineManager
{
public:
PipelineManager() = default;
~PipelineManager();
void init(EngineContext *ctx);
void cleanup();
// Register and build a graphics pipeline under a unique name
bool registerGraphics(const std::string &name, const GraphicsPipelineCreateInfo &info);
// Convenience alias for registerGraphics, mirroring the create*/destroy* naming used by the compute wrappers below
bool createGraphicsPipeline(const std::string &name, const GraphicsPipelineCreateInfo &info)
{
return registerGraphics(name, info);
}
// Compute wrappers (forward to ComputeManager for a unified API)
bool createComputePipeline(const std::string &name, const ComputePipelineCreateInfo &info);
void destroyComputePipeline(const std::string &name);
bool hasComputePipeline(const std::string &name) const;
void dispatchCompute(VkCommandBuffer cmd, const std::string &name, const ComputeDispatchInfo &info);
void dispatchComputeImmediate(const std::string &name, const ComputeDispatchInfo &info);
// Persistent compute instances (forwarded to ComputeManager)
bool createComputeInstance(const std::string &instanceName, const std::string &pipelineName);
void destroyComputeInstance(const std::string &instanceName);
bool setComputeInstanceStorageImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkImageLayout layout = VK_IMAGE_LAYOUT_GENERAL);
bool setComputeInstanceSampledImage(const std::string &instanceName, uint32_t binding, VkImageView view,
VkSampler sampler,
VkImageLayout layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
bool setComputeInstanceBuffer(const std::string &instanceName, uint32_t binding, VkBuffer buffer, VkDeviceSize size,
VkDescriptorType type, VkDeviceSize offset = 0);
AllocatedImage createAndBindComputeStorageImage(const std::string &instanceName, uint32_t binding,
VkExtent3D extent,
VkFormat format,
VkImageLayout layout = VK_IMAGE_LAYOUT_GENERAL,
VkImageUsageFlags usage =
VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
AllocatedBuffer createAndBindComputeStorageBuffer(const std::string &instanceName, uint32_t binding,
VkDeviceSize size,
VkBufferUsageFlags usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT,
VmaMemoryUsage memUsage = VMA_MEMORY_USAGE_GPU_ONLY);
void dispatchComputeInstance(VkCommandBuffer cmd, const std::string &instanceName, const ComputeDispatchInfo &info);
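// Instance workflow sketch; the pipeline/instance names, the image view and the
// dispatch contents are illustrative assumptions:
//
//   pipelines.createComputeInstance("background.active", "background.gradient");
//   pipelines.setComputeInstanceStorageImage("background.active", 0, drawImageView);
//   ComputeDispatchInfo di{};   // fill group counts / push constants as needed
//   pipelines.dispatchComputeInstance(cmd, "background.active", di);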
// Remove and destroy a graphics pipeline
void unregisterGraphics(const std::string &name);
// Get pipeline handles for binding
bool getGraphics(const std::string &name, VkPipeline &pipeline, VkPipelineLayout &layout) const;
// Convenience to interop with MaterialInstance
bool getMaterialPipeline(const std::string &name, MaterialPipeline &out) const;
// Rebuild pipelines whose shaders changed on disk
void hotReloadChanged();
// Debug helpers (graphics only)
struct GraphicsPipelineDebugInfo
{
std::string name;
std::string vertexShaderPath;
std::string fragmentShaderPath;
bool valid = false;
};
void debug_get_graphics(std::vector<GraphicsPipelineDebugInfo>& out) const;
private:
struct GraphicsPipelineRecord
{
VkPipeline pipeline = VK_NULL_HANDLE;
VkPipelineLayout layout = VK_NULL_HANDLE;
GraphicsPipelineCreateInfo spec;
std::filesystem::file_time_type vertTime{};
std::filesystem::file_time_type fragTime{};
};
EngineContext *_context = nullptr;
std::unordered_map<std::string, GraphicsPipelineRecord> _graphicsPipelines;
bool buildGraphics(GraphicsPipelineRecord &rec) const;
void destroyGraphics(GraphicsPipelineRecord &rec);
};

src/core/vk_resource.cpp Normal file

@@ -0,0 +1,486 @@
#include "vk_resource.h"
#include "vk_device.h"
#include "vk_images.h"
#include "vk_initializers.h"
#include "vk_mem_alloc.h"
#include <render/rg_graph.h>
#include <render/rg_builder.h>
#include <render/rg_resources.h>
#include "frame_resources.h"
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
void ResourceManager::init(DeviceManager *deviceManager)
{
_deviceManager = deviceManager;
VkCommandPoolCreateInfo commandPoolInfo = vkinit::command_pool_create_info(
_deviceManager->graphicsQueueFamily(),
VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT
);
VK_CHECK(vkCreateCommandPool(_deviceManager->device(), &commandPoolInfo, nullptr, &_immCommandPool));
VkCommandBufferAllocateInfo cmdAllocInfo = vkinit::command_buffer_allocate_info(_immCommandPool, 1);
VK_CHECK(vkAllocateCommandBuffers(_deviceManager->device(), &cmdAllocInfo, &_immCommandBuffer));
VkFenceCreateInfo fenceCreateInfo = vkinit::fence_create_info(VK_FENCE_CREATE_SIGNALED_BIT);
VK_CHECK(vkCreateFence(_deviceManager->device(), &fenceCreateInfo, nullptr, &_immFence));
_deletionQueue.push_function([=]() {
vkDestroyCommandPool(_deviceManager->device(), _immCommandPool, nullptr);
vkDestroyFence(_deviceManager->device(), _immFence, nullptr);
});
}
AllocatedBuffer ResourceManager::create_buffer(size_t allocSize, VkBufferUsageFlags usage,
VmaMemoryUsage memoryUsage) const
{
VkBufferCreateInfo bufferInfo = {.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
bufferInfo.pNext = nullptr;
bufferInfo.size = allocSize;
bufferInfo.usage = usage;
VmaAllocationCreateInfo vmaallocInfo = {};
vmaallocInfo.usage = memoryUsage;
vmaallocInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
AllocatedBuffer newBuffer{};
VK_CHECK(vmaCreateBuffer(_deviceManager->allocator(), &bufferInfo, &vmaallocInfo,
&newBuffer.buffer, &newBuffer.allocation, &newBuffer.info));
return newBuffer;
}
void ResourceManager::immediate_submit(std::function<void(VkCommandBuffer)> &&function) const
{
VK_CHECK(vkResetFences(_deviceManager->device(), 1, &_immFence));
VK_CHECK(vkResetCommandBuffer(_immCommandBuffer, 0));
VkCommandBuffer cmd = _immCommandBuffer;
VkCommandBufferBeginInfo cmdBeginInfo = vkinit::command_buffer_begin_info(
VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT);
VK_CHECK(vkBeginCommandBuffer(cmd, &cmdBeginInfo));
function(cmd);
VK_CHECK(vkEndCommandBuffer(cmd));
VkCommandBufferSubmitInfo cmdinfo = vkinit::command_buffer_submit_info(cmd);
VkSubmitInfo2 submit = vkinit::submit_info(&cmdinfo, nullptr, nullptr);
VK_CHECK(vkQueueSubmit2(_deviceManager->graphicsQueue(), 1, &submit, _immFence));
VK_CHECK(vkWaitForFences(_deviceManager->device(), 1, &_immFence, true, 9999999999));
}
void ResourceManager::destroy_buffer(const AllocatedBuffer &buffer) const
{
vmaDestroyBuffer(_deviceManager->allocator(), buffer.buffer, buffer.allocation);
}
void ResourceManager::cleanup()
{
fmt::print("ResourceManager::cleanup()\n");
clear_pending_uploads();
_deletionQueue.flush();
}
AllocatedImage ResourceManager::create_image(VkExtent3D size, VkFormat format, VkImageUsageFlags usage,
bool mipmapped) const
{
AllocatedImage newImage{};
newImage.imageFormat = format;
newImage.imageExtent = size;
VkImageCreateInfo img_info = vkinit::image_create_info(format, usage, size);
if (mipmapped)
{
img_info.mipLevels = static_cast<uint32_t>(std::floor(std::log2(std::max(size.width, size.height)))) + 1;
}
// always allocate images on dedicated GPU memory
VmaAllocationCreateInfo allocinfo = {};
allocinfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
allocinfo.requiredFlags = static_cast<VkMemoryPropertyFlags>(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
// allocate and create the image
VK_CHECK(
vmaCreateImage(_deviceManager->allocator(), &img_info, &allocinfo, &newImage.image, &newImage.allocation,
nullptr));
// if the format is a depth format, we will need to have it use the correct
// aspect flag
VkImageAspectFlags aspectFlag = VK_IMAGE_ASPECT_COLOR_BIT;
if (format == VK_FORMAT_D32_SFLOAT)
{
aspectFlag = VK_IMAGE_ASPECT_DEPTH_BIT;
}
// build an image view for the image
VkImageViewCreateInfo view_info = vkinit::imageview_create_info(format, newImage.image, aspectFlag);
view_info.subresourceRange.levelCount = img_info.mipLevels;
VK_CHECK(vkCreateImageView(_deviceManager->device(), &view_info, nullptr, &newImage.imageView));
return newImage;
}
AllocatedImage ResourceManager::create_image(const void *data, VkExtent3D size, VkFormat format,
VkImageUsageFlags usage,
bool mipmapped)
{
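// NOTE: the staging size below assumes tightly packed 4-byte texels
// (e.g. R8G8B8A8-style data); other formats would need a different stride.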
size_t data_size = size.depth * size.width * size.height * 4;
AllocatedBuffer uploadbuffer = create_buffer(data_size, VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
VMA_MEMORY_USAGE_CPU_TO_GPU);
memcpy(uploadbuffer.info.pMappedData, data, data_size);
vmaFlushAllocation(_deviceManager->allocator(), uploadbuffer.allocation, 0, data_size);
AllocatedImage new_image = create_image(size, format,
usage | VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT,
mipmapped);
PendingImageUpload pending{};
pending.staging = uploadbuffer;
pending.image = new_image.image;
pending.extent = size;
pending.format = format;
pending.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
pending.finalLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
pending.generateMips = mipmapped;
_pendingImageUploads.push_back(std::move(pending));
if (!_deferUploads)
{
process_queued_uploads_immediate();
}
return new_image;
}
void ResourceManager::destroy_image(const AllocatedImage &img) const
{
vkDestroyImageView(_deviceManager->device(), img.imageView, nullptr);
vmaDestroyImage(_deviceManager->allocator(), img.image, img.allocation);
}
GPUMeshBuffers ResourceManager::uploadMesh(std::span<uint32_t> indices, std::span<Vertex> vertices)
{
const size_t vertexBufferSize = vertices.size() * sizeof(Vertex);
const size_t indexBufferSize = indices.size() * sizeof(uint32_t);
GPUMeshBuffers newSurface{};
//create vertex buffer
newSurface.vertexBuffer = create_buffer(vertexBufferSize,
VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT |
VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT,
VMA_MEMORY_USAGE_GPU_ONLY);
// find the address of the vertex buffer
VkBufferDeviceAddressInfo deviceAdressInfo{
.sType = VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO, .buffer = newSurface.vertexBuffer.buffer
};
newSurface.vertexBufferAddress = vkGetBufferDeviceAddress(_deviceManager->device(), &deviceAdressInfo);
//create index buffer
newSurface.indexBuffer = create_buffer(indexBufferSize,
VK_BUFFER_USAGE_INDEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT,
VMA_MEMORY_USAGE_GPU_ONLY);
AllocatedBuffer staging = create_buffer(vertexBufferSize + indexBufferSize, VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
VMA_MEMORY_USAGE_CPU_ONLY);
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(_deviceManager->allocator(), staging.allocation, &allocInfo);
void *data = allocInfo.pMappedData;
// copy vertex/index data to staging (host visible)
memcpy(data, vertices.data(), vertexBufferSize);
memcpy((char *) data + vertexBufferSize, indices.data(), indexBufferSize);
// Ensure visibility on non-coherent memory before GPU copies
vmaFlushAllocation(_deviceManager->allocator(), staging.allocation, 0, vertexBufferSize + indexBufferSize);
PendingBufferUpload pending{};
pending.staging = staging;
pending.copies.push_back(BufferCopyRegion{
.destination = newSurface.vertexBuffer.buffer,
.dstOffset = 0,
.size = vertexBufferSize,
.stagingOffset = 0,
});
pending.copies.push_back(BufferCopyRegion{
.destination = newSurface.indexBuffer.buffer,
.dstOffset = 0,
.size = indexBufferSize,
.stagingOffset = vertexBufferSize,
});
_pendingBufferUploads.push_back(std::move(pending));
if (!_deferUploads)
{
process_queued_uploads_immediate();
}
return newSurface;
}
bool ResourceManager::has_pending_uploads() const
{
return !_pendingBufferUploads.empty() || !_pendingImageUploads.empty();
}
void ResourceManager::clear_pending_uploads()
{
for (auto &upload : _pendingBufferUploads)
{
destroy_buffer(upload.staging);
}
for (auto &upload : _pendingImageUploads)
{
destroy_buffer(upload.staging);
}
_pendingBufferUploads.clear();
_pendingImageUploads.clear();
}
void ResourceManager::process_queued_uploads_immediate()
{
if (!has_pending_uploads()) return;
immediate_submit([&](VkCommandBuffer cmd) {
for (auto &bufferUpload : _pendingBufferUploads)
{
for (const auto &copy : bufferUpload.copies)
{
VkBufferCopy region{};
region.srcOffset = copy.stagingOffset;
region.dstOffset = copy.dstOffset;
region.size = copy.size;
vkCmdCopyBuffer(cmd, bufferUpload.staging.buffer, copy.destination, 1, &region);
}
}
for (auto &imageUpload : _pendingImageUploads)
{
vkutil::transition_image(cmd, imageUpload.image, imageUpload.initialLayout,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL);
VkBufferImageCopy copyRegion = {};
copyRegion.bufferOffset = 0;
copyRegion.bufferRowLength = 0;
copyRegion.bufferImageHeight = 0;
copyRegion.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
copyRegion.imageSubresource.mipLevel = 0;
copyRegion.imageSubresource.baseArrayLayer = 0;
copyRegion.imageSubresource.layerCount = 1;
copyRegion.imageExtent = imageUpload.extent;
vkCmdCopyBufferToImage(cmd,
imageUpload.staging.buffer,
imageUpload.image,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
1,
&copyRegion);
if (imageUpload.generateMips)
{
vkutil::generate_mipmaps(cmd, imageUpload.image,
VkExtent2D{imageUpload.extent.width, imageUpload.extent.height});
}
else
{
vkutil::transition_image(cmd, imageUpload.image, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
imageUpload.finalLayout);
}
}
});
clear_pending_uploads();
}
void ResourceManager::register_upload_pass(RenderGraph &graph, FrameResources &frame)
{
if (_pendingBufferUploads.empty() && _pendingImageUploads.empty()) return;
auto bufferUploads = std::make_shared<std::vector<PendingBufferUpload>>(std::move(_pendingBufferUploads));
auto imageUploads = std::make_shared<std::vector<PendingImageUpload>>(std::move(_pendingImageUploads));
struct BufferBinding
{
size_t uploadIndex{};
RGBufferHandle stagingHandle{};
std::vector<RGBufferHandle> destinationHandles;
};
struct ImageBinding
{
size_t uploadIndex{};
RGBufferHandle stagingHandle{};
RGImageHandle imageHandle{};
};
auto bufferBindings = std::make_shared<std::vector<BufferBinding>>();
auto imageBindings = std::make_shared<std::vector<ImageBinding>>();
bufferBindings->reserve(bufferUploads->size());
imageBindings->reserve(imageUploads->size());
std::unordered_map<VkBuffer, RGBufferHandle> destBufferHandles;
std::unordered_map<VkImage, RGImageHandle> imageHandles;
for (size_t i = 0; i < bufferUploads->size(); ++i)
{
const auto &upload = bufferUploads->at(i);
BufferBinding binding{};
binding.uploadIndex = i;
RGImportedBufferDesc stagingDesc{};
stagingDesc.name = std::string("upload.staging.buffer.") + std::to_string(i);
stagingDesc.buffer = upload.staging.buffer;
stagingDesc.size = upload.staging.info.size;
stagingDesc.currentStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
stagingDesc.currentAccess = 0;
binding.stagingHandle = graph.import_buffer(stagingDesc);
binding.destinationHandles.reserve(upload.copies.size());
for (const auto &copy : upload.copies)
{
RGBufferHandle handle{};
auto it = destBufferHandles.find(copy.destination);
if (it == destBufferHandles.end())
{
RGImportedBufferDesc dstDesc{};
dstDesc.name = std::string("upload.dst.buffer.") + std::to_string(destBufferHandles.size());
dstDesc.buffer = copy.destination;
dstDesc.size = copy.dstOffset + copy.size;
dstDesc.currentStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
dstDesc.currentAccess = 0;
handle = graph.import_buffer(dstDesc);
destBufferHandles.emplace(copy.destination, handle);
}
else
{
handle = it->second;
}
binding.destinationHandles.push_back(handle);
}
bufferBindings->push_back(std::move(binding));
}
for (size_t i = 0; i < imageUploads->size(); ++i)
{
const auto &upload = imageUploads->at(i);
ImageBinding binding{};
binding.uploadIndex = i;
RGImportedBufferDesc stagingDesc{};
stagingDesc.name = std::string("upload.staging.image.") + std::to_string(i);
stagingDesc.buffer = upload.staging.buffer;
stagingDesc.size = upload.staging.info.size;
stagingDesc.currentStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
stagingDesc.currentAccess = 0;
binding.stagingHandle = graph.import_buffer(stagingDesc);
auto it = imageHandles.find(upload.image);
if (it == imageHandles.end())
{
RGImportedImageDesc imgDesc{};
imgDesc.name = std::string("upload.image.") + std::to_string(imageHandles.size());
imgDesc.image = upload.image;
imgDesc.imageView = VK_NULL_HANDLE;
imgDesc.format = upload.format;
imgDesc.extent = {upload.extent.width, upload.extent.height};
imgDesc.currentLayout = upload.initialLayout;
binding.imageHandle = graph.import_image(imgDesc);
imageHandles.emplace(upload.image, binding.imageHandle);
}
else
{
binding.imageHandle = it->second;
}
imageBindings->push_back(std::move(binding));
}
graph.add_pass("ResourceUploads", RGPassType::Transfer,
[bufferBindings, imageBindings](RGPassBuilder &builder, EngineContext *)
{
for (const auto &binding : *bufferBindings)
{
builder.read_buffer(binding.stagingHandle, RGBufferUsage::TransferSrc);
for (auto handle : binding.destinationHandles)
{
builder.write_buffer(handle, RGBufferUsage::TransferDst);
}
}
for (const auto &binding : *imageBindings)
{
builder.read_buffer(binding.stagingHandle, RGBufferUsage::TransferSrc);
builder.write(binding.imageHandle, RGImageUsage::TransferDst);
}
},
[bufferUploads, imageUploads, bufferBindings, imageBindings, this](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *)
{
for (const auto &binding : *bufferBindings)
{
const auto &upload = bufferUploads->at(binding.uploadIndex);
VkBuffer staging = res.buffer(binding.stagingHandle);
for (size_t copyIndex = 0; copyIndex < upload.copies.size(); ++copyIndex)
{
const auto &copy = upload.copies[copyIndex];
VkBuffer destination = res.buffer(binding.destinationHandles[copyIndex]);
VkBufferCopy region{};
region.srcOffset = copy.stagingOffset;
region.dstOffset = copy.dstOffset;
region.size = copy.size;
vkCmdCopyBuffer(cmd, staging, destination, 1, &region);
}
}
for (const auto &binding : *imageBindings)
{
const auto &upload = imageUploads->at(binding.uploadIndex);
VkBuffer staging = res.buffer(binding.stagingHandle);
VkImage image = res.image(binding.imageHandle);
VkBufferImageCopy region{};
region.bufferOffset = 0;
region.bufferRowLength = 0;
region.bufferImageHeight = 0;
region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
region.imageSubresource.mipLevel = 0;
region.imageSubresource.baseArrayLayer = 0;
region.imageSubresource.layerCount = 1;
region.imageExtent = upload.extent;
vkCmdCopyBufferToImage(cmd, staging, image, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
if (upload.generateMips)
{
vkutil::generate_mipmaps(cmd, image, VkExtent2D{upload.extent.width, upload.extent.height});
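// generate_mipmaps is expected to leave the image in SHADER_READ_ONLY_OPTIMAL;
// the transition below moves it back to TRANSFER_DST_OPTIMAL so the layout
// matches what the render graph tracked for this pass (assumed intent).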
vkutil::transition_image(cmd, image, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL);
}
}
});
frame._deletionQueue.push_function([buffers = bufferUploads, images = imageUploads, this]()
{
for (const auto &upload : *buffers)
{
destroy_buffer(upload.staging);
}
for (const auto &upload : *images)
{
destroy_buffer(upload.staging);
}
});
}

src/core/vk_resource.h Normal file

@@ -0,0 +1,82 @@
#pragma once
#include <core/vk_types.h>
#include <functional>
#include <vector>
class DeviceManager;
class RenderGraph;
struct FrameResources;
class ResourceManager
{
public:
struct BufferCopyRegion
{
VkBuffer destination = VK_NULL_HANDLE;
VkDeviceSize dstOffset = 0;
VkDeviceSize size = 0;
VkDeviceSize stagingOffset = 0;
};
struct PendingBufferUpload
{
AllocatedBuffer staging;
std::vector<BufferCopyRegion> copies;
};
struct PendingImageUpload
{
AllocatedBuffer staging;
VkImage image = VK_NULL_HANDLE;
VkExtent3D extent{0, 0, 0};
VkFormat format = VK_FORMAT_UNDEFINED;
VkImageLayout initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
VkImageLayout finalLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
bool generateMips = false;
};
void init(DeviceManager *deviceManager);
void cleanup();
AllocatedBuffer create_buffer(size_t allocSize, VkBufferUsageFlags usage, VmaMemoryUsage memoryUsage) const;
void destroy_buffer(const AllocatedBuffer &buffer) const;
AllocatedImage create_image(VkExtent3D size, VkFormat format, VkImageUsageFlags usage,
bool mipmapped = false) const;
AllocatedImage create_image(const void *data, VkExtent3D size, VkFormat format, VkImageUsageFlags usage,
bool mipmapped = false);
void destroy_image(const AllocatedImage &img) const;
GPUMeshBuffers uploadMesh(std::span<uint32_t> indices, std::span<Vertex> vertices);
void immediate_submit(std::function<void(VkCommandBuffer)> &&function) const;
bool has_pending_uploads() const;
const std::vector<PendingBufferUpload> &pending_buffer_uploads() const { return _pendingBufferUploads; }
const std::vector<PendingImageUpload> &pending_image_uploads() const { return _pendingImageUploads; }
void clear_pending_uploads();
void process_queued_uploads_immediate();
void register_upload_pass(RenderGraph &graph, FrameResources &frame);
void set_deferred_uploads(bool enabled) { _deferUploads = enabled; }
bool deferred_uploads() const { return _deferUploads; }
private:
DeviceManager *_deviceManager = nullptr;
// immediate submit structures
VkFence _immFence = nullptr;
VkCommandBuffer _immCommandBuffer = nullptr;
VkCommandPool _immCommandPool = nullptr;
std::vector<PendingBufferUpload> _pendingBufferUploads;
std::vector<PendingImageUpload> _pendingImageUploads;
bool _deferUploads = false;
DeletionQueue _deletionQueue;
};
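// Deferred-upload sketch; the render-graph and frame objects are assumed to
// exist in the calling code, and the variable names are illustrative:
//
//   resources.set_deferred_uploads(true);
//   GPUMeshBuffers mesh = resources.uploadMesh(indices, vertices); // queued only
//   resources.register_upload_pass(graph, frame); // copies recorded by the graph
//
// With deferred uploads disabled (the default), uploadMesh/create_image submit
// the staging copies immediately via immediate_submit.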


@@ -0,0 +1,47 @@
#include "vk_sampler_manager.h"
#include "vk_device.h"
void SamplerManager::init(DeviceManager *deviceManager)
{
_deviceManager = deviceManager;
VkSamplerCreateInfo sampl{.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO};
// Sensible, cross-vendor defaults
sampl.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sampl.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sampl.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sampl.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
sampl.minLod = 0.0f;
sampl.maxLod = VK_LOD_CLAMP_NONE;
sampl.mipLodBias = 0.0f;
sampl.anisotropyEnable = VK_FALSE; // set to VK_TRUE and fill maxAnisotropy if the samplerAnisotropy device feature is enabled
sampl.borderColor = VK_BORDER_COLOR_INT_OPAQUE_BLACK;
sampl.unnormalizedCoordinates = VK_FALSE;
// Nearest defaults
sampl.magFilter = VK_FILTER_NEAREST;
sampl.minFilter = VK_FILTER_NEAREST;
vkCreateSampler(_deviceManager->device(), &sampl, nullptr, &_defaultSamplerNearest);
// Linear defaults
sampl.magFilter = VK_FILTER_LINEAR;
sampl.minFilter = VK_FILTER_LINEAR;
vkCreateSampler(_deviceManager->device(), &sampl, nullptr, &_defaultSamplerLinear);
}
void SamplerManager::cleanup()
{
if (!_deviceManager) return;
if (_defaultSamplerNearest)
{
vkDestroySampler(_deviceManager->device(), _defaultSamplerNearest, nullptr);
_defaultSamplerNearest = VK_NULL_HANDLE;
}
if (_defaultSamplerLinear)
{
vkDestroySampler(_deviceManager->device(), _defaultSamplerLinear, nullptr);
_defaultSamplerLinear = VK_NULL_HANDLE;
}
}


@@ -0,0 +1,22 @@
#pragma once
#include <core/vk_types.h>
class DeviceManager;
class SamplerManager
{
public:
void init(DeviceManager *deviceManager);
void cleanup();
VkSampler defaultLinear() const { return _defaultSamplerLinear; }
VkSampler defaultNearest() const { return _defaultSamplerNearest; }
private:
DeviceManager *_deviceManager = nullptr;
VkSampler _defaultSamplerLinear = VK_NULL_HANDLE;
VkSampler _defaultSamplerNearest = VK_NULL_HANDLE;
};

src/core/vk_swapchain.cpp Normal file

@@ -0,0 +1,197 @@
#include "vk_swapchain.h"
#include <SDL_video.h>
#include "vk_device.h"
#include "vk_initializers.h"
#include "vk_resource.h"
void SwapchainManager::init_swapchain()
{
create_swapchain(_windowExtent.width, _windowExtent.height);
// Create the images used across the frame (draw, depth, GBuffer), sized to the
// current window extent. The creation logic lives in the local lambda below;
// resize_swapchain() currently repeats the same steps at the new size.
//
// On creation we also push a cleanup lambda to _deletionQueue for final shutdown.
// On resize we flush that queue first to destroy the previous resources.
auto create_frame_images = [this]() {
VkExtent3D drawImageExtent = { _windowExtent.width, _windowExtent.height, 1 };
// Draw HDR target
_drawImage.imageFormat = VK_FORMAT_R16G16B16A16_SFLOAT;
_drawImage.imageExtent = drawImageExtent;
VkImageUsageFlags drawImageUsages{};
drawImageUsages |= VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
drawImageUsages |= VK_IMAGE_USAGE_STORAGE_BIT;
drawImageUsages |= VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
VkImageCreateInfo rimg_info = vkinit::image_create_info(_drawImage.imageFormat, drawImageUsages, drawImageExtent);
VmaAllocationCreateInfo rimg_allocinfo = {};
rimg_allocinfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
rimg_allocinfo.requiredFlags = static_cast<VkMemoryPropertyFlags>(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
vmaCreateImage(_deviceManager->allocator(), &rimg_info, &rimg_allocinfo,
&_drawImage.image, &_drawImage.allocation, nullptr);
VkImageViewCreateInfo rview_info = vkinit::imageview_create_info(_drawImage.imageFormat, _drawImage.image,
VK_IMAGE_ASPECT_COLOR_BIT);
VK_CHECK(vkCreateImageView(_deviceManager->device(), &rview_info, nullptr, &_drawImage.imageView));
// Depth
_depthImage.imageFormat = VK_FORMAT_D32_SFLOAT;
_depthImage.imageExtent = drawImageExtent;
VkImageUsageFlags depthImageUsages{};
depthImageUsages |= VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
VkImageCreateInfo dimg_info = vkinit::image_create_info(_depthImage.imageFormat, depthImageUsages, drawImageExtent);
vmaCreateImage(_deviceManager->allocator(), &dimg_info, &rimg_allocinfo, &_depthImage.image,
&_depthImage.allocation, nullptr);
VkImageViewCreateInfo dview_info = vkinit::imageview_create_info(_depthImage.imageFormat, _depthImage.image,
VK_IMAGE_ASPECT_DEPTH_BIT);
VK_CHECK(vkCreateImageView(_deviceManager->device(), &dview_info, nullptr, &_depthImage.imageView));
// GBuffer targets (non-sRGB formats so lighting stays in linear space)
_gBufferPosition = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R16G16B16A16_SFLOAT,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_gBufferNormal = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R16G16B16A16_SFLOAT,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_gBufferAlbedo = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_deletionQueue.push_function([=]() {
vkDestroyImageView(_deviceManager->device(), _drawImage.imageView, nullptr);
vmaDestroyImage(_deviceManager->allocator(), _drawImage.image, _drawImage.allocation);
vkDestroyImageView(_deviceManager->device(), _depthImage.imageView, nullptr);
vmaDestroyImage(_deviceManager->allocator(), _depthImage.image, _depthImage.allocation);
_resourceManager->destroy_image(_gBufferPosition);
_resourceManager->destroy_image(_gBufferNormal);
_resourceManager->destroy_image(_gBufferAlbedo);
});
};
create_frame_images();
}
void SwapchainManager::cleanup()
{
_deletionQueue.flush();
destroy_swapchain();
fmt::print("SwapchainManager::cleanup()\n");
}
void SwapchainManager::create_swapchain(uint32_t width, uint32_t height)
{
vkb::SwapchainBuilder swapchainBuilder{
_deviceManager->physicalDevice(), _deviceManager->device(), _deviceManager->surface()
};
_swapchainImageFormat = VK_FORMAT_B8G8R8A8_UNORM;
vkb::Swapchain vkbSwapchain = swapchainBuilder
//.use_default_format_selection()
.set_desired_format(VkSurfaceFormatKHR{
.format = _swapchainImageFormat, .colorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
})
//use vsync present mode
.set_desired_present_mode(VK_PRESENT_MODE_FIFO_KHR)
.set_desired_extent(width, height)
.add_image_usage_flags(VK_IMAGE_USAGE_TRANSFER_DST_BIT)
.build()
.value();
_swapchainExtent = vkbSwapchain.extent;
//store swapchain and its related images
_swapchain = vkbSwapchain.swapchain;
_swapchainImages = vkbSwapchain.get_images().value();
_swapchainImageViews = vkbSwapchain.get_image_views().value();
}
void SwapchainManager::destroy_swapchain() const
{
vkDestroySwapchainKHR(_deviceManager->device(), _swapchain, nullptr);
for (auto _swapchainImageView: _swapchainImageViews)
{
vkDestroyImageView(_deviceManager->device(), _swapchainImageView, nullptr);
}
}
void SwapchainManager::resize_swapchain(struct SDL_Window *window)
{
vkDeviceWaitIdle(_deviceManager->device());
destroy_swapchain();
// Destroy per-frame images before recreating them
_deletionQueue.flush();
int w, h;
SDL_GetWindowSize(window, &w, &h);
_windowExtent.width = w;
_windowExtent.height = h;
create_swapchain(_windowExtent.width, _windowExtent.height);
// Recreate frame images at the new size
// (duplicates the image-creation logic from init_swapchain)
VkExtent3D drawImageExtent = { _windowExtent.width, _windowExtent.height, 1 };
_drawImage.imageFormat = VK_FORMAT_R16G16B16A16_SFLOAT;
_drawImage.imageExtent = drawImageExtent;
VkImageUsageFlags drawImageUsages{};
drawImageUsages |= VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
drawImageUsages |= VK_IMAGE_USAGE_STORAGE_BIT;
drawImageUsages |= VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
VkImageCreateInfo rimg_info = vkinit::image_create_info(_drawImage.imageFormat, drawImageUsages, drawImageExtent);
VmaAllocationCreateInfo rimg_allocinfo = {};
rimg_allocinfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
rimg_allocinfo.requiredFlags = static_cast<VkMemoryPropertyFlags>(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
vmaCreateImage(_deviceManager->allocator(), &rimg_info, &rimg_allocinfo, &_drawImage.image, &_drawImage.allocation,
nullptr);
VkImageViewCreateInfo rview_info = vkinit::imageview_create_info(_drawImage.imageFormat, _drawImage.image,
VK_IMAGE_ASPECT_COLOR_BIT);
VK_CHECK(vkCreateImageView(_deviceManager->device(), &rview_info, nullptr, &_drawImage.imageView));
_depthImage.imageFormat = VK_FORMAT_D32_SFLOAT;
_depthImage.imageExtent = drawImageExtent;
VkImageUsageFlags depthImageUsages{};
depthImageUsages |= VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
VkImageCreateInfo dimg_info = vkinit::image_create_info(_depthImage.imageFormat, depthImageUsages, drawImageExtent);
vmaCreateImage(_deviceManager->allocator(), &dimg_info, &rimg_allocinfo, &_depthImage.image,
&_depthImage.allocation, nullptr);
VkImageViewCreateInfo dview_info = vkinit::imageview_create_info(_depthImage.imageFormat, _depthImage.image,
VK_IMAGE_ASPECT_DEPTH_BIT);
VK_CHECK(vkCreateImageView(_deviceManager->device(), &dview_info, nullptr, &_depthImage.imageView));
_gBufferPosition = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R16G16B16A16_SFLOAT,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_gBufferNormal = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R16G16B16A16_SFLOAT,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_gBufferAlbedo = _resourceManager->create_image(drawImageExtent, VK_FORMAT_R8G8B8A8_UNORM,
VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
_deletionQueue.push_function([=]() {
vkDestroyImageView(_deviceManager->device(), _drawImage.imageView, nullptr);
vmaDestroyImage(_deviceManager->allocator(), _drawImage.image, _drawImage.allocation);
vkDestroyImageView(_deviceManager->device(), _depthImage.imageView, nullptr);
vmaDestroyImage(_deviceManager->allocator(), _depthImage.image, _depthImage.allocation);
_resourceManager->destroy_image(_gBufferPosition);
_resourceManager->destroy_image(_gBufferNormal);
_resourceManager->destroy_image(_gBufferAlbedo);
});
resize_requested = false;
}

src/core/vk_swapchain.h Normal file

@@ -0,0 +1,55 @@
#pragma once
#include <core/vk_types.h>
class ResourceManager;
class DeviceManager;
class SwapchainManager
{
public:
void init(DeviceManager *deviceManager, ResourceManager *resourceManager)
{ _deviceManager = deviceManager; _resourceManager = resourceManager; }
void cleanup();
void init_swapchain();
void create_swapchain(uint32_t width, uint32_t height);
void destroy_swapchain() const;
void resize_swapchain(struct SDL_Window *window);
VkSwapchainKHR swapchain() const { return _swapchain; }
VkFormat swapchainImageFormat() const { return _swapchainImageFormat; }
VkExtent2D swapchainExtent() const { return _swapchainExtent; }
const std::vector<VkImage> &swapchainImages() const { return _swapchainImages; }
const std::vector<VkImageView> &swapchainImageViews() const { return _swapchainImageViews; }
AllocatedImage drawImage() const { return _drawImage; }
AllocatedImage depthImage() const { return _depthImage; }
AllocatedImage gBufferPosition() const { return _gBufferPosition; }
AllocatedImage gBufferNormal() const { return _gBufferNormal; }
AllocatedImage gBufferAlbedo() const { return _gBufferAlbedo; }
VkExtent2D windowExtent() const { return _windowExtent; }
bool resize_requested{false};
private:
DeviceManager *_deviceManager = nullptr;
ResourceManager* _resourceManager = nullptr;
VkSwapchainKHR _swapchain = nullptr;
VkFormat _swapchainImageFormat = {};
VkExtent2D _swapchainExtent = {};
VkExtent2D _windowExtent{1920, 1080};
std::vector<VkImage> _swapchainImages;
std::vector<VkImageView> _swapchainImageViews;
AllocatedImage _drawImage = {};
AllocatedImage _depthImage = {};
AllocatedImage _gBufferPosition = {};
AllocatedImage _gBufferNormal = {};
AllocatedImage _gBufferAlbedo = {};
DeletionQueue _deletionQueue;
};

src/core/vk_types.h Normal file

@@ -0,0 +1,155 @@
// vk_types.h : Shared Vulkan types, helper structs, and the VK_CHECK macro
// used throughout the engine.
#pragma once
#include <memory>
#include <optional>
#include <string>
#include <vector>
#include <span>
#include <array>
#include <functional>
#include <deque>
#include <vulkan/vulkan.h>
#include <vulkan/vk_enum_string_helper.h>
#include <vk_mem_alloc.h>
#include <fmt/core.h>
#include <glm/mat4x4.hpp>
#include <glm/vec4.hpp>
#define VK_CHECK(x) \
do { \
VkResult err = x; \
if (err) { \
fmt::println("Detected Vulkan error: {}", string_VkResult(err)); \
abort(); \
} \
} while (0)
struct DeletionQueue
{
std::deque<std::function<void()> > deletors;
void push_function(std::function<void()> &&function)
{
deletors.push_back(function);
}
void flush()
{
// reverse iterate the deletion queue to execute all the functions
for (auto it = deletors.rbegin(); it != deletors.rend(); it++)
{
(*it)(); //call functors
}
deletors.clear();
}
};
struct AllocatedImage
{
VkImage image;
VkImageView imageView;
VmaAllocation allocation;
VkFormat imageFormat;
VkExtent3D imageExtent;
};
struct AllocatedBuffer {
VkBuffer buffer;
VmaAllocation allocation;
VmaAllocationInfo info;
};
struct GPUSceneData {
glm::mat4 view;
glm::mat4 proj;
glm::mat4 viewproj;
glm::mat4 lightViewProj;
glm::vec4 ambientColor;
glm::vec4 sunlightDirection; // w for sun power
glm::vec4 sunlightColor;
};
enum class MaterialPass :uint8_t {
MainColor,
Transparent,
Other
};
struct MaterialPipeline {
VkPipeline pipeline;
VkPipelineLayout layout;
};
struct MaterialInstance {
MaterialPipeline* pipeline;
VkDescriptorSet materialSet;
MaterialPass passType;
};
struct Vertex {
glm::vec3 position;
float uv_x;
glm::vec3 normal;
float uv_y;
glm::vec4 color;
};
// holds the resources needed for a mesh
struct GPUMeshBuffers {
AllocatedBuffer indexBuffer;
AllocatedBuffer vertexBuffer;
VkDeviceAddress vertexBufferAddress;
};
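// The vertex buffer is created with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT
// (see ResourceManager::uploadMesh), so shaders can pull vertices through the
// address passed in GPUDrawPushConstants below.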
// push constants for our mesh object draws
struct GPUDrawPushConstants {
glm::mat4 worldMatrix;
VkDeviceAddress vertexBuffer;
};
struct DrawContext;
// base class for a renderable dynamic object
class IRenderable {
virtual void Draw(const glm::mat4& topMatrix, DrawContext& ctx) = 0;
};
// implementation of a drawable scene node.
// the scene node can hold children and will also keep a transform to propagate
// to them
struct Node : public IRenderable {
// parent pointer must be a weak pointer to avoid circular dependencies
std::weak_ptr<Node> parent;
std::vector<std::shared_ptr<Node>> children;
glm::mat4 localTransform;
glm::mat4 worldTransform;
void refreshTransform(const glm::mat4& parentMatrix)
{
worldTransform = parentMatrix * localTransform;
for (auto c : children) {
c->refreshTransform(worldTransform);
}
}
virtual void Draw(const glm::mat4& topMatrix, DrawContext& ctx)
{
// draw children
for (auto& c : children) {
c->Draw(topMatrix, ctx);
}
}
};

src/main.cpp Normal file

@@ -0,0 +1,14 @@
#include "core/vk_engine.h"
int main(int argc, char* argv[])
{
VulkanEngine engine;
engine.init();
engine.run();
engine.cleanup();
return 0;
}

src/render/primitives.h Normal file

@@ -0,0 +1,82 @@
#pragma once
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
#include "core/vk_types.h"
namespace primitives {
inline void buildCube(std::vector<Vertex>& vertices, std::vector<uint32_t>& indices) {
vertices.clear();
indices.clear();
struct Face {
glm::vec3 normal;
glm::vec3 v0, v1, v2, v3;
} faces[6] = {
{ {0,0,1}, { -0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f, 0.5f}, { -0.5f, 0.5f, 0.5f}, { 0.5f, 0.5f, 0.5f} },
{ {0,0,-1},{ -0.5f,-0.5f,-0.5f}, { -0.5f, 0.5f,-0.5f}, { 0.5f,-0.5f,-0.5f}, { 0.5f, 0.5f,-0.5f} },
{ {0,1,0}, { -0.5f, 0.5f, 0.5f}, { 0.5f, 0.5f, 0.5f}, { -0.5f, 0.5f,-0.5f}, { 0.5f, 0.5f,-0.5f} },
{ {0,-1,0},{ -0.5f,-0.5f, 0.5f}, { -0.5f,-0.5f,-0.5f}, { 0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f,-0.5f} },
{ {1,0,0}, { 0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f,-0.5f}, { 0.5f, 0.5f, 0.5f}, { 0.5f, 0.5f,-0.5f} },
{ {-1,0,0},{ -0.5f,-0.5f, 0.5f}, { -0.5f, 0.5f, 0.5f}, { -0.5f,-0.5f,-0.5f}, { -0.5f, 0.5f,-0.5f} }
};
for (auto& f : faces) {
uint32_t start = (uint32_t)vertices.size();
Vertex v0{f.v0, 0, f.normal, 0, glm::vec4(1.0f)};
Vertex v1{f.v1, 1, f.normal, 0, glm::vec4(1.0f)};
Vertex v2{f.v2, 0, f.normal, 1, glm::vec4(1.0f)};
Vertex v3{f.v3, 1, f.normal, 1, glm::vec4(1.0f)};
vertices.push_back(v0);
vertices.push_back(v1);
vertices.push_back(v2);
vertices.push_back(v3);
indices.push_back(start + 0);
indices.push_back(start + 1);
indices.push_back(start + 2);
indices.push_back(start + 2);
indices.push_back(start + 1);
indices.push_back(start + 3);
}
}
inline void buildSphere(std::vector<Vertex>& vertices, std::vector<uint32_t>& indices, int sectors = 16, int stacks = 16) {
vertices.clear();
indices.clear();
float radius = 0.5f;
for (int i = 0; i <= stacks; ++i) {
float v = (float)i / stacks;
const float phi = v * glm::pi<float>();
float y = cos(phi);
float r = sin(phi);
for (int j = 0; j <= sectors; ++j) {
float u = (float)j / sectors;
float theta = u * glm::two_pi<float>();
float x = r * cos(theta);
float z = r * sin(theta);
Vertex vert;
vert.position = glm::vec3(x, y, z) * radius;
vert.normal = glm::normalize(glm::vec3(x, y, z));
vert.uv_x = u;
vert.uv_y = 1.0f - v;
vert.color = glm::vec4(1.0f);
vertices.push_back(vert);
}
}
for (int i = 0; i < stacks; ++i) {
for (int j = 0; j < sectors; ++j) {
uint32_t first = i * (sectors + 1) + j;
uint32_t second = first + sectors + 1;
indices.push_back(first);
indices.push_back(second);
indices.push_back(first + 1);
indices.push_back(first + 1);
indices.push_back(second);
indices.push_back(second + 1);
}
}
}
} // namespace primitives
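// Usage sketch (the ResourceManager instance `resources` is illustrative):
//
//   std::vector<Vertex> vertices;
//   std::vector<uint32_t> indices;
//   primitives::buildSphere(vertices, indices, 32, 32);
//   GPUMeshBuffers sphere = resources.uploadMesh(indices, vertices);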

src/render/rg_builder.cpp Normal file

@@ -0,0 +1,98 @@
#include <render/rg_builder.h>
#include <render/rg_resources.h>
// ---- RGPassResources ----
VkImage RGPassResources::image(RGImageHandle h) const
{
const RGImageRecord *rec = _registry ? _registry->get_image(h) : nullptr;
return rec ? rec->image : VK_NULL_HANDLE;
}
VkImageView RGPassResources::image_view(RGImageHandle h) const
{
const RGImageRecord *rec = _registry ? _registry->get_image(h) : nullptr;
return rec ? rec->imageView : VK_NULL_HANDLE;
}
VkBuffer RGPassResources::buffer(RGBufferHandle h) const
{
const RGBufferRecord *rec = _registry ? _registry->get_buffer(h) : nullptr;
return rec ? rec->buffer : VK_NULL_HANDLE;
}
// ---- RGPassBuilder ----
void RGPassBuilder::read(RGImageHandle h, RGImageUsage usage)
{
_imageReads.push_back({h, usage});
}
void RGPassBuilder::write(RGImageHandle h, RGImageUsage usage)
{
_imageWrites.push_back({h, usage});
}
void RGPassBuilder::read_buffer(RGBufferHandle h, RGBufferUsage usage)
{
_bufferReads.push_back({h, usage});
}
void RGPassBuilder::write_buffer(RGBufferHandle h, RGBufferUsage usage)
{
_bufferWrites.push_back({h, usage});
}
void RGPassBuilder::read_buffer(VkBuffer buffer, RGBufferUsage usage, VkDeviceSize size, const char* name)
{
if (!_registry || buffer == VK_NULL_HANDLE) return;
// Dedup/import
RGBufferHandle h = _registry->find_buffer(buffer);
if (!h.valid())
{
RGImportedBufferDesc d{};
d.name = name ? name : "external.buffer";
d.buffer = buffer;
d.size = size;
d.currentStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
d.currentAccess = 0;
h = _registry->add_imported(d);
}
read_buffer(h, usage);
}
void RGPassBuilder::write_buffer(VkBuffer buffer, RGBufferUsage usage, VkDeviceSize size, const char* name)
{
if (!_registry || buffer == VK_NULL_HANDLE) return;
RGBufferHandle h = _registry->find_buffer(buffer);
if (!h.valid())
{
RGImportedBufferDesc d{};
d.name = name ? name : "external.buffer";
d.buffer = buffer;
d.size = size;
d.currentStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
d.currentAccess = 0;
h = _registry->add_imported(d);
}
write_buffer(h, usage);
}
void RGPassBuilder::write_color(RGImageHandle h, bool clearOnLoad, VkClearValue clear)
{
RGAttachmentInfo a{};
a.image = h;
a.clearOnLoad = clearOnLoad;
a.clear = clear;
a.store = true;
_colors.push_back(a);
write(h, RGImageUsage::ColorAttachment);
}
void RGPassBuilder::write_depth(RGImageHandle h, bool clearOnLoad, VkClearValue clear)
{
if (_depthRef == nullptr) _depthRef = &_depthTemp;
_depthRef->image = h;
_depthRef->clearOnLoad = clearOnLoad;
_depthRef->clear = clear;
_depthRef->store = true;
write(h, RGImageUsage::DepthAttachment);
}

src/render/rg_builder.h Normal file

@@ -0,0 +1,90 @@
#pragma once
#include <render/rg_types.h>
#include <vector>
class RGResourceRegistry;
class EngineContext;
struct RGPassImageAccess
{
RGImageHandle image;
RGImageUsage usage;
};
struct RGPassBufferAccess
{
RGBufferHandle buffer;
RGBufferUsage usage;
};
// Read-only interface for pass record callbacks to fetch resolved resources
class RGPassResources
{
public:
RGPassResources(const RGResourceRegistry *registry) : _registry(registry)
{
}
VkImage image(RGImageHandle h) const;
VkImageView image_view(RGImageHandle h) const;
VkBuffer buffer(RGBufferHandle h) const;
private:
const RGResourceRegistry *_registry;
};
// Builder used inside add_*_pass setup lambda to declare reads/writes/attachments
class RGPassBuilder
{
public:
RGPassBuilder(RGResourceRegistry *registry,
std::vector<RGPassImageAccess> &reads,
std::vector<RGPassImageAccess> &writes,
std::vector<RGPassBufferAccess> &bufferReads,
std::vector<RGPassBufferAccess> &bufferWrites,
std::vector<RGAttachmentInfo> &colorAttachments,
RGAttachmentInfo *&depthAttachmentRef)
: _registry(registry)
, _imageReads(reads)
, _imageWrites(writes)
, _bufferReads(bufferReads)
, _bufferWrites(bufferWrites)
, _colors(colorAttachments)
, _depthRef(depthAttachmentRef)
{
}
// Declare that the pass will sample/read an image
void read(RGImageHandle h, RGImageUsage usage);
// Declare that the pass will write to an image
void write(RGImageHandle h, RGImageUsage usage);
// Declare buffer accesses
void read_buffer(RGBufferHandle h, RGBufferUsage usage);
void write_buffer(RGBufferHandle h, RGBufferUsage usage);
// Convenience: declare access to external VkBuffer. Will import/dedup and
// register the access for this pass.
void read_buffer(VkBuffer buffer, RGBufferUsage usage, VkDeviceSize size = 0, const char* name = nullptr);
void write_buffer(VkBuffer buffer, RGBufferUsage usage, VkDeviceSize size = 0, const char* name = nullptr);
// Graphics attachments
void write_color(RGImageHandle h, bool clearOnLoad = false, VkClearValue clear = {});
void write_depth(RGImageHandle h, bool clearOnLoad = false, VkClearValue clear = {});
private:
RGResourceRegistry *_registry;
std::vector<RGPassImageAccess> &_imageReads;
std::vector<RGPassImageAccess> &_imageWrites;
std::vector<RGPassBufferAccess> &_bufferReads;
std::vector<RGPassBufferAccess> &_bufferWrites;
std::vector<RGAttachmentInfo> &_colors;
RGAttachmentInfo *&_depthRef;
RGAttachmentInfo _depthTemp{}; // temporary storage used during build
};
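// Typical pass declaration sketch. The handles, lambda captures and the
// `RGPassType::Graphics` enumerator are assumptions for illustration; the
// callback signatures match RenderGraph::add_pass.
//
//   graph.add_pass("GBuffer", RGPassType::Graphics,
//       [=](RGPassBuilder &b, EngineContext *) {
//           b.write_color(gbufferAlbedo, /*clearOnLoad=*/true);
//           b.write_depth(depthTarget, /*clearOnLoad=*/true);
//           b.read(shadowMap, RGImageUsage::SampledFragment);
//       },
//       [=](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *) {
//           // record draws; resolve handles via res.image()/res.image_view()/res.buffer()
//       });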

src/render/rg_graph.cpp Normal file

@@ -0,0 +1,887 @@
#include <render/rg_graph.h>
#include <core/engine_context.h>
#include <core/vk_images.h>
#include <core/vk_initializers.h>
#include <unordered_map>
#include <unordered_set>
#include <queue>
#include <algorithm>
#include <cstdio>
#include <core/vk_swapchain.h>
#include <core/vk_debug.h>
#include <fmt/core.h>
#include "vk_device.h"
void RenderGraph::init(EngineContext *ctx)
{
_context = ctx;
_resources.init(ctx);
}
void RenderGraph::clear()
{
_passes.clear();
_resources.reset();
}
RGImageHandle RenderGraph::import_image(const RGImportedImageDesc &desc)
{
return _resources.add_imported(desc);
}
RGBufferHandle RenderGraph::import_buffer(const RGImportedBufferDesc &desc)
{
return _resources.add_imported(desc);
}
RGImageHandle RenderGraph::create_image(const RGImageDesc &desc)
{
return _resources.add_transient(desc);
}
RGImageHandle RenderGraph::create_depth_image(const char* name, VkExtent2D extent, VkFormat format)
{
RGImageDesc d{};
d.name = name ? name : "depth.transient";
d.format = format;
d.extent = extent;
d.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
return create_image(d);
}
RGBufferHandle RenderGraph::create_buffer(const RGBufferDesc &desc)
{
return _resources.add_transient(desc);
}
void RenderGraph::add_pass(const char *name, RGPassType type, BuildCallback build, RecordCallback record)
{
Pass p{};
p.name = name;
p.type = type;
p.record = std::move(record);
// Build declarations via builder
RGAttachmentInfo *depthRef = nullptr;
RGPassBuilder builder(&_resources,
p.imageReads,
p.imageWrites,
p.bufferReads,
p.bufferWrites,
p.colorAttachments,
depthRef);
if (build) build(builder, _context);
if (depthRef)
{
p.hasDepth = true;
p.depthAttachment = *depthRef; // copy declared depth attachment
}
_passes.push_back(std::move(p));
}
void RenderGraph::add_pass(const char *name, RGPassType type, RecordCallback record)
{
// No declarations
add_pass(name, type, nullptr, std::move(record));
}
bool RenderGraph::compile()
{
if (!_context) return false;
// --- Build dependency graph (topological sort) from declared reads/writes ---
const int n = static_cast<int>(_passes.size());
if (n <= 1)
{
// trivial order; still compute barriers below
}
else
{
std::vector<std::unordered_set<int> > adjSet(n);
std::vector<int> indeg(n, 0);
auto add_edge = [&](int u, int v) {
if (u == v) return;
if (u < 0 || v < 0 || u >= n || v >= n) return;
if (adjSet[u].insert(v).second) indeg[v]++;
};
std::unordered_map<uint32_t, int> lastWriterImage;
std::unordered_map<uint32_t, std::vector<int> > lastReadersImage;
std::unordered_map<uint32_t, int> lastWriterBuffer;
std::unordered_map<uint32_t, std::vector<int> > lastReadersBuffer;
for (int i = 0; i < n; ++i)
{
const auto &p = _passes[i];
if (!p.enabled) continue;
// Image reads
for (const auto &r: p.imageReads)
{
if (!r.image.valid()) continue;
auto it = lastWriterImage.find(r.image.id);
if (it != lastWriterImage.end()) add_edge(it->second, i);
lastReadersImage[r.image.id].push_back(i);
}
// Image writes
for (const auto &w: p.imageWrites)
{
if (!w.image.valid()) continue;
auto itW = lastWriterImage.find(w.image.id);
if (itW != lastWriterImage.end()) add_edge(itW->second, i); // WAW
auto itR = lastReadersImage.find(w.image.id);
if (itR != lastReadersImage.end())
{
for (int rIdx: itR->second) add_edge(rIdx, i); // WAR
itR->second.clear();
}
lastWriterImage[w.image.id] = i;
}
// Buffer reads
for (const auto &r: p.bufferReads)
{
if (!r.buffer.valid()) continue;
auto it = lastWriterBuffer.find(r.buffer.id);
if (it != lastWriterBuffer.end()) add_edge(it->second, i);
lastReadersBuffer[r.buffer.id].push_back(i);
}
// Buffer writes
for (const auto &w: p.bufferWrites)
{
if (!w.buffer.valid()) continue;
auto itW = lastWriterBuffer.find(w.buffer.id);
if (itW != lastWriterBuffer.end()) add_edge(itW->second, i); // WAW
auto itR = lastReadersBuffer.find(w.buffer.id);
if (itR != lastReadersBuffer.end())
{
for (int rIdx: itR->second) add_edge(rIdx, i); // WAR
itR->second.clear();
}
lastWriterBuffer[w.buffer.id] = i;
}
}
// Kahn's algorithm
std::queue<int> q;
for (int i = 0; i < n; ++i) if (indeg[i] == 0) q.push(i);
std::vector<int> order;
order.reserve(n);
while (!q.empty())
{
int u = q.front();
q.pop();
order.push_back(u);
for (int v: adjSet[u])
{
if (--indeg[v] == 0) q.push(v);
}
}
if (static_cast<int>(order.size()) == n)
{
// Reorder passes by topological order
std::vector<Pass> sorted;
sorted.reserve(n);
for (int idx: order) sorted.push_back(std::move(_passes[idx]));
_passes = std::move(sorted);
}
else
{
// Cycle detected; keep insertion order but still compute barriers
}
}
struct ImageState
{
bool initialized = false;
VkImageLayout layout = VK_IMAGE_LAYOUT_UNDEFINED;
VkPipelineStageFlags2 stage = VK_PIPELINE_STAGE_2_NONE;
VkAccessFlags2 access = 0;
};
struct BufferState
{
bool initialized = false;
VkPipelineStageFlags2 stage = VK_PIPELINE_STAGE_2_NONE;
VkAccessFlags2 access = 0;
};
auto is_depth_format = [](VkFormat format) {
switch (format)
{
case VK_FORMAT_D16_UNORM:
case VK_FORMAT_D16_UNORM_S8_UINT:
case VK_FORMAT_D24_UNORM_S8_UINT:
case VK_FORMAT_D32_SFLOAT:
case VK_FORMAT_D32_SFLOAT_S8_UINT:
return true;
default:
return false;
}
};
auto usage_requires_flag = [](RGImageUsage usage) -> VkImageUsageFlags {
switch (usage)
{
case RGImageUsage::SampledFragment:
case RGImageUsage::SampledCompute:
return VK_IMAGE_USAGE_SAMPLED_BIT;
case RGImageUsage::TransferSrc:
return VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
case RGImageUsage::TransferDst:
return VK_IMAGE_USAGE_TRANSFER_DST_BIT;
case RGImageUsage::ColorAttachment:
return VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
case RGImageUsage::DepthAttachment:
return VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
case RGImageUsage::ComputeWrite:
return VK_IMAGE_USAGE_STORAGE_BIT;
case RGImageUsage::Present:
return 0; // swapchain image
default:
return 0;
}
};
struct ImageUsageInfo
{
VkPipelineStageFlags2 stage;
VkAccessFlags2 access;
VkImageLayout layout;
};
struct BufferUsageInfo
{
VkPipelineStageFlags2 stage;
VkAccessFlags2 access;
};
auto usage_info_image = [](RGImageUsage usage) {
ImageUsageInfo info{};
switch (usage)
{
case RGImageUsage::SampledFragment:
info.stage = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
info.access = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
break;
case RGImageUsage::SampledCompute:
info.stage = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT;
info.access = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
break;
case RGImageUsage::TransferSrc:
info.stage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
info.access = VK_ACCESS_2_TRANSFER_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
break;
case RGImageUsage::TransferDst:
info.stage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
info.access = VK_ACCESS_2_TRANSFER_WRITE_BIT;
info.layout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
break;
case RGImageUsage::ColorAttachment:
info.stage = VK_PIPELINE_STAGE_2_COLOR_ATTACHMENT_OUTPUT_BIT;
info.access = VK_ACCESS_2_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_2_COLOR_ATTACHMENT_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
break;
case RGImageUsage::DepthAttachment:
info.stage = VK_PIPELINE_STAGE_2_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_2_LATE_FRAGMENT_TESTS_BIT;
info.access = VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT |
VK_ACCESS_2_DEPTH_STENCIL_ATTACHMENT_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL;
break;
case RGImageUsage::ComputeWrite:
info.stage = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT;
info.access = VK_ACCESS_2_SHADER_STORAGE_READ_BIT | VK_ACCESS_2_SHADER_STORAGE_WRITE_BIT;
info.layout = VK_IMAGE_LAYOUT_GENERAL;
break;
case RGImageUsage::Present:
info.stage = VK_PIPELINE_STAGE_2_BOTTOM_OF_PIPE_BIT;
info.access = VK_ACCESS_2_MEMORY_READ_BIT;
info.layout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
break;
default:
info.stage = VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT;
info.access = VK_ACCESS_2_MEMORY_READ_BIT | VK_ACCESS_2_MEMORY_WRITE_BIT;
info.layout = VK_IMAGE_LAYOUT_GENERAL;
break;
}
return info;
};
auto usage_info_buffer = [](RGBufferUsage usage) {
BufferUsageInfo info{};
switch (usage)
{
case RGBufferUsage::TransferSrc:
info.stage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
info.access = VK_ACCESS_2_TRANSFER_READ_BIT;
break;
case RGBufferUsage::TransferDst:
info.stage = VK_PIPELINE_STAGE_2_TRANSFER_BIT;
info.access = VK_ACCESS_2_TRANSFER_WRITE_BIT;
break;
case RGBufferUsage::VertexRead:
info.stage = VK_PIPELINE_STAGE_2_VERTEX_INPUT_BIT;
info.access = VK_ACCESS_2_VERTEX_ATTRIBUTE_READ_BIT;
break;
case RGBufferUsage::IndexRead:
info.stage = VK_PIPELINE_STAGE_2_INDEX_INPUT_BIT;
info.access = VK_ACCESS_2_INDEX_READ_BIT;
break;
case RGBufferUsage::UniformRead:
info.stage = VK_PIPELINE_STAGE_2_ALL_GRAPHICS_BIT | VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT;
info.access = VK_ACCESS_2_UNIFORM_READ_BIT;
break;
case RGBufferUsage::StorageRead:
info.stage = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT | VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
info.access = VK_ACCESS_2_SHADER_STORAGE_READ_BIT;
break;
case RGBufferUsage::StorageReadWrite:
info.stage = VK_PIPELINE_STAGE_2_COMPUTE_SHADER_BIT | VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT;
info.access = VK_ACCESS_2_SHADER_STORAGE_READ_BIT | VK_ACCESS_2_SHADER_STORAGE_WRITE_BIT;
break;
case RGBufferUsage::IndirectArgs:
info.stage = VK_PIPELINE_STAGE_2_DRAW_INDIRECT_BIT;
info.access = VK_ACCESS_2_INDIRECT_COMMAND_READ_BIT;
break;
default:
info.stage = VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT;
info.access = VK_ACCESS_2_MEMORY_READ_BIT | VK_ACCESS_2_MEMORY_WRITE_BIT;
break;
}
return info;
};
auto buffer_usage_requires_flag = [](RGBufferUsage usage) -> VkBufferUsageFlags {
switch (usage)
{
case RGBufferUsage::TransferSrc: return VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
case RGBufferUsage::TransferDst: return VK_BUFFER_USAGE_TRANSFER_DST_BIT;
case RGBufferUsage::VertexRead: return VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
case RGBufferUsage::IndexRead: return VK_BUFFER_USAGE_INDEX_BUFFER_BIT;
case RGBufferUsage::UniformRead: return VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;
case RGBufferUsage::StorageRead:
case RGBufferUsage::StorageReadWrite: return VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;
case RGBufferUsage::IndirectArgs: return VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT;
default: return 0;
}
};
const size_t imageCount = _resources.image_count();
const size_t bufferCount = _resources.buffer_count();
std::vector<ImageState> imageStates(imageCount);
std::vector<BufferState> bufferStates(bufferCount);
// Track first/last use for lifetime diagnostics and future aliasing
std::vector<int> imageFirst(imageCount, -1), imageLast(imageCount, -1);
std::vector<int> bufferFirst(bufferCount, -1), bufferLast(bufferCount, -1);
for (auto &pass: _passes)
{
pass.preImageBarriers.clear();
pass.preBufferBarriers.clear();
if (!pass.enabled) { continue; }
std::unordered_map<uint32_t, RGImageUsage> desiredImageUsages;
desiredImageUsages.reserve(pass.imageReads.size() + pass.imageWrites.size());
for (const auto &access: pass.imageReads)
{
if (!access.image.valid()) continue;
desiredImageUsages.emplace(access.image.id, access.usage);
if (access.image.id < imageCount)
{
if (imageFirst[access.image.id] == -1) imageFirst[access.image.id] = (int)(&pass - _passes.data());
imageLast[access.image.id] = (int)(&pass - _passes.data());
}
}
for (const auto &access: pass.imageWrites)
{
if (!access.image.valid()) continue;
desiredImageUsages[access.image.id] = access.usage;
if (access.image.id < imageCount)
{
if (imageFirst[access.image.id] == -1) imageFirst[access.image.id] = (int)(&pass - _passes.data());
imageLast[access.image.id] = (int)(&pass - _passes.data());
}
}
// Validation: basic layout/format/usage checks for images used by this pass
// Also build barriers
for (const auto &[id, usage]: desiredImageUsages)
{
if (id >= imageCount) continue;
ImageUsageInfo desired = usage_info_image(usage);
ImageState prev = imageStates[id];
VkImageLayout prevLayout = prev.initialized ? prev.layout : _resources.initial_layout(RGImageHandle{id});
VkPipelineStageFlags2 srcStage = prev.initialized
? prev.stage
: (prevLayout == VK_IMAGE_LAYOUT_UNDEFINED
? VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT
: VK_PIPELINE_STAGE_2_ALL_COMMANDS_BIT);
VkAccessFlags2 srcAccess = prev.initialized
? prev.access
: (prevLayout == VK_IMAGE_LAYOUT_UNDEFINED
? VkAccessFlags2{0}
: (VK_ACCESS_2_MEMORY_READ_BIT | VK_ACCESS_2_MEMORY_WRITE_BIT));
bool needBarrier = !prev.initialized
|| prevLayout != desired.layout
|| prev.stage != desired.stage
|| prev.access != desired.access;
if (needBarrier)
{
VkImageMemoryBarrier2 barrier{.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2};
barrier.srcStageMask = srcStage;
barrier.srcAccessMask = srcAccess;
barrier.dstStageMask = desired.stage;
barrier.dstAccessMask = desired.access;
barrier.oldLayout = prevLayout;
barrier.newLayout = desired.layout;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
const RGImageRecord *rec = _resources.get_image(RGImageHandle{id});
barrier.image = rec ? rec->image : VK_NULL_HANDLE;
VkImageAspectFlags aspect = VK_IMAGE_ASPECT_COLOR_BIT;
if (usage == RGImageUsage::DepthAttachment || (rec && is_depth_format(rec->format)))
{
aspect = VK_IMAGE_ASPECT_DEPTH_BIT;
}
barrier.subresourceRange = vkinit::image_subresource_range(aspect);
pass.preImageBarriers.push_back(barrier);
// Validation warnings (logged via fmt; non-fatal):
if (rec)
{
// Color attachments should not be depth formats and vice versa
if (usage == RGImageUsage::ColorAttachment && is_depth_format(rec->format))
{
fmt::println("[RG][Warn] Pass '{}' binds depth-format image '{}' as color attachment.",
pass.name, rec->name);
}
if (usage == RGImageUsage::DepthAttachment && !is_depth_format(rec->format))
{
fmt::println("[RG][Warn] Pass '{}' binds non-depth image '{}' as depth attachment.",
pass.name, rec->name);
}
// Usage flag sanity for transients we created
if (!rec->imported)
{
VkImageUsageFlags need = usage_requires_flag(usage);
if ((need & rec->creationUsage) != need)
{
fmt::println("[RG][Warn] Image '{}' used as '{}' but created without needed usage flags (0x{:x}).",
rec->name, (int)usage, (unsigned)need);
}
}
}
}
imageStates[id].initialized = true;
imageStates[id].layout = desired.layout;
imageStates[id].stage = desired.stage;
imageStates[id].access = desired.access;
}
if (bufferCount == 0) continue;
std::unordered_map<uint32_t, RGBufferUsage> desiredBufferUsages;
desiredBufferUsages.reserve(pass.bufferReads.size() + pass.bufferWrites.size());
for (const auto &access: pass.bufferReads)
{
if (!access.buffer.valid()) continue;
desiredBufferUsages.emplace(access.buffer.id, access.usage);
if (access.buffer.id < bufferCount)
{
if (bufferFirst[access.buffer.id] == -1) bufferFirst[access.buffer.id] = (int)(&pass - _passes.data());
bufferLast[access.buffer.id] = (int)(&pass - _passes.data());
}
}
for (const auto &access: pass.bufferWrites)
{
if (!access.buffer.valid()) continue;
desiredBufferUsages[access.buffer.id] = access.usage;
if (access.buffer.id < bufferCount)
{
if (bufferFirst[access.buffer.id] == -1) bufferFirst[access.buffer.id] = (int)(&pass - _passes.data());
bufferLast[access.buffer.id] = (int)(&pass - _passes.data());
}
}
for (const auto &[id, usage]: desiredBufferUsages)
{
if (id >= bufferCount) continue;
BufferUsageInfo desired = usage_info_buffer(usage);
BufferState prev = bufferStates[id];
VkPipelineStageFlags2 srcStage = prev.initialized
? prev.stage
: _resources.initial_stage(RGBufferHandle{id});
if (srcStage == VK_PIPELINE_STAGE_2_NONE)
{
srcStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
}
VkAccessFlags2 srcAccess = prev.initialized
? prev.access
: _resources.initial_access(RGBufferHandle{id});
bool needBarrier = !prev.initialized
|| prev.stage != desired.stage
|| prev.access != desired.access;
if (needBarrier)
{
VkBufferMemoryBarrier2 barrier{.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER_2};
barrier.srcStageMask = srcStage;
barrier.srcAccessMask = srcAccess;
barrier.dstStageMask = desired.stage;
barrier.dstAccessMask = desired.access;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
const RGBufferRecord *rec = _resources.get_buffer(RGBufferHandle{id});
barrier.buffer = rec ? rec->buffer : VK_NULL_HANDLE;
barrier.offset = 0;
barrier.size = rec ? rec->size : VK_WHOLE_SIZE;
pass.preBufferBarriers.push_back(barrier);
if (rec && !rec->imported)
{
VkBufferUsageFlags need = buffer_usage_requires_flag(usage);
if ((need & rec->usage) != need)
{
fmt::println("[RG][Warn] Buffer '{}' used as '{}' but created without needed usage flags (0x{:x}).",
rec->name, (int)usage, (unsigned)need);
}
}
}
bufferStates[id].initialized = true;
bufferStates[id].stage = desired.stage;
bufferStates[id].access = desired.access;
}
}
// Store lifetimes into records for diagnostics/aliasing
for (size_t i = 0; i < imageCount; ++i)
{
if (auto *rec = _resources.get_image(RGImageHandle{static_cast<uint32_t>(i)}))
{
rec->firstUse = imageFirst[i];
rec->lastUse = imageLast[i];
}
}
for (size_t i = 0; i < bufferCount; ++i)
{
if (auto *rec = _resources.get_buffer(RGBufferHandle{static_cast<uint32_t>(i)}))
{
rec->firstUse = bufferFirst[i];
rec->lastUse = bufferLast[i];
}
}
return true;
}
void RenderGraph::execute(VkCommandBuffer cmd)
{
for (size_t passIndex = 0; passIndex < _passes.size(); ++passIndex)
{
auto &p = _passes[passIndex];
if (!p.enabled) continue;
// Debug label per pass
if (_context && _context->getDevice())
{
char labelName[128];
std::snprintf(labelName, sizeof(labelName), "RG: %s", p.name.c_str());
vkdebug::cmd_begin_label(_context->getDevice()->device(), cmd, labelName);
}
if (!p.preImageBarriers.empty() || !p.preBufferBarriers.empty())
{
VkDependencyInfo dep{.sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO};
dep.imageMemoryBarrierCount = static_cast<uint32_t>(p.preImageBarriers.size());
dep.pImageMemoryBarriers = p.preImageBarriers.empty() ? nullptr : p.preImageBarriers.data();
dep.bufferMemoryBarrierCount = static_cast<uint32_t>(p.preBufferBarriers.size());
dep.pBufferMemoryBarriers = p.preBufferBarriers.empty() ? nullptr : p.preBufferBarriers.data();
vkCmdPipelineBarrier2(cmd, &dep);
}
// Begin dynamic rendering if the pass declared attachments
bool doRendering = (!p.colorAttachments.empty() || p.hasDepth);
if (doRendering)
{
std::vector<VkRenderingAttachmentInfo> colorInfos;
colorInfos.reserve(p.colorAttachments.size());
VkRenderingAttachmentInfo depthInfo{};
bool hasDepth = false;
// Choose renderArea as the min of all attachment extents and the desired draw extent
VkExtent2D chosenExtent{_context->getDrawExtent()};
auto clamp_min = [](VkExtent2D a, VkExtent2D b) {
return VkExtent2D{std::min(a.width, b.width), std::min(a.height, b.height)};
};
// Resolve color attachments
VkExtent2D firstColorExtent{0,0};
bool warnedExtentMismatch = false;
for (const auto &a: p.colorAttachments)
{
const RGImageRecord *rec = _resources.get_image(a.image);
if (!rec || rec->imageView == VK_NULL_HANDLE) continue;
VkClearValue *pClear = nullptr;
VkClearValue clear = a.clear;
if (a.clearOnLoad) pClear = &clear;
VkRenderingAttachmentInfo info = vkinit::attachment_info(rec->imageView, pClear,
VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
if (!a.store) info.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
colorInfos.push_back(info);
if (rec->extent.width && rec->extent.height) chosenExtent = clamp_min(chosenExtent, rec->extent);
if (firstColorExtent.width == 0 && firstColorExtent.height == 0)
{
firstColorExtent = rec->extent;
}
else if (!warnedExtentMismatch && (rec->extent.width != firstColorExtent.width || rec->extent.height != firstColorExtent.height))
{
fmt::println("[RG][Warn] Pass '{}' has color attachments with mismatched extents ({}x{} vs {}x{}). Using min().",
p.name,
firstColorExtent.width, firstColorExtent.height,
rec->extent.width, rec->extent.height);
warnedExtentMismatch = true;
}
}
if (p.hasDepth)
{
const RGImageRecord *rec = _resources.get_image(p.depthAttachment.image);
if (rec && rec->imageView != VK_NULL_HANDLE)
{
depthInfo = vkinit::depth_attachment_info(rec->imageView, VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL);
if (p.depthAttachment.clearOnLoad) depthInfo.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
else depthInfo.loadOp = VK_ATTACHMENT_LOAD_OP_LOAD;
if (!p.depthAttachment.store) depthInfo.storeOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
hasDepth = true;
if (rec->extent.width && rec->extent.height) chosenExtent = clamp_min(chosenExtent, rec->extent);
}
}
VkRenderingInfo ri{};
ri.sType = VK_STRUCTURE_TYPE_RENDERING_INFO;
ri.renderArea = VkRect2D{VkOffset2D{0, 0}, chosenExtent};
ri.layerCount = 1;
ri.colorAttachmentCount = static_cast<uint32_t>(colorInfos.size());
ri.pColorAttachments = colorInfos.empty() ? nullptr : colorInfos.data();
ri.pDepthAttachment = hasDepth ? &depthInfo : nullptr;
ri.pStencilAttachment = nullptr;
vkCmdBeginRendering(cmd, &ri);
}
if (p.record)
{
RGPassResources res(&_resources);
p.record(cmd, res, _context);
}
if (doRendering)
{
vkCmdEndRendering(cmd);
}
if (_context && _context->getDevice())
{
vkdebug::cmd_end_label(_context->getDevice()->device(), cmd);
}
}
}
// --- Import helpers ---
void RenderGraph::add_present_chain(RGImageHandle sourceDraw,
RGImageHandle targetSwapchain,
std::function<void(RenderGraph &)> appendExtra)
{
if (!sourceDraw.valid() || !targetSwapchain.valid()) return;
add_pass(
"CopyToSwapchain",
RGPassType::Transfer,
[sourceDraw, targetSwapchain](RGPassBuilder &builder, EngineContext *) {
builder.read(sourceDraw, RGImageUsage::TransferSrc);
builder.write(targetSwapchain, RGImageUsage::TransferDst);
},
[sourceDraw, targetSwapchain](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx) {
VkImage src = res.image(sourceDraw);
VkImage dst = res.image(targetSwapchain);
if (src == VK_NULL_HANDLE || dst == VK_NULL_HANDLE) return;
vkutil::copy_image_to_image(cmd, src, dst, ctx->getDrawExtent(), ctx->getSwapchain()->swapchainExtent());
});
if (appendExtra)
{
appendExtra(*this);
}
add_pass(
"PreparePresent",
RGPassType::Transfer,
[targetSwapchain](RGPassBuilder &builder, EngineContext *) {
builder.write(targetSwapchain, RGImageUsage::Present);
},
[](VkCommandBuffer, const RGPassResources &, EngineContext *) {
});
}
RGImageHandle RenderGraph::import_draw_image()
{
RGImportedImageDesc d{};
d.name = "drawImage";
d.image = _context->getSwapchain()->drawImage().image;
d.imageView = _context->getSwapchain()->drawImage().imageView;
d.format = _context->getSwapchain()->drawImage().imageFormat;
d.extent = _context->getDrawExtent();
d.currentLayout = VK_IMAGE_LAYOUT_GENERAL;
return import_image(d);
}
// --- Debug helpers ---
void RenderGraph::debug_get_passes(std::vector<RGDebugPassInfo> &out) const
{
out.clear();
out.reserve(_passes.size());
for (const auto &p : _passes)
{
RGDebugPassInfo info{};
info.name = p.name;
info.type = p.type;
info.enabled = p.enabled;
info.imageReads = static_cast<uint32_t>(p.imageReads.size());
info.imageWrites = static_cast<uint32_t>(p.imageWrites.size());
info.bufferReads = static_cast<uint32_t>(p.bufferReads.size());
info.bufferWrites = static_cast<uint32_t>(p.bufferWrites.size());
info.colorAttachmentCount = static_cast<uint32_t>(p.colorAttachments.size());
info.hasDepth = p.hasDepth;
out.push_back(std::move(info));
}
}
void RenderGraph::debug_get_images(std::vector<RGDebugImageInfo> &out) const
{
out.clear();
out.reserve(_resources.image_count());
for (uint32_t i = 0; i < _resources.image_count(); ++i)
{
const RGImageRecord *rec = _resources.get_image(RGImageHandle{i});
if (!rec) continue;
RGDebugImageInfo info{};
info.id = i;
info.name = rec->name;
info.imported = rec->imported;
info.format = rec->format;
info.extent = rec->extent;
info.creationUsage = rec->creationUsage;
info.firstUse = rec->firstUse;
info.lastUse = rec->lastUse;
out.push_back(std::move(info));
}
}
void RenderGraph::debug_get_buffers(std::vector<RGDebugBufferInfo> &out) const
{
out.clear();
out.reserve(_resources.buffer_count());
for (uint32_t i = 0; i < _resources.buffer_count(); ++i)
{
const RGBufferRecord *rec = _resources.get_buffer(RGBufferHandle{i});
if (!rec) continue;
RGDebugBufferInfo info{};
info.id = i;
info.name = rec->name;
info.imported = rec->imported;
info.size = rec->size;
info.usage = rec->usage;
info.firstUse = rec->firstUse;
info.lastUse = rec->lastUse;
out.push_back(std::move(info));
}
}
RGImageHandle RenderGraph::import_depth_image()
{
RGImportedImageDesc d{};
d.name = "depthImage";
d.image = _context->getSwapchain()->depthImage().image;
d.imageView = _context->getSwapchain()->depthImage().imageView;
d.format = _context->getSwapchain()->depthImage().imageFormat;
d.extent = _context->getDrawExtent();
d.currentLayout = VK_IMAGE_LAYOUT_UNDEFINED;
return import_image(d);
}
RGImageHandle RenderGraph::import_gbuffer_position()
{
RGImportedImageDesc d{};
d.name = "gBuffer.position";
d.image = _context->getSwapchain()->gBufferPosition().image;
d.imageView = _context->getSwapchain()->gBufferPosition().imageView;
d.format = _context->getSwapchain()->gBufferPosition().imageFormat;
d.extent = _context->getDrawExtent();
d.currentLayout = VK_IMAGE_LAYOUT_UNDEFINED;
return import_image(d);
}
RGImageHandle RenderGraph::import_gbuffer_normal()
{
RGImportedImageDesc d{};
d.name = "gBuffer.normal";
d.image = _context->getSwapchain()->gBufferNormal().image;
d.imageView = _context->getSwapchain()->gBufferNormal().imageView;
d.format = _context->getSwapchain()->gBufferNormal().imageFormat;
d.extent = _context->getDrawExtent();
d.currentLayout = VK_IMAGE_LAYOUT_UNDEFINED;
return import_image(d);
}
RGImageHandle RenderGraph::import_gbuffer_albedo()
{
RGImportedImageDesc d{};
d.name = "gBuffer.albedo";
d.image = _context->getSwapchain()->gBufferAlbedo().image;
d.imageView = _context->getSwapchain()->gBufferAlbedo().imageView;
d.format = _context->getSwapchain()->gBufferAlbedo().imageFormat;
d.extent = _context->getDrawExtent();
d.currentLayout = VK_IMAGE_LAYOUT_UNDEFINED;
return import_image(d);
}
RGImageHandle RenderGraph::import_swapchain_image(uint32_t index)
{
RGImportedImageDesc d{};
d.name = "swapchain.image";
const auto &views = _context->getSwapchain()->swapchainImageViews();
const auto &imgs = _context->getSwapchain()->swapchainImages();
d.image = imgs[index];
d.imageView = views[index];
d.format = _context->getSwapchain()->swapchainImageFormat();
d.extent = _context->getSwapchain()->swapchainExtent();
d.currentLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
return import_image(d);
}

142
src/render/rg_graph.h Normal file
View File

@@ -0,0 +1,142 @@
#pragma once
#include <core/vk_types.h>
#include <render/rg_types.h>
#include <render/rg_resources.h>
#include <render/rg_builder.h>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>
class EngineContext;
class RenderGraph
{
public:
void init(EngineContext* ctx);
void clear();
// Import externally owned images (swapchain, drawImage, g-buffers)
RGImageHandle import_image(const RGImportedImageDesc& desc);
// Create transient images owned by the graph for this frame (freed via the frame deletion queue)
RGImageHandle create_image(const RGImageDesc& desc);
// Convenience: create a transient depth image suitable for shadow mapping or depth-only passes
// Format defaults to D32_SFLOAT; usage is depth attachment + sampled so it can be read later.
RGImageHandle create_depth_image(const char* name, VkExtent2D extent, VkFormat format = VK_FORMAT_D32_SFLOAT);
// Buffer import/create helpers
RGBufferHandle import_buffer(const RGImportedBufferDesc& desc);
RGBufferHandle create_buffer(const RGBufferDesc& desc);
// Pass builder API
struct Pass; // fwd
using RecordCallback = std::function<void(VkCommandBuffer cmd, const class RGPassResources& res, EngineContext* ctx)>;
using BuildCallback = std::function<void(class RGPassBuilder& b, EngineContext* ctx)>;
void add_pass(const char* name, RGPassType type, BuildCallback build, RecordCallback record);
// Legacy simple add
void add_pass(const char* name, RGPassType type, RecordCallback record);
// Build internal state for this frame: topologically sort passes and precompute per-pass barriers
bool compile();
// Execute in compiled order (skips disabled passes)
void execute(VkCommandBuffer cmd);
// Convenience import helpers (read from EngineContext::swapchain)
RGImageHandle import_draw_image();
RGImageHandle import_depth_image();
RGImageHandle import_gbuffer_position();
RGImageHandle import_gbuffer_normal();
RGImageHandle import_gbuffer_albedo();
RGImageHandle import_swapchain_image(uint32_t index);
void add_present_chain(RGImageHandle sourceDraw,
RGImageHandle targetSwapchain,
std::function<void(RenderGraph&)> appendExtra = {});
// --- Debug helpers ---
struct RGDebugPassInfo
{
std::string name;
RGPassType type{};
bool enabled = true;
uint32_t imageReads = 0;
uint32_t imageWrites = 0;
uint32_t bufferReads = 0;
uint32_t bufferWrites = 0;
uint32_t colorAttachmentCount = 0;
bool hasDepth = false;
};
struct RGDebugImageInfo
{
uint32_t id{};
std::string name;
bool imported = true;
VkFormat format = VK_FORMAT_UNDEFINED;
VkExtent2D extent{0,0};
VkImageUsageFlags creationUsage = 0;
int firstUse = -1;
int lastUse = -1;
};
struct RGDebugBufferInfo
{
uint32_t id{};
std::string name;
bool imported = true;
VkDeviceSize size = 0;
VkBufferUsageFlags usage = 0;
int firstUse = -1;
int lastUse = -1;
};
size_t pass_count() const { return _passes.size(); }
const char* pass_name(size_t i) const { return i < _passes.size() ? _passes[i].name.c_str() : ""; }
bool pass_enabled(size_t i) const { return i < _passes.size() ? _passes[i].enabled : false; }
void set_pass_enabled(size_t i, bool e) { if (i < _passes.size()) _passes[i].enabled = e; }
void debug_get_passes(std::vector<RGDebugPassInfo>& out) const;
void debug_get_images(std::vector<RGDebugImageInfo>& out) const;
void debug_get_buffers(std::vector<RGDebugBufferInfo>& out) const;
private:
struct ImportedImage
{
RGImportedImageDesc desc;
RGImageHandle handle;
};
struct Pass
{
std::string name;
RGPassType type{};
RecordCallback record;
// Declarations
std::vector<RGPassImageAccess> imageReads;
std::vector<RGPassImageAccess> imageWrites;
std::vector<RGPassBufferAccess> bufferReads;
std::vector<RGPassBufferAccess> bufferWrites;
std::vector<RGAttachmentInfo> colorAttachments;
bool hasDepth = false;
RGAttachmentInfo depthAttachment{};
std::vector<VkImageMemoryBarrier2> preImageBarriers;
std::vector<VkBufferMemoryBarrier2> preBufferBarriers;
// Cached rendering info derived from declared attachments (reserved; execute() currently derives this locally)
bool hasRendering = false;
VkExtent2D renderExtent{};
bool enabled = true;
};
EngineContext* _context = nullptr;
RGResourceRegistry _resources;
std::vector<Pass> _passes;
};
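// Usage sketch (illustrative, not engine code): a minimal per-frame flow, assuming an
// EngineContext* ctx and a recording VkCommandBuffer cmd are available at this point.
// The write_color/write_depth builder calls mirror their use in GeometryPass::register_graph.
//
//   RenderGraph rg;
//   rg.init(ctx);
//   RGImageHandle draw  = rg.import_draw_image();
//   RGImageHandle depth = rg.import_depth_image();
//   rg.add_pass("Example", RGPassType::Graphics,
//       [=](RGPassBuilder &b, EngineContext *) {
//           b.write_color(draw, /*clear*/ true, VkClearValue{});
//           b.write_depth(depth, /*clear*/ true, VkClearValue{}); // reverse-Z: depth clears to 0
//       },
//       [](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx) {
//           // record draw calls here; barriers and vkCmdBeginRendering are handled by the graph
//       });
//   rg.compile();   // topological sort + barrier generation
//   rg.execute(cmd);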

189
src/render/rg_resources.cpp Normal file
View File

@@ -0,0 +1,189 @@
#include <render/rg_resources.h>
#include <core/engine_context.h>
#include <core/vk_resource.h>
#include "frame_resources.h"
void RGResourceRegistry::reset()
{
_images.clear();
_buffers.clear();
_imageLookup.clear();
_bufferLookup.clear();
}
RGImageHandle RGResourceRegistry::add_imported(const RGImportedImageDesc& d)
{
// Deduplicate by VkImage
auto it = _imageLookup.find(d.image);
if (it != _imageLookup.end())
{
auto& rec = _images[it->second];
rec.name = d.name;
rec.image = d.image;
rec.imageView = d.imageView;
rec.format = d.format;
rec.extent = d.extent;
rec.initialLayout = d.currentLayout;
return RGImageHandle{it->second};
}
RGImageRecord rec{};
rec.name = d.name;
rec.imported = true;
rec.image = d.image;
rec.imageView = d.imageView;
rec.format = d.format;
rec.extent = d.extent;
rec.initialLayout = d.currentLayout;
_images.push_back(rec);
uint32_t id = static_cast<uint32_t>(_images.size() - 1);
if (d.image != VK_NULL_HANDLE) _imageLookup[d.image] = id;
return RGImageHandle{ id };
}
RGImageHandle RGResourceRegistry::add_transient(const RGImageDesc& d)
{
RGImageRecord rec{};
rec.name = d.name;
rec.imported = false;
rec.format = d.format;
rec.extent = d.extent;
rec.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
rec.creationUsage = d.usage;
VkExtent3D size{ d.extent.width, d.extent.height, 1 };
rec.allocation = _ctx->getResources()->create_image(size, d.format, d.usage);
rec.image = rec.allocation.image;
rec.imageView = rec.allocation.imageView;
// Cleanup at end of frame
if (_ctx && _ctx->currentFrame)
{
auto img = rec.allocation;
_ctx->currentFrame->_deletionQueue.push_function([ctx=_ctx, img]() {
ctx->getResources()->destroy_image(img);
});
}
_images.push_back(rec);
return RGImageHandle{ static_cast<uint32_t>(_images.size() - 1) };
}
RGBufferHandle RGResourceRegistry::add_imported(const RGImportedBufferDesc& d)
{
// Deduplicate by VkBuffer
auto it = _bufferLookup.find(d.buffer);
if (it != _bufferLookup.end())
{
auto& rec = _buffers[it->second];
rec.name = d.name;
rec.buffer = d.buffer;
rec.size = d.size;
// Keep the earliest known stage/access if already recorded; otherwise take the values provided by this import
if (rec.initialStage == VK_PIPELINE_STAGE_2_NONE) rec.initialStage = d.currentStage;
if (rec.initialAccess == 0) rec.initialAccess = d.currentAccess;
return RGBufferHandle{it->second};
}
RGBufferRecord rec{};
rec.name = d.name;
rec.imported = true;
rec.buffer = d.buffer;
rec.size = d.size;
rec.initialStage = d.currentStage;
rec.initialAccess = d.currentAccess;
_buffers.push_back(rec);
uint32_t id = static_cast<uint32_t>(_buffers.size() - 1);
if (d.buffer != VK_NULL_HANDLE) _bufferLookup[d.buffer] = id;
return RGBufferHandle{ id };
}
RGBufferHandle RGResourceRegistry::add_transient(const RGBufferDesc& d)
{
RGBufferRecord rec{};
rec.name = d.name;
rec.imported = false;
rec.size = d.size;
rec.usage = d.usage;
rec.initialStage = VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
rec.initialAccess = 0;
rec.allocation = _ctx->getResources()->create_buffer(d.size, d.usage, d.memoryUsage);
rec.buffer = rec.allocation.buffer;
if (_ctx && _ctx->currentFrame)
{
auto buf = rec.allocation;
_ctx->currentFrame->_deletionQueue.push_function([ctx=_ctx, buf]() {
ctx->getResources()->destroy_buffer(buf);
});
}
_buffers.push_back(rec);
uint32_t id = static_cast<uint32_t>(_buffers.size() - 1);
if (rec.buffer != VK_NULL_HANDLE) _bufferLookup[rec.buffer] = id;
return RGBufferHandle{ id };
}
const RGImageRecord* RGResourceRegistry::get_image(RGImageHandle h) const
{
if (!h.valid() || h.id >= _images.size()) return nullptr;
return &_images[h.id];
}
RGImageRecord* RGResourceRegistry::get_image(RGImageHandle h)
{
if (!h.valid() || h.id >= _images.size()) return nullptr;
return &_images[h.id];
}
const RGBufferRecord* RGResourceRegistry::get_buffer(RGBufferHandle h) const
{
if (!h.valid() || h.id >= _buffers.size()) return nullptr;
return &_buffers[h.id];
}
RGBufferRecord* RGResourceRegistry::get_buffer(RGBufferHandle h)
{
if (!h.valid() || h.id >= _buffers.size()) return nullptr;
return &_buffers[h.id];
}
VkImageLayout RGResourceRegistry::initial_layout(RGImageHandle h) const
{
const RGImageRecord* rec = get_image(h);
return rec ? rec->initialLayout : VK_IMAGE_LAYOUT_UNDEFINED;
}
VkFormat RGResourceRegistry::image_format(RGImageHandle h) const
{
const RGImageRecord* rec = get_image(h);
return rec ? rec->format : VK_FORMAT_UNDEFINED;
}
VkPipelineStageFlags2 RGResourceRegistry::initial_stage(RGBufferHandle h) const
{
const RGBufferRecord* rec = get_buffer(h);
return rec ? rec->initialStage : VK_PIPELINE_STAGE_2_TOP_OF_PIPE_BIT;
}
VkAccessFlags2 RGResourceRegistry::initial_access(RGBufferHandle h) const
{
const RGBufferRecord* rec = get_buffer(h);
return rec ? rec->initialAccess : VkAccessFlags2{0};
}
RGBufferHandle RGResourceRegistry::find_buffer(VkBuffer buffer) const
{
auto it = _bufferLookup.find(buffer);
if (it == _bufferLookup.end()) return RGBufferHandle{};
return RGBufferHandle{it->second};
}
RGImageHandle RGResourceRegistry::find_image(VkImage image) const
{
auto it = _imageLookup.find(image);
if (it == _imageLookup.end()) return RGImageHandle{};
return RGImageHandle{it->second};
}

90
src/render/rg_resources.h Normal file
View File

@@ -0,0 +1,90 @@
#pragma once
#include <core/vk_types.h>
#include <render/rg_types.h>
#include <string>
#include <vector>
#include <unordered_map>
class EngineContext;
struct RGImageRecord
{
std::string name;
bool imported = true;
// Unified view for either imported or transient
VkImage image = VK_NULL_HANDLE;
VkImageView imageView = VK_NULL_HANDLE;
VkFormat format = VK_FORMAT_UNDEFINED;
VkExtent2D extent{0, 0};
VkImageLayout initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
VkImageUsageFlags creationUsage = 0; // if transient; 0 for imported
// If transient, keep allocation owner for cleanup
AllocatedImage allocation{};
// Lifetime indices within the compiled pass list (for aliasing/debug)
int firstUse = -1;
int lastUse = -1;
};
struct RGBufferRecord
{
std::string name;
bool imported = true;
VkBuffer buffer = VK_NULL_HANDLE;
VkDeviceSize size = 0;
VkBufferUsageFlags usage = 0;
VkPipelineStageFlags2 initialStage = VK_PIPELINE_STAGE_2_NONE;
VkAccessFlags2 initialAccess = 0;
AllocatedBuffer allocation{};
// Lifetime indices (for aliasing/debug)
int firstUse = -1;
int lastUse = -1;
};
class RGResourceRegistry
{
public:
void init(EngineContext* ctx) { _ctx = ctx; }
void reset();
RGImageHandle add_imported(const RGImportedImageDesc& d);
RGImageHandle add_transient(const RGImageDesc& d);
RGBufferHandle add_imported(const RGImportedBufferDesc& d);
RGBufferHandle add_transient(const RGBufferDesc& d);
// Lookup existing handles by raw Vulkan objects (deduplicates imports)
RGBufferHandle find_buffer(VkBuffer buffer) const;
RGImageHandle find_image(VkImage image) const;
const RGImageRecord* get_image(RGImageHandle h) const;
RGImageRecord* get_image(RGImageHandle h);
const RGBufferRecord* get_buffer(RGBufferHandle h) const;
RGBufferRecord* get_buffer(RGBufferHandle h);
size_t image_count() const { return _images.size(); }
size_t buffer_count() const { return _buffers.size(); }
VkImageLayout initial_layout(RGImageHandle h) const;
VkFormat image_format(RGImageHandle h) const;
VkPipelineStageFlags2 initial_stage(RGBufferHandle h) const;
VkAccessFlags2 initial_access(RGBufferHandle h) const;
private:
EngineContext* _ctx = nullptr;
std::vector<RGImageRecord> _images;
std::vector<RGBufferRecord> _buffers;
// Reverse lookup to avoid duplicate imports of the same VkBuffer/VkImage
std::unordered_map<VkImage, uint32_t> _imageLookup;
std::unordered_map<VkBuffer, uint32_t> _bufferLookup;
};
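// Note (illustrative sketch): imports are deduplicated by raw Vulkan handle, so importing the
// same VkImage or VkBuffer twice yields the same id, and find_image()/find_buffer() recover it.
//
//   RGImageHandle a = registry.add_imported(desc);   // first import of desc.image
//   RGImageHandle b = registry.add_imported(desc);   // same VkImage -> same handle id
//   assert(a.id == b.id && registry.find_image(desc.image).id == a.id);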

101
src/render/rg_types.h Normal file
View File

@@ -0,0 +1,101 @@
#pragma once
#include <core/vk_types.h>
#include <string>
#include <vector>
// Lightweight, initial Render Graph types. These will expand as we migrate passes.
enum class RGPassType
{
Graphics,
Compute,
Transfer
};
enum class RGImageUsage
{
// Read usages
SampledFragment,
SampledCompute,
TransferSrc,
// Write usages
ColorAttachment,
DepthAttachment,
ComputeWrite,
TransferDst,
// Terminal
Present
};
enum class RGBufferUsage
{
TransferSrc,
TransferDst,
VertexRead,
IndexRead,
UniformRead,
StorageRead,
StorageReadWrite,
IndirectArgs
};
struct RGImageHandle
{
uint32_t id = 0xFFFFFFFFu;
bool valid() const { return id != 0xFFFFFFFFu; }
explicit operator bool() const { return valid(); }
};
struct RGBufferHandle
{
uint32_t id = 0xFFFFFFFFu;
bool valid() const { return id != 0xFFFFFFFFu; }
explicit operator bool() const { return valid(); }
};
struct RGImportedImageDesc
{
std::string name;
VkImage image = VK_NULL_HANDLE;
VkImageView imageView = VK_NULL_HANDLE;
VkFormat format = VK_FORMAT_UNDEFINED;
VkExtent2D extent{0, 0};
VkImageLayout currentLayout = VK_IMAGE_LAYOUT_UNDEFINED; // layout at graph begin
};
struct RGImportedBufferDesc
{
std::string name;
VkBuffer buffer = VK_NULL_HANDLE;
VkDeviceSize size = 0;
VkPipelineStageFlags2 currentStage = VK_PIPELINE_STAGE_2_NONE;
VkAccessFlags2 currentAccess = 0;
};
struct RGImageDesc
{
std::string name;
VkFormat format = VK_FORMAT_UNDEFINED;
VkExtent2D extent{0, 0};
VkImageUsageFlags usage = 0; // creation usage mask; graph sets layouts per-pass
};
struct RGBufferDesc
{
std::string name;
VkDeviceSize size = 0;
VkBufferUsageFlags usage = 0;
VmaMemoryUsage memoryUsage = VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
};
// Simple attachment info for dynamic rendering, with basic load (clear-on-load) and store control.
struct RGAttachmentInfo
{
RGImageHandle image;
VkClearValue clear{}; // default 0
bool clearOnLoad = false; // if true, use clear; else load
bool store = true; // store results
};
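// Example (illustrative): filling an RGImageDesc for a transient render target passed to
// RenderGraph::create_image(). The name, format, and sizes below are placeholder assumptions.
//
//   RGImageDesc desc{};
//   desc.name   = "bloom.half";
//   desc.format = VK_FORMAT_R16G16B16A16_SFLOAT;
//   desc.extent = VkExtent2D{ width / 2, height / 2 };
//   desc.usage  = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
//   RGImageHandle bloom = graph.create_image(desc);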

126
src/render/vk_materials.cpp Normal file
View File

@@ -0,0 +1,126 @@
#include "vk_materials.h"
#include "core/vk_engine.h"
#include "render/vk_pipelines.h"
#include "core/vk_initializers.h"
#include "core/vk_pipeline_manager.h"
#include "core/asset_manager.h"
namespace vkutil { bool load_shader_module(const char*, VkDevice, VkShaderModule*); }
void GLTFMetallic_Roughness::build_pipelines(VulkanEngine *engine)
{
VkPushConstantRange matrixRange{};
matrixRange.offset = 0;
matrixRange.size = sizeof(GPUDrawPushConstants);
matrixRange.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
DescriptorLayoutBuilder layoutBuilder;
layoutBuilder.add_binding(0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
layoutBuilder.add_binding(1, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
layoutBuilder.add_binding(2, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
materialLayout = layoutBuilder.build(engine->_deviceManager->device(),
VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT);
VkDescriptorSetLayout layouts[] = {
engine->_descriptorManager->gpuSceneDataLayout(),
materialLayout
};
// Register pipelines with the central PipelineManager
GraphicsPipelineCreateInfo opaqueInfo{};
opaqueInfo.vertexShaderPath = engine->_context->getAssets()->shaderPath("mesh.vert.spv");
opaqueInfo.fragmentShaderPath = engine->_context->getAssets()->shaderPath("mesh.frag.spv");
opaqueInfo.setLayouts.assign(std::begin(layouts), std::end(layouts));
opaqueInfo.pushConstants = {matrixRange};
opaqueInfo.configure = [engine](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_blending();
// Reverse-Z depth test configuration
b.enable_depthtest(true, VK_COMPARE_OP_GREATER_OR_EQUAL);
b.set_color_attachment_format(engine->_swapchainManager->drawImage().imageFormat);
b.set_depth_format(engine->_swapchainManager->depthImage().imageFormat);
};
engine->_pipelineManager->registerGraphics("mesh.opaque", opaqueInfo);
GraphicsPipelineCreateInfo transparentInfo = opaqueInfo;
transparentInfo.configure = [engine](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
// Physically-based transparency uses standard alpha blending
b.enable_blending_alphablend();
// Transparent pass: keep reverse-Z test (no writes)
b.enable_depthtest(false, VK_COMPARE_OP_GREATER_OR_EQUAL);
b.set_color_attachment_format(engine->_swapchainManager->drawImage().imageFormat);
b.set_depth_format(engine->_swapchainManager->depthImage().imageFormat);
};
engine->_pipelineManager->registerGraphics("mesh.transparent", transparentInfo);
GraphicsPipelineCreateInfo gbufferInfo{};
gbufferInfo.vertexShaderPath = engine->_context->getAssets()->shaderPath("mesh.vert.spv");
gbufferInfo.fragmentShaderPath = engine->_context->getAssets()->shaderPath("gbuffer.frag.spv");
gbufferInfo.setLayouts.assign(std::begin(layouts), std::end(layouts));
gbufferInfo.pushConstants = {matrixRange};
gbufferInfo.configure = [engine](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_blending();
// GBuffer uses reverse-Z depth
b.enable_depthtest(true, VK_COMPARE_OP_GREATER_OR_EQUAL);
VkFormat gFormats[] = {
engine->_swapchainManager->gBufferPosition().imageFormat,
engine->_swapchainManager->gBufferNormal().imageFormat,
engine->_swapchainManager->gBufferAlbedo().imageFormat
};
b.set_color_attachment_formats(std::span<VkFormat>(gFormats, 3));
b.set_depth_format(engine->_swapchainManager->depthImage().imageFormat);
};
engine->_pipelineManager->registerGraphics("mesh.gbuffer", gbufferInfo);
engine->_pipelineManager->getMaterialPipeline("mesh.opaque", opaquePipeline);
engine->_pipelineManager->getMaterialPipeline("mesh.transparent", transparentPipeline);
engine->_pipelineManager->getMaterialPipeline("mesh.gbuffer", gBufferPipeline);
}
void GLTFMetallic_Roughness::clear_resources(VkDevice device) const
{
vkDestroyDescriptorSetLayout(device, materialLayout, nullptr);
}
MaterialInstance GLTFMetallic_Roughness::write_material(VkDevice device, MaterialPass pass,
const MaterialResources &resources,
DescriptorAllocatorGrowable &descriptorAllocator)
{
MaterialInstance matData{};
matData.passType = pass;
if (pass == MaterialPass::Transparent)
{
matData.pipeline = &transparentPipeline;
}
else
{
matData.pipeline = &gBufferPipeline;
}
matData.materialSet = descriptorAllocator.allocate(device, materialLayout);
writer.clear();
writer.write_buffer(0, resources.dataBuffer, sizeof(MaterialConstants), resources.dataBufferOffset,
VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.write_image(1, resources.colorImage.imageView, resources.colorSampler,
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.write_image(2, resources.metalRoughImage.imageView, resources.metalRoughSampler,
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.update_set(device, matData.materialSet);
return matData;
}

42
src/render/vk_materials.h Normal file
View File

@@ -0,0 +1,42 @@
#pragma once
#include <core/vk_types.h>
#include <core/vk_descriptors.h>
class VulkanEngine;
struct GLTFMetallic_Roughness
{
MaterialPipeline opaquePipeline;
MaterialPipeline transparentPipeline;
MaterialPipeline gBufferPipeline;
VkDescriptorSetLayout materialLayout;
struct MaterialConstants
{
glm::vec4 colorFactors;
glm::vec4 metal_rough_factors;
glm::vec4 extra[14];
};
struct MaterialResources
{
AllocatedImage colorImage;
VkSampler colorSampler;
AllocatedImage metalRoughImage;
VkSampler metalRoughSampler;
VkBuffer dataBuffer;
uint32_t dataBufferOffset;
};
DescriptorWriter writer;
void build_pipelines(VulkanEngine *engine);
void clear_resources(VkDevice device) const;
MaterialInstance write_material(VkDevice device, MaterialPass pass, const MaterialResources &resources,
DescriptorAllocatorGrowable &descriptorAllocator);
};
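// Usage sketch (illustrative): writing one material instance at asset-load time. The images,
// samplers, constants buffer, and the `pass` value are placeholders for whatever the loader provides.
//
//   GLTFMetallic_Roughness::MaterialResources res{};
//   res.colorImage        = albedoImage;
//   res.colorSampler      = linearSampler;
//   res.metalRoughImage   = metalRoughImage;
//   res.metalRoughSampler = linearSampler;
//   res.dataBuffer        = constantsBuffer.buffer;
//   res.dataBufferOffset  = 0;
//   MaterialInstance inst = material.write_material(device, pass, res, descriptorAllocator);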

244
src/render/vk_pipelines.cpp Normal file
View File

@@ -0,0 +1,244 @@
#include <render/vk_pipelines.h>
#include <fstream>
#include <core/vk_initializers.h>
bool vkutil::load_shader_module(const char *filePath, VkDevice device, VkShaderModule *outShaderModule)
{
std::ifstream file(filePath, std::ios::ate | std::ios::binary);
if (!file.is_open())
{
return false;
}
size_t fileSize = (size_t) file.tellg();
std::vector<uint32_t> buffer(fileSize / sizeof(uint32_t));
file.seekg(0);
file.read((char *) buffer.data(), fileSize);
file.close();
VkShaderModuleCreateInfo createInfo = {};
createInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
createInfo.pNext = nullptr;
createInfo.codeSize = buffer.size() * sizeof(uint32_t);
createInfo.pCode = buffer.data();
VkShaderModule shaderModule;
if (vkCreateShaderModule(device, &createInfo, nullptr, &shaderModule) != VK_SUCCESS)
{
return false;
}
*outShaderModule = shaderModule;
return true;
}
void PipelineBuilder::clear()
{
_inputAssembly = {.sType=VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO};
_rasterizer = {.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO};
_colorBlendAttachment = {};
_multisampling = {.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO};
_pipelineLayout = {};
_depthStencil = {.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO};
_renderInfo = {.sType = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO};
_shaderStages.clear();
_colorAttachmentFormats.clear();
}
VkPipeline PipelineBuilder::build_pipeline(VkDevice device)
{
VkPipelineViewportStateCreateInfo viewportState = {};
viewportState.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO;
viewportState.pNext = nullptr;
viewportState.viewportCount = 1;
viewportState.scissorCount = 1;
VkPipelineColorBlendStateCreateInfo colorBlending = {};
colorBlending.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
colorBlending.pNext = nullptr;
colorBlending.logicOpEnable = VK_FALSE;
colorBlending.logicOp = VK_LOGIC_OP_COPY;
// For multiple color attachments (e.g., G-Buffer), we must provide one blend state per attachment.
// Depth-only pipelines are allowed (0 color attachments).
std::vector<VkPipelineColorBlendAttachmentState> blendAttachments;
uint32_t colorAttachmentCount = (uint32_t)_colorAttachmentFormats.size();
if (colorAttachmentCount > 0)
{
blendAttachments.assign(colorAttachmentCount, _colorBlendAttachment);
colorBlending.attachmentCount = colorAttachmentCount;
colorBlending.pAttachments = blendAttachments.data();
}
else
{
colorBlending.attachmentCount = 0;
colorBlending.pAttachments = nullptr;
}
VkPipelineVertexInputStateCreateInfo _vertexInputInfo = {.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO};
VkGraphicsPipelineCreateInfo pipelineInfo = {.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO};
pipelineInfo.pNext = &_renderInfo;
pipelineInfo.stageCount = (uint32_t) _shaderStages.size();
pipelineInfo.pStages = _shaderStages.data();
pipelineInfo.pVertexInputState = &_vertexInputInfo;
pipelineInfo.pInputAssemblyState = &_inputAssembly;
pipelineInfo.pViewportState = &viewportState;
pipelineInfo.pRasterizationState = &_rasterizer;
pipelineInfo.pMultisampleState = &_multisampling;
pipelineInfo.pColorBlendState = &colorBlending;
pipelineInfo.pDepthStencilState = &_depthStencil;
pipelineInfo.layout = _pipelineLayout;
VkDynamicState state[] = {VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR};
VkPipelineDynamicStateCreateInfo dynamicInfo = {.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO};
dynamicInfo.pDynamicStates = &state[0];
dynamicInfo.dynamicStateCount = 2;
pipelineInfo.pDynamicState = &dynamicInfo;
VkPipeline newPipeline;
if (vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &pipelineInfo,
nullptr, &newPipeline)
!= VK_SUCCESS)
{
fmt::println("failed to create pipeline");
return VK_NULL_HANDLE;
}
else
{
return newPipeline;
}
}
void PipelineBuilder::set_shaders(VkShaderModule vertexShader, VkShaderModule fragmentShader)
{
_shaderStages.clear();
_shaderStages.push_back(
vkinit::pipeline_shader_stage_create_info(VK_SHADER_STAGE_VERTEX_BIT, vertexShader));
_shaderStages.push_back(
vkinit::pipeline_shader_stage_create_info(VK_SHADER_STAGE_FRAGMENT_BIT, fragmentShader));
}
void PipelineBuilder::set_input_topology(VkPrimitiveTopology topology)
{
_inputAssembly.topology = topology;
_inputAssembly.primitiveRestartEnable = VK_FALSE;
}
void PipelineBuilder::set_polygon_mode(VkPolygonMode mode)
{
_rasterizer.polygonMode = mode;
_rasterizer.lineWidth = 1.f;
}
void PipelineBuilder::set_cull_mode(VkCullModeFlags cullMode, VkFrontFace frontFace)
{
_rasterizer.cullMode = cullMode;
_rasterizer.frontFace = frontFace;
}
void PipelineBuilder::set_multisampling_none()
{
_multisampling.sampleShadingEnable = VK_FALSE;
_multisampling.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT;
_multisampling.minSampleShading = 1.0f;
_multisampling.pSampleMask = nullptr;
_multisampling.alphaToCoverageEnable = VK_FALSE;
_multisampling.alphaToOneEnable = VK_FALSE;
}
void PipelineBuilder::disable_blending()
{
_colorBlendAttachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
_colorBlendAttachment.blendEnable = VK_FALSE;
}
void PipelineBuilder::enable_blending_additive()
{
_colorBlendAttachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
_colorBlendAttachment.blendEnable = VK_TRUE;
_colorBlendAttachment.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
_colorBlendAttachment.dstColorBlendFactor = VK_BLEND_FACTOR_ONE;
_colorBlendAttachment.colorBlendOp = VK_BLEND_OP_ADD;
_colorBlendAttachment.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE;
_colorBlendAttachment.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
_colorBlendAttachment.alphaBlendOp = VK_BLEND_OP_ADD;
}
void PipelineBuilder::enable_blending_alphablend()
{
_colorBlendAttachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;
_colorBlendAttachment.blendEnable = VK_TRUE;
_colorBlendAttachment.srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
_colorBlendAttachment.dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
_colorBlendAttachment.colorBlendOp = VK_BLEND_OP_ADD;
_colorBlendAttachment.srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE;
_colorBlendAttachment.dstAlphaBlendFactor = VK_BLEND_FACTOR_ZERO;
_colorBlendAttachment.alphaBlendOp = VK_BLEND_OP_ADD;
}
void PipelineBuilder::set_color_attachment_format(VkFormat format)
{
_colorAttachmentFormats.clear();
_colorAttachmentFormats.push_back(format);
_renderInfo.colorAttachmentCount = 1;
_renderInfo.pColorAttachmentFormats = _colorAttachmentFormats.data();
}
void PipelineBuilder::set_color_attachment_formats(std::span<VkFormat> formats)
{
_colorAttachmentFormats.assign(formats.begin(), formats.end());
_renderInfo.colorAttachmentCount = (uint32_t)_colorAttachmentFormats.size();
_renderInfo.pColorAttachmentFormats = _colorAttachmentFormats.data();
}
void PipelineBuilder::set_depth_format(VkFormat format)
{
_renderInfo.depthAttachmentFormat = format;
}
void PipelineBuilder::disable_depthtest()
{
_depthStencil.depthTestEnable = VK_FALSE;
_depthStencil.depthWriteEnable = VK_FALSE;
_depthStencil.depthCompareOp = VK_COMPARE_OP_NEVER;
_depthStencil.depthBoundsTestEnable = VK_FALSE;
_depthStencil.stencilTestEnable = VK_FALSE;
_depthStencil.front = {};
_depthStencil.back = {};
_depthStencil.minDepthBounds = 0.f;
_depthStencil.maxDepthBounds = 1.f;
}
void PipelineBuilder::enable_depthtest(bool depthWriteEnable, VkCompareOp op)
{
_depthStencil.depthTestEnable = VK_TRUE;
_depthStencil.depthWriteEnable = depthWriteEnable;
_depthStencil.depthCompareOp = op;
_depthStencil.depthBoundsTestEnable = VK_FALSE;
_depthStencil.stencilTestEnable = VK_FALSE;
_depthStencil.front = {};
_depthStencil.back = {};
_depthStencil.minDepthBounds = 0.f;
_depthStencil.maxDepthBounds = 1.f;
}

59
src/render/vk_pipelines.h Normal file
View File

@@ -0,0 +1,59 @@
#pragma once
#include <core/vk_types.h>
#include <fstream>
#include <core/vk_initializers.h>
namespace vkutil
{
bool load_shader_module(const char *filePath, VkDevice device, VkShaderModule *outShaderModule);
};
class PipelineBuilder
{
public:
std::vector<VkPipelineShaderStageCreateInfo> _shaderStages;
VkPipelineInputAssemblyStateCreateInfo _inputAssembly;
VkPipelineRasterizationStateCreateInfo _rasterizer;
VkPipelineColorBlendAttachmentState _colorBlendAttachment;
VkPipelineMultisampleStateCreateInfo _multisampling;
VkPipelineLayout _pipelineLayout;
VkPipelineDepthStencilStateCreateInfo _depthStencil;
VkPipelineRenderingCreateInfo _renderInfo;
VkFormat _colorAttachmentformat;
std::vector<VkFormat> _colorAttachmentFormats;
PipelineBuilder()
{ clear(); }
void clear();
VkPipeline build_pipeline(VkDevice device);
void set_shaders(VkShaderModule vertexShader, VkShaderModule fragmentShader);
void set_input_topology(VkPrimitiveTopology topology);
void set_polygon_mode(VkPolygonMode mode);
void set_cull_mode(VkCullModeFlags cullMode, VkFrontFace frontFace);
void set_multisampling_none();
void disable_blending();
void enable_blending_additive();
void enable_blending_alphablend();
void set_color_attachment_format(VkFormat format);
void set_color_attachment_formats(std::span<VkFormat> formats);
void set_depth_format(VkFormat format);
void enable_depthtest(bool depthWriteEnable,VkCompareOp op);
void disable_depthtest();
};
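// Usage sketch (illustrative): building an opaque pipeline for dynamic rendering. The shader
// modules, pipeline layout, and formats are assumed to exist; only the calls declared above are used.
//
//   PipelineBuilder b;
//   b.set_shaders(vertModule, fragModule);
//   b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
//   b.set_polygon_mode(VK_POLYGON_MODE_FILL);
//   b.set_cull_mode(VK_CULL_MODE_BACK_BIT, VK_FRONT_FACE_COUNTER_CLOCKWISE);
//   b.set_multisampling_none();
//   b.disable_blending();
//   b.enable_depthtest(true, VK_COMPARE_OP_GREATER_OR_EQUAL); // reverse-Z convention used elsewhere in the engine
//   b.set_color_attachment_format(drawImageFormat);
//   b.set_depth_format(depthImageFormat);
//   b._pipelineLayout = pipelineLayout;
//   VkPipeline pipeline = b.build_pipeline(device);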

View File

@@ -0,0 +1,74 @@
#include "vk_renderpass.h"
#include "vk_renderpass_background.h"
#include "vk_renderpass_geometry.h"
#include "vk_renderpass_imgui.h"
#include "vk_renderpass_lighting.h"
#include "vk_renderpass_transparent.h"
#include "vk_renderpass_tonemap.h"
#include "vk_renderpass_shadow.h"
void RenderPassManager::init(EngineContext *context)
{
_context = context;
auto backgroundPass = std::make_unique<BackgroundPass>();
backgroundPass->init(context);
addPass(std::move(backgroundPass));
// Shadow map pass comes early in the frame
auto shadowPass = std::make_unique<ShadowPass>();
shadowPass->init(context);
addPass(std::move(shadowPass));
auto geometryPass = std::make_unique<GeometryPass>();
geometryPass->init(context);
addPass(std::move(geometryPass));
auto lightingPass = std::make_unique<LightingPass>();
lightingPass->init(context);
addPass(std::move(lightingPass));
auto transparentPass = std::make_unique<TransparentPass>();
transparentPass->init(context);
addPass(std::move(transparentPass));
auto tonemapPass = std::make_unique<TonemapPass>();
tonemapPass->init(context);
addPass(std::move(tonemapPass));
}
void RenderPassManager::cleanup()
{
for (auto &pass: _passes)
{
pass->cleanup();
}
if (_imguiPass)
{
_imguiPass->cleanup();
}
fmt::print("RenderPassManager::cleanup()\n");
_passes.clear();
_imguiPass.reset();
}
void RenderPassManager::addPass(std::unique_ptr<IRenderPass> pass)
{
_passes.push_back(std::move(pass));
}
void RenderPassManager::setImGuiPass(std::unique_ptr<IRenderPass> imguiPass)
{
_imguiPass = std::move(imguiPass);
if (_imguiPass)
{
_imguiPass->init(_context);
}
}
ImGuiPass *RenderPassManager::getImGuiPass()
{
if (!_imguiPass) return nullptr;
return dynamic_cast<ImGuiPass *>(_imguiPass.get());
}

View File

@@ -0,0 +1,54 @@
#pragma once
#include <core/vk_types.h>
#include <vector>
#include <memory>
#include <functional>
class EngineContext;
class ImGuiPass;
class IRenderPass
{
public:
virtual ~IRenderPass() = default;
virtual void init(EngineContext *context) = 0;
virtual void cleanup() = 0;
virtual void execute(VkCommandBuffer cmd) = 0;
virtual const char *getName() const = 0;
};
class RenderPassManager
{
public:
void init(EngineContext *context);
void cleanup();
void addPass(std::unique_ptr<IRenderPass> pass);
void setImGuiPass(std::unique_ptr<IRenderPass> imguiPass);
ImGuiPass *getImGuiPass();
template<typename T>
T *getPass()
{
for (auto &pass: _passes)
{
if (T *typedPass = dynamic_cast<T *>(pass.get()))
{
return typedPass;
}
}
return nullptr;
}
private:
EngineContext *_context = nullptr;
std::vector<std::unique_ptr<IRenderPass> > _passes;
std::unique_ptr<IRenderPass> _imguiPass = nullptr;
};
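// Usage sketch (illustrative): looking up a concrete pass by type after init().
//
//   RenderPassManager passes;
//   passes.init(ctx);
//   if (auto *bg = passes.getPass<BackgroundPass>())
//   {
//       bg->setCurrentEffect(1); // switch the background compute effect (e.g. gradient -> sky)
//   }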

View File

@@ -0,0 +1,99 @@
#include "vk_renderpass_background.h"
#include <string_view>
#include "vk_swapchain.h"
#include "core/engine_context.h"
#include "core/vk_resource.h"
#include "core/vk_pipeline_manager.h"
#include "core/asset_manager.h"
#include "render/rg_graph.h"
void BackgroundPass::init(EngineContext *context)
{
_context = context;
init_background_pipelines();
}
void BackgroundPass::init_background_pipelines()
{
ComputePipelineCreateInfo createInfo{};
createInfo.shaderPath = _context->getAssets()->shaderPath("gradient_color.comp.spv");
createInfo.descriptorTypes = {VK_DESCRIPTOR_TYPE_STORAGE_IMAGE};
createInfo.pushConstantSize = sizeof(ComputePushConstants);
_context->pipelines->createComputePipeline("gradient", createInfo);
createInfo.shaderPath = _context->getAssets()->shaderPath("sky.comp.spv");
_context->pipelines->createComputePipeline("sky", createInfo);
_context->pipelines->createComputeInstance("background.gradient", "gradient");
_context->pipelines->createComputeInstance("background.sky", "sky");
_context->pipelines->setComputeInstanceStorageImage("background.gradient", 0,
_context->getSwapchain()->drawImage().imageView);
_context->pipelines->setComputeInstanceStorageImage("background.sky", 0,
_context->getSwapchain()->drawImage().imageView);
ComputeEffect gradient{};
gradient.name = "gradient";
gradient.data.data1 = glm::vec4(1, 0, 0, 1);
gradient.data.data2 = glm::vec4(0, 0, 1, 1);
ComputeEffect sky{};
sky.name = "sky";
sky.data.data1 = glm::vec4(0.1, 0.2, 0.4, 0.97);
_backgroundEffects.push_back(gradient);
_backgroundEffects.push_back(sky);
}
void BackgroundPass::execute(VkCommandBuffer)
{
// Background is executed via the render graph now.
}
void BackgroundPass::register_graph(RenderGraph *graph, RGImageHandle drawHandle, RGImageHandle depthHandle)
{
(void) depthHandle; // Reserved for future depth transitions.
if (!graph || !drawHandle.valid() || !_context) return;
if (_backgroundEffects.empty()) return;
graph->add_pass(
"Background",
RGPassType::Compute,
[drawHandle](RGPassBuilder &builder, EngineContext *) {
builder.write(drawHandle, RGImageUsage::ComputeWrite);
},
[this, drawHandle](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx) {
VkImageView drawView = res.image_view(drawHandle);
if (drawView != VK_NULL_HANDLE)
{
_context->pipelines->setComputeInstanceStorageImage("background.gradient", 0, drawView);
_context->pipelines->setComputeInstanceStorageImage("background.sky", 0, drawView);
}
ComputeEffect &effect = _backgroundEffects[_currentEffect];
ComputeDispatchInfo dispatchInfo = ComputeManager::createDispatch2D(
ctx->getDrawExtent().width, ctx->getDrawExtent().height);
dispatchInfo.pushConstants = &effect.data;
dispatchInfo.pushConstantSize = sizeof(ComputePushConstants);
const char *instanceName = (std::string_view(effect.name) == std::string_view("gradient"))
? "background.gradient"
: "background.sky";
ctx->pipelines->dispatchComputeInstance(cmd, instanceName, dispatchInfo);
}
);
}
void BackgroundPass::cleanup()
{
if (_context && _context->pipelines)
{
_context->pipelines->destroyComputeInstance("background.gradient");
_context->pipelines->destroyComputeInstance("background.sky");
_context->pipelines->destroyComputePipeline("gradient");
_context->pipelines->destroyComputePipeline("sky");
}
fmt::print("RenderPassManager::cleanup()\n");
_backgroundEffects.clear();
}

View File

@@ -0,0 +1,28 @@
#pragma once
#include "vk_renderpass.h"
#include "compute/vk_compute.h"
#include "render/rg_types.h"
class RenderGraph;
class BackgroundPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer cmd) override;
const char *getName() const override { return "Background"; }
void register_graph(RenderGraph *graph, RGImageHandle drawHandle, RGImageHandle depthHandle);
void setCurrentEffect(int index) { _currentEffect = index; }
std::vector<ComputeEffect> &getEffects() { return _backgroundEffects; }
std::vector<ComputeEffect> _backgroundEffects;
int _currentEffect = 0;
private:
EngineContext *_context = nullptr;
void init_background_pipelines();
};

View File

@@ -0,0 +1,290 @@
#include "vk_renderpass_geometry.h"
#include <chrono>
#include <unordered_set>
#include "frame_resources.h"
#include "vk_descriptor_manager.h"
#include "vk_device.h"
#include "core/engine_context.h"
#include "core/vk_initializers.h"
#include "core/vk_resource.h"
#include "vk_mem_alloc.h"
#include "vk_scene.h"
#include "vk_swapchain.h"
#include "render/rg_graph.h"
bool is_visible(const RenderObject &obj, const glm::mat4 &viewproj)
{
const std::array<glm::vec3, 8> corners{
glm::vec3{+1, +1, +1}, glm::vec3{+1, +1, -1}, glm::vec3{+1, -1, +1}, glm::vec3{+1, -1, -1},
glm::vec3{-1, +1, +1}, glm::vec3{-1, +1, -1}, glm::vec3{-1, -1, +1}, glm::vec3{-1, -1, -1},
};
const glm::vec3 o = obj.bounds.origin;
const glm::vec3 e = obj.bounds.extents;
const glm::mat4 m = viewproj * obj.transform; // world -> clip
glm::vec4 clip[8];
for (int i = 0; i < 8; ++i)
{
const glm::vec3 p = o + corners[i] * e;
clip[i] = m * glm::vec4(p, 1.f);
}
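// Conservative test: reject only if all eight corners lie outside the same clip plane.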
auto all_out = [&](auto pred) {
for (int i = 0; i < 8; ++i)
{
if (!pred(clip[i])) return false;
}
return true;
};
// Clip volume in Vulkan (ZO): -w<=x<=w, -w<=y<=w, 0<=z<=w
if (all_out([](const glm::vec4 &v) { return v.x < -v.w; })) return false; // left
if (all_out([](const glm::vec4 &v) { return v.x > v.w; })) return false; // right
if (all_out([](const glm::vec4 &v) { return v.y < -v.w; })) return false; // bottom
if (all_out([](const glm::vec4 &v) { return v.y > v.w; })) return false; // top
if (all_out([](const glm::vec4 &v) { return v.z < 0.0f; })) return false; // near (ZO)
if (all_out([](const glm::vec4 &v) { return v.z > v.w; })) return false; // far
return true; // intersects or is fully inside
}
void GeometryPass::init(EngineContext *context)
{
_context = context;
}
void GeometryPass::execute(VkCommandBuffer)
{
// Geometry is executed via the render graph now.
}
void GeometryPass::register_graph(RenderGraph *graph,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle depthHandle)
{
if (!graph || !gbufferPosition.valid() || !gbufferNormal.valid() || !gbufferAlbedo.valid() || !depthHandle.valid())
{
return;
}
graph->add_pass(
"Geometry",
RGPassType::Graphics,
[gbufferPosition, gbufferNormal, gbufferAlbedo, depthHandle](RGPassBuilder &builder, EngineContext *ctx)
{
VkClearValue clear{};
clear.color = {{0.f, 0.f, 0.f, 0.f}};
builder.write_color(gbufferPosition, true, clear);
builder.write_color(gbufferNormal, true, clear);
builder.write_color(gbufferAlbedo, true, clear);
// Reverse-Z: clear depth to 0.0
VkClearValue depthClear{};
depthClear.depthStencil = {0.f, 0};
builder.write_depth(depthHandle, true, depthClear);
// Register read buffers used by all draw calls (index + vertex SSBO)
if (ctx)
{
const DrawContext &dc = ctx->getMainDrawContext();
// Collect unique buffers to avoid duplicates
std::unordered_set<VkBuffer> indexSet;
std::unordered_set<VkBuffer> vertexSet;
indexSet.reserve(dc.OpaqueSurfaces.size() + dc.TransparentSurfaces.size());
vertexSet.reserve(dc.OpaqueSurfaces.size() + dc.TransparentSurfaces.size());
auto collect = [&](const std::vector<RenderObject>& v){
for (const auto &r : v)
{
if (r.indexBuffer) indexSet.insert(r.indexBuffer);
if (r.vertexBuffer) vertexSet.insert(r.vertexBuffer);
}
};
collect(dc.OpaqueSurfaces);
collect(dc.TransparentSurfaces);
for (VkBuffer b : indexSet)
builder.read_buffer(b, RGBufferUsage::IndexRead, 0, "geom.index");
for (VkBuffer b : vertexSet)
builder.read_buffer(b, RGBufferUsage::StorageRead, 0, "geom.vertex");
}
},
[this, gbufferPosition, gbufferNormal, gbufferAlbedo, depthHandle](VkCommandBuffer cmd,
const RGPassResources &res,
EngineContext *ctx)
{
draw_geometry(cmd, ctx, res, gbufferPosition, gbufferNormal, gbufferAlbedo, depthHandle);
});
}
void GeometryPass::draw_geometry(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &resources,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle depthHandle) const
{
EngineContext *ctxLocal = context ? context : _context;
if (!ctxLocal || !ctxLocal->currentFrame) return;
ResourceManager *resourceManager = ctxLocal->getResources();
DeviceManager *deviceManager = ctxLocal->getDevice();
DescriptorManager *descriptorLayouts = ctxLocal->getDescriptorLayouts();
if (!resourceManager || !deviceManager || !descriptorLayouts) return;
VkImageView positionView = resources.image_view(gbufferPosition);
VkImageView normalView = resources.image_view(gbufferNormal);
VkImageView albedoView = resources.image_view(gbufferAlbedo);
VkImageView depthView = resources.image_view(depthHandle);
if (positionView == VK_NULL_HANDLE || normalView == VK_NULL_HANDLE ||
albedoView == VK_NULL_HANDLE || depthView == VK_NULL_HANDLE)
{
return;
}
const auto& mainDrawContext = ctxLocal->getMainDrawContext();
const auto& sceneData = ctxLocal->getSceneData();
VkExtent2D drawExtent = ctxLocal->getDrawExtent();
auto start = std::chrono::system_clock::now();
std::vector<uint32_t> opaque_draws;
opaque_draws.reserve(mainDrawContext.OpaqueSurfaces.size());
for (uint32_t i = 0; i < mainDrawContext.OpaqueSurfaces.size(); i++)
{
if (is_visible(mainDrawContext.OpaqueSurfaces[i], sceneData.viewproj))
{
opaque_draws.push_back(i);
}
}
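// Sort by material first, then by index buffer, to minimize pipeline/descriptor and index-buffer rebinds.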
std::sort(opaque_draws.begin(), opaque_draws.end(), [&](const auto &iA, const auto &iB)
{
const RenderObject &A = mainDrawContext.OpaqueSurfaces[iA];
const RenderObject &B = mainDrawContext.OpaqueSurfaces[iB];
if (A.material == B.material)
{
return A.indexBuffer < B.indexBuffer;
}
return A.material < B.material;
});
// Dynamic rendering is now begun by the RenderGraph using the declared attachments.
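// Per-frame scene UBO: CPU-to-GPU mapped allocation, released through the frame's deletion queue.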
AllocatedBuffer gpuSceneDataBuffer = resourceManager->create_buffer(sizeof(GPUSceneData),
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VMA_MEMORY_USAGE_CPU_TO_GPU);
ctxLocal->currentFrame->_deletionQueue.push_function([resourceManager, gpuSceneDataBuffer]()
{
resourceManager->destroy_buffer(gpuSceneDataBuffer);
});
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(deviceManager->allocator(), gpuSceneDataBuffer.allocation, &allocInfo);
auto *sceneUniformData = static_cast<GPUSceneData *>(allocInfo.pMappedData);
*sceneUniformData = sceneData;
vmaFlushAllocation(deviceManager->allocator(), gpuSceneDataBuffer.allocation, 0, sizeof(GPUSceneData));
VkDescriptorSet globalDescriptor = ctxLocal->currentFrame->_frameDescriptors.allocate(
deviceManager->device(), descriptorLayouts->gpuSceneDataLayout());
DescriptorWriter writer;
writer.write_buffer(0, gpuSceneDataBuffer.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(deviceManager->device(), globalDescriptor);
MaterialPipeline *lastPipeline = nullptr;
MaterialInstance *lastMaterial = nullptr;
VkBuffer lastIndexBuffer = VK_NULL_HANDLE;
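// Track the last bound pipeline/material/index buffer so the draw loop skips redundant binds.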
auto draw = [&](const RenderObject &r)
{
if (r.material != lastMaterial)
{
lastMaterial = r.material;
if (r.material->pipeline != lastPipeline)
{
lastPipeline = r.material->pipeline;
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->layout, 0, 1,
&globalDescriptor, 0, nullptr);
VkViewport viewport{};
viewport.x = 0;
viewport.y = 0;
viewport.width = static_cast<float>(drawExtent.width);
viewport.height = static_cast<float>(drawExtent.height);
viewport.minDepth = 0.f;
viewport.maxDepth = 1.f;
vkCmdSetViewport(cmd, 0, 1, &viewport);
VkRect2D scissor{};
scissor.offset.x = 0;
scissor.offset.y = 0;
scissor.extent.width = drawExtent.width;
scissor.extent.height = drawExtent.height;
vkCmdSetScissor(cmd, 0, 1, &scissor);
}
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->layout, 1, 1,
&r.material->materialSet, 0, nullptr);
}
if (r.indexBuffer != lastIndexBuffer)
{
lastIndexBuffer = r.indexBuffer;
vkCmdBindIndexBuffer(cmd, r.indexBuffer, 0, VK_INDEX_TYPE_UINT32);
}
GPUDrawPushConstants push_constants{};
push_constants.worldMatrix = r.transform;
push_constants.vertexBuffer = r.vertexBufferAddress;
vkCmdPushConstants(cmd, r.material->pipeline->layout, VK_SHADER_STAGE_VERTEX_BIT, 0,
sizeof(GPUDrawPushConstants), &push_constants);
vkCmdDrawIndexed(cmd, r.indexCount, 1, r.firstIndex, 0, 0);
if (ctxLocal->stats)
{
ctxLocal->stats->drawcall_count++;
ctxLocal->stats->triangle_count += r.indexCount / 3;
}
};
if (ctxLocal->stats)
{
ctxLocal->stats->drawcall_count = 0;
ctxLocal->stats->triangle_count = 0;
}
for (auto &r: opaque_draws)
{
draw(mainDrawContext.OpaqueSurfaces[r]);
}
// Transparent surfaces are rendered in a separate Transparent pass after lighting.
// RenderGraph will end dynamic rendering for this pass.
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
if (ctxLocal->stats)
{
ctxLocal->stats->mesh_draw_time = elapsed.count() / 1000.f;
}
}
void GeometryPass::cleanup()
{
fmt::print("GeometryPass::cleanup()\n");
}

View File

@@ -0,0 +1,33 @@
#pragma once
#include "vk_renderpass.h"
#include <render/rg_types.h>
class SwapchainManager;
class RenderGraph;
class GeometryPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer cmd) override;
const char *getName() const override { return "Geometry"; }
void register_graph(RenderGraph *graph,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle depthHandle);
private:
EngineContext *_context = nullptr;
void draw_geometry(VkCommandBuffer cmd,
EngineContext *context,
const class RGPassResources &resources,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle depthHandle) const;
};

View File

@@ -0,0 +1,113 @@
#include "vk_renderpass_imgui.h"
#include "imgui.h"
#include "imgui_impl_sdl2.h"
#include "imgui_impl_vulkan.h"
#include "vk_device.h"
#include "vk_swapchain.h"
#include "core/vk_initializers.h"
#include "core/engine_context.h"
#include "render/rg_graph.h"
void ImGuiPass::init(EngineContext *context)
{
_context = context;
VkDescriptorPoolSize pool_sizes[] = {
{VK_DESCRIPTOR_TYPE_SAMPLER, 1000},
{VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1000},
{VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE, 1000},
{VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, 1000},
{VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER, 1000},
{VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER, 1000},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1000},
{VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 1000},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC, 1000},
{VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC, 1000},
{VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT, 1000}
};
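// Oversized pool sizes, following the ImGui Vulkan backend example; ImGui itself only needs a handful of sets.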
VkDescriptorPoolCreateInfo pool_info = {};
pool_info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
pool_info.flags = VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT;
pool_info.maxSets = 1000;
pool_info.poolSizeCount = (uint32_t) std::size(pool_sizes);
pool_info.pPoolSizes = pool_sizes;
VkDescriptorPool imguiPool;
VK_CHECK(vkCreateDescriptorPool(_context->device->device(), &pool_info, nullptr, &imguiPool));
ImGui::CreateContext();
ImGui_ImplSDL2_InitForVulkan(_context->window);
ImGui_ImplVulkan_InitInfo init_info = {};
init_info.Instance = _context->getDevice()->instance();
init_info.PhysicalDevice = _context->getDevice()->physicalDevice();
init_info.Device = _context->getDevice()->device();
init_info.Queue = _context->getDevice()->graphicsQueue();
init_info.DescriptorPool = imguiPool;
init_info.MinImageCount = 3;
init_info.ImageCount = 3;
init_info.UseDynamicRendering = true;
init_info.PipelineRenderingCreateInfo = {.sType = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO};
init_info.PipelineRenderingCreateInfo.colorAttachmentCount = 1;
auto _swapchainImageFormat = _context->getSwapchain()->swapchainImageFormat();
init_info.PipelineRenderingCreateInfo.pColorAttachmentFormats = &_swapchainImageFormat;
init_info.MSAASamples = VK_SAMPLE_COUNT_1_BIT;
ImGui_ImplVulkan_Init(&init_info);
ImGui_ImplVulkan_CreateFontsTexture();
// queue destruction of the ImGui-created structures
_deletionQueue.push_function([=]() {
ImGui_ImplVulkan_Shutdown();
vkDestroyDescriptorPool(_context->getDevice()->device(), imguiPool, nullptr);
});
}
void ImGuiPass::cleanup()
{
fmt::print("ImGuiPass::cleanup()\n");
_deletionQueue.flush();
}
void ImGuiPass::execute(VkCommandBuffer)
{
// ImGui is executed via the render graph now.
}
void ImGuiPass::register_graph(RenderGraph *graph, RGImageHandle swapchainHandle)
{
if (!graph || !swapchainHandle.valid()) return;
graph->add_pass(
"ImGui",
RGPassType::Graphics,
[swapchainHandle](RGPassBuilder &builder, EngineContext *)
{
builder.write_color(swapchainHandle, false, {});
},
[this, swapchainHandle](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx)
{
draw_imgui(cmd, ctx, res, swapchainHandle);
});
}
void ImGuiPass::draw_imgui(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &resources,
RGImageHandle targetHandle) const
{
EngineContext *ctxLocal = context ? context : _context;
if (!ctxLocal) return;
VkImageView targetImageView = resources.image_view(targetHandle);
if (targetImageView == VK_NULL_HANDLE) return;
// Dynamic rendering is handled by the RenderGraph; just render draw data.
ImGui_ImplVulkan_RenderDrawData(ImGui::GetDrawData(), cmd);
}

View File

@@ -0,0 +1,29 @@
#pragma once
#include "vk_renderpass.h"
#include "core/vk_types.h"
#include <render/rg_types.h>
class ImGuiPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer cmd) override;
const char *getName() const override { return "ImGui"; }
void register_graph(class RenderGraph *graph,
RGImageHandle swapchainHandle);
private:
EngineContext *_context = nullptr;
void draw_imgui(VkCommandBuffer cmd,
EngineContext *context,
const class RGPassResources &resources,
RGImageHandle targetHandle) const;
DeletionQueue _deletionQueue;
};

View File

@@ -0,0 +1,208 @@
#include "vk_renderpass_lighting.h"
#include "frame_resources.h"
#include "vk_descriptor_manager.h"
#include "vk_device.h"
#include "core/engine_context.h"
#include "core/vk_initializers.h"
#include "core/vk_resource.h"
#include "render/vk_pipelines.h"
#include "core/vk_pipeline_manager.h"
#include "core/asset_manager.h"
#include "core/vk_descriptors.h"
#include "vk_mem_alloc.h"
#include "vk_sampler_manager.h"
#include "vk_swapchain.h"
#include "render/rg_graph.h"
void LightingPass::init(EngineContext *context)
{
_context = context;
// Build descriptor layout for GBuffer inputs
{
DescriptorLayoutBuilder builder;
builder.add_binding(0, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
builder.add_binding(1, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
builder.add_binding(2, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
_gBufferInputDescriptorLayout = builder.build(_context->getDevice()->device(), VK_SHADER_STAGE_FRAGMENT_BIT);
}
// Allocate and write GBuffer descriptor set
_gBufferInputDescriptorSet = _context->getDescriptors()->allocate(
_context->getDevice()->device(), _gBufferInputDescriptorLayout);
{
DescriptorWriter writer;
writer.write_image(0, _context->getSwapchain()->gBufferPosition().imageView, _context->getSamplers()->defaultLinear(),
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.write_image(1, _context->getSwapchain()->gBufferNormal().imageView, _context->getSamplers()->defaultLinear(),
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.write_image(2, _context->getSwapchain()->gBufferAlbedo().imageView, _context->getSamplers()->defaultLinear(),
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.update_set(_context->getDevice()->device(), _gBufferInputDescriptorSet);
}
// Shadow map descriptor layout (set = 2, updated per-frame)
{
DescriptorLayoutBuilder builder;
builder.add_binding(0, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
_shadowDescriptorLayout = builder.build(_context->getDevice()->device(), VK_SHADER_STAGE_FRAGMENT_BIT);
}
// Build lighting pipeline through PipelineManager
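// Set order must match the shader: set 0 = scene UBO, set 1 = GBuffer inputs, set 2 = shadow map.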
VkDescriptorSetLayout layouts[] = {
_context->getDescriptorLayouts()->gpuSceneDataLayout(),
_gBufferInputDescriptorLayout,
_shadowDescriptorLayout
};
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = _context->getAssets()->shaderPath("fullscreen.vert.spv");
info.fragmentShaderPath = _context->getAssets()->shaderPath("deferred_lighting.frag.spv");
info.setLayouts.assign(std::begin(layouts), std::end(layouts));
info.configure = [this](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.enable_blending_alphablend();
b.disable_depthtest();
b.set_color_attachment_format(_context->getSwapchain()->drawImage().imageFormat);
};
_context->pipelines->createGraphicsPipeline("deferred_lighting", info);
// fetch the handles so current frame uses latest versions
MaterialPipeline mp{};
_context->pipelines->getMaterialPipeline("deferred_lighting", mp);
_pipeline = mp.pipeline;
_pipelineLayout = mp.layout;
_deletionQueue.push_function([&]() {
// Pipelines are owned by PipelineManager; only destroy our local descriptor set layout
vkDestroyDescriptorSetLayout(_context->getDevice()->device(), _gBufferInputDescriptorLayout, nullptr);
vkDestroyDescriptorSetLayout(_context->getDevice()->device(), _shadowDescriptorLayout, nullptr);
});
}
void LightingPass::execute(VkCommandBuffer)
{
// Lighting is executed via the render graph now.
}
void LightingPass::register_graph(RenderGraph *graph,
RGImageHandle drawHandle,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle shadowDepth)
{
if (!graph || !drawHandle.valid() || !gbufferPosition.valid() || !gbufferNormal.valid() || !gbufferAlbedo.valid() || !shadowDepth.valid())
{
return;
}
graph->add_pass(
"Lighting",
RGPassType::Graphics,
[drawHandle, gbufferPosition, gbufferNormal, gbufferAlbedo, shadowDepth](RGPassBuilder &builder, EngineContext *)
{
builder.read(gbufferPosition, RGImageUsage::SampledFragment);
builder.read(gbufferNormal, RGImageUsage::SampledFragment);
builder.read(gbufferAlbedo, RGImageUsage::SampledFragment);
builder.read(shadowDepth, RGImageUsage::SampledFragment);
builder.write_color(drawHandle);
},
[this, drawHandle, shadowDepth](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx)
{
draw_lighting(cmd, ctx, res, drawHandle, shadowDepth);
});
}
void LightingPass::draw_lighting(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &resources,
RGImageHandle drawHandle,
RGImageHandle shadowDepth)
{
EngineContext *ctxLocal = context ? context : _context;
if (!ctxLocal || !ctxLocal->currentFrame) return;
ResourceManager *resourceManager = ctxLocal->getResources();
DeviceManager *deviceManager = ctxLocal->getDevice();
DescriptorManager *descriptorLayouts = ctxLocal->getDescriptorLayouts();
PipelineManager *pipelineManager = ctxLocal->pipelines;
if (!resourceManager || !deviceManager || !descriptorLayouts || !pipelineManager) return;
VkImageView drawView = resources.image_view(drawHandle);
if (drawView == VK_NULL_HANDLE) return;
// Re-fetch pipeline in case it was hot-reloaded
pipelineManager->getGraphics("deferred_lighting", _pipeline, _pipelineLayout);
// Dynamic rendering is handled by the RenderGraph using the declared draw attachment.
AllocatedBuffer gpuSceneDataBuffer = resourceManager->create_buffer(
sizeof(GPUSceneData), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VMA_MEMORY_USAGE_CPU_TO_GPU);
ctxLocal->currentFrame->_deletionQueue.push_function([resourceManager, gpuSceneDataBuffer]()
{
resourceManager->destroy_buffer(gpuSceneDataBuffer);
});
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(deviceManager->allocator(), gpuSceneDataBuffer.allocation, &allocInfo);
auto *sceneUniformData = static_cast<GPUSceneData *>(allocInfo.pMappedData);
*sceneUniformData = ctxLocal->getSceneData();
vmaFlushAllocation(deviceManager->allocator(), gpuSceneDataBuffer.allocation, 0, sizeof(GPUSceneData));
VkDescriptorSet globalDescriptor = ctxLocal->currentFrame->_frameDescriptors.allocate(
deviceManager->device(), descriptorLayouts->gpuSceneDataLayout());
DescriptorWriter writer;
writer.write_buffer(0, gpuSceneDataBuffer.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(deviceManager->device(), globalDescriptor);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipelineLayout, 0, 1, &globalDescriptor, 0,
nullptr);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipelineLayout, 1, 1,
&_gBufferInputDescriptorSet, 0, nullptr);
// Allocate and write shadow descriptor set for this frame (set = 2)
VkDescriptorSet shadowSet = ctxLocal->currentFrame->_frameDescriptors.allocate(
deviceManager->device(), _shadowDescriptorLayout);
{
VkImageView shadowView = resources.image_view(shadowDepth);
DescriptorWriter writer2;
writer2.write_image(0, shadowView, ctxLocal->getSamplers()->defaultLinear(),
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer2.update_set(deviceManager->device(), shadowSet);
}
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipelineLayout, 2, 1, &shadowSet, 0, nullptr);
VkViewport viewport{};
viewport.x = 0;
viewport.y = 0;
viewport.width = static_cast<float>(ctxLocal->getDrawExtent().width);
viewport.height = static_cast<float>(ctxLocal->getDrawExtent().height);
viewport.minDepth = 0.f;
viewport.maxDepth = 1.f;
vkCmdSetViewport(cmd, 0, 1, &viewport);
VkRect2D scissor{};
scissor.offset = {0, 0};
scissor.extent = {ctxLocal->getDrawExtent().width, ctxLocal->getDrawExtent().height};
vkCmdSetScissor(cmd, 0, 1, &scissor);
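// Fullscreen triangle (3 vertices, no vertex buffer); positions are generated in fullscreen.vert.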
vkCmdDraw(cmd, 3, 1, 0, 0);
// RenderGraph ends rendering.
}
void LightingPass::cleanup()
{
_deletionQueue.flush();
fmt::print("LightingPass::cleanup()\n");
}

View File

@@ -0,0 +1,40 @@
#pragma once
#include "vk_renderpass.h"
#include <render/rg_types.h>
class LightingPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer cmd) override;
const char *getName() const override { return "Lighting"; }
void register_graph(class RenderGraph *graph,
RGImageHandle drawHandle,
RGImageHandle gbufferPosition,
RGImageHandle gbufferNormal,
RGImageHandle gbufferAlbedo,
RGImageHandle shadowDepth);
private:
EngineContext *_context = nullptr;
VkDescriptorSetLayout _gBufferInputDescriptorLayout = VK_NULL_HANDLE;
VkDescriptorSet _gBufferInputDescriptorSet = VK_NULL_HANDLE;
VkDescriptorSetLayout _shadowDescriptorLayout = VK_NULL_HANDLE; // set=2
VkPipelineLayout _pipelineLayout = VK_NULL_HANDLE;
VkPipeline _pipeline = VK_NULL_HANDLE;
void draw_lighting(VkCommandBuffer cmd,
EngineContext *context,
const class RGPassResources &resources,
RGImageHandle drawHandle,
RGImageHandle shadowDepth);
DeletionQueue _deletionQueue;
};

View File

@@ -0,0 +1,184 @@
#include "vk_renderpass_shadow.h"
#include <unordered_set>
#include "core/engine_context.h"
#include "render/rg_graph.h"
#include "render/rg_builder.h"
#include "vk_swapchain.h"
#include "vk_scene.h"
#include "frame_resources.h"
#include "vk_descriptor_manager.h"
#include "vk_device.h"
#include "vk_resource.h"
#include "core/vk_initializers.h"
#include "core/vk_pipeline_manager.h"
#include "core/asset_manager.h"
#include "render/vk_pipelines.h"
#include "core/vk_types.h"
void ShadowPass::init(EngineContext *context)
{
_context = context;
if (!_context || !_context->pipelines) return;
// Build a depth-only graphics pipeline for shadow map rendering
VkPushConstantRange pc{};
pc.offset = 0;
pc.size = sizeof(GPUDrawPushConstants);
pc.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = _context->getAssets()->shaderPath("shadow.vert.spv");
info.fragmentShaderPath = _context->getAssets()->shaderPath("shadow.frag.spv");
info.setLayouts = { _context->getDescriptorLayouts()->gpuSceneDataLayout() };
info.pushConstants = { pc };
info.configure = [this](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_BACK_BIT, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_blending();
// Reverse-Z depth test & depth-only pipeline
b.enable_depthtest(true, VK_COMPARE_OP_GREATER_OR_EQUAL);
b.set_depth_format(VK_FORMAT_D32_SFLOAT);
// Static depth bias to help with surface acne (will tune later)
b._rasterizer.depthBiasEnable = VK_TRUE;
b._rasterizer.depthBiasConstantFactor = 2.0f;
b._rasterizer.depthBiasSlopeFactor = 2.0f;
b._rasterizer.depthBiasClamp = 0.0f;
};
_context->pipelines->createGraphicsPipeline("mesh.shadow", info);
}
void ShadowPass::cleanup()
{
// Shadow pipeline is owned by PipelineManager; nothing to release here
fmt::print("ShadowPass::cleanup()\n");
}
void ShadowPass::execute(VkCommandBuffer)
{
// Shadow rendering is done via the RenderGraph registration.
}
void ShadowPass::register_graph(RenderGraph *graph, RGImageHandle shadowDepth, VkExtent2D extent)
{
if (!graph || !shadowDepth.valid()) return;
graph->add_pass(
"ShadowMap",
RGPassType::Graphics,
[shadowDepth](RGPassBuilder &builder, EngineContext *ctx)
{
// Reverse-Z depth clear to 0.0
VkClearValue clear{}; clear.depthStencil = {0.f, 0};
builder.write_depth(shadowDepth, true, clear);
// Ensure index/vertex buffers are tracked as reads (like Geometry)
if (ctx)
{
const DrawContext &dc = ctx->getMainDrawContext();
std::unordered_set<VkBuffer> indexSet;
std::unordered_set<VkBuffer> vertexSet;
auto collect = [&](const std::vector<RenderObject> &v)
{
for (const auto &r : v)
{
if (r.indexBuffer) indexSet.insert(r.indexBuffer);
if (r.vertexBuffer) vertexSet.insert(r.vertexBuffer);
}
};
collect(dc.OpaqueSurfaces);
// Transparent surfaces are ignored for shadow map in this simple pass
for (VkBuffer b : indexSet)
builder.read_buffer(b, RGBufferUsage::IndexRead, 0, "shadow.index");
for (VkBuffer b : vertexSet)
builder.read_buffer(b, RGBufferUsage::StorageRead, 0, "shadow.vertex");
}
},
[this, shadowDepth, extent](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx)
{
draw_shadow(cmd, ctx, res, shadowDepth, extent);
});
}
void ShadowPass::draw_shadow(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &/*resources*/,
RGImageHandle /*shadowDepth*/,
VkExtent2D extent) const
{
EngineContext *ctxLocal = context ? context : _context;
if (!ctxLocal || !ctxLocal->currentFrame) return;
ResourceManager *resourceManager = ctxLocal->getResources();
DeviceManager *deviceManager = ctxLocal->getDevice();
DescriptorManager *descriptorLayouts = ctxLocal->getDescriptorLayouts();
PipelineManager *pipelineManager = ctxLocal->pipelines;
if (!resourceManager || !deviceManager || !descriptorLayouts || !pipelineManager) return;
VkPipeline pipeline{}; VkPipelineLayout layout{};
if (!pipelineManager->getGraphics("mesh.shadow", pipeline, layout)) return;
// Create and upload per-pass scene UBO
AllocatedBuffer gpuSceneDataBuffer = resourceManager->create_buffer(
sizeof(GPUSceneData), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VMA_MEMORY_USAGE_CPU_TO_GPU);
ctxLocal->currentFrame->_deletionQueue.push_function([resourceManager, gpuSceneDataBuffer]()
{
resourceManager->destroy_buffer(gpuSceneDataBuffer);
});
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(deviceManager->allocator(), gpuSceneDataBuffer.allocation, &allocInfo);
auto *sceneUniformData = static_cast<GPUSceneData *>(allocInfo.pMappedData);
*sceneUniformData = ctxLocal->getSceneData();
vmaFlushAllocation(deviceManager->allocator(), gpuSceneDataBuffer.allocation, 0, sizeof(GPUSceneData));
VkDescriptorSet globalDescriptor = ctxLocal->currentFrame->_frameDescriptors.allocate(
deviceManager->device(), descriptorLayouts->gpuSceneDataLayout());
DescriptorWriter writer;
writer.write_buffer(0, gpuSceneDataBuffer.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(deviceManager->device(), globalDescriptor);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout, 0, 1, &globalDescriptor, 0, nullptr);
VkViewport viewport{};
viewport.x = 0;
viewport.y = 0;
viewport.width = static_cast<float>(extent.width);
viewport.height = static_cast<float>(extent.height);
viewport.minDepth = 0.f;
viewport.maxDepth = 1.f;
vkCmdSetViewport(cmd, 0, 1, &viewport);
VkRect2D scissor{};
scissor.offset = {0, 0};
scissor.extent = extent;
vkCmdSetScissor(cmd, 0, 1, &scissor);
const DrawContext &dc = ctxLocal->getMainDrawContext();
VkBuffer lastIndexBuffer = VK_NULL_HANDLE;
for (const auto &r : dc.OpaqueSurfaces)
{
if (r.indexBuffer != lastIndexBuffer)
{
lastIndexBuffer = r.indexBuffer;
vkCmdBindIndexBuffer(cmd, r.indexBuffer, 0, VK_INDEX_TYPE_UINT32);
}
GPUDrawPushConstants pc{};
pc.worldMatrix = r.transform;
pc.vertexBuffer = r.vertexBufferAddress;
vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(GPUDrawPushConstants), &pc);
vkCmdDrawIndexed(cmd, r.indexCount, 1, r.firstIndex, 0, 0);
}
}

View File

@@ -0,0 +1,33 @@
#pragma once
#include "vk_renderpass.h"
#include <render/rg_types.h>
class RenderGraph;
class EngineContext;
class RGPassResources;
// Depth-only directional shadow map pass
// - Writes a depth image using reversed-Z (clear = 0)
// - Registered with the render graph; renders opaque surfaces with a depth-biased pipeline
class ShadowPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer cmd) override;
const char *getName() const override { return "ShadowMap"; }
// Register the depth-only pass into the render graph
void register_graph(RenderGraph *graph, RGImageHandle shadowDepth, VkExtent2D extent);
private:
EngineContext *_context = nullptr;
void draw_shadow(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &resources,
RGImageHandle shadowDepth,
VkExtent2D extent) const;
};

View File

@@ -0,0 +1,122 @@
#include "vk_renderpass_tonemap.h"
#include <core/engine_context.h>
#include <core/vk_descriptors.h>
#include <core/vk_descriptor_manager.h>
#include <core/vk_pipeline_manager.h>
#include <core/asset_manager.h>
#include <core/vk_device.h>
#include <core/vk_resource.h>
#include <vk_sampler_manager.h>
#include <render/rg_graph.h>
#include <render/rg_resources.h>
#include "frame_resources.h"
struct TonemapPush
{
float exposure;
int mode;
};
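// Pushed to the fragment stage; layout is expected to match the push-constant block in tonemap.frag.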
void TonemapPass::init(EngineContext *context)
{
_context = context;
_inputSetLayout = _context->getDescriptorLayouts()->singleImageLayout();
GraphicsPipelineCreateInfo info{};
info.vertexShaderPath = _context->getAssets()->shaderPath("fullscreen.vert.spv");
info.fragmentShaderPath = _context->getAssets()->shaderPath("tonemap.frag.spv");
info.setLayouts = { _inputSetLayout };
VkPushConstantRange pcr{};
pcr.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
pcr.offset = 0;
pcr.size = sizeof(TonemapPush);
info.pushConstants = { pcr };
info.configure = [this](PipelineBuilder &b) {
b.set_input_topology(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST);
b.set_polygon_mode(VK_POLYGON_MODE_FILL);
b.set_cull_mode(VK_CULL_MODE_NONE, VK_FRONT_FACE_CLOCKWISE);
b.set_multisampling_none();
b.disable_depthtest();
b.disable_blending();
b.set_color_attachment_format(VK_FORMAT_R8G8B8A8_UNORM);
};
_context->pipelines->createGraphicsPipeline("tonemap", info);
MaterialPipeline mp{};
_context->pipelines->getMaterialPipeline("tonemap", mp);
_pipeline = mp.pipeline;
_pipelineLayout = mp.layout;
}
void TonemapPass::cleanup()
{
_deletionQueue.flush();
}
void TonemapPass::execute(VkCommandBuffer)
{
// Executed via render graph.
}
RGImageHandle TonemapPass::register_graph(RenderGraph *graph, RGImageHandle hdrInput)
{
if (!graph || !hdrInput.valid()) return {};
RGImageDesc desc{};
desc.name = "ldr.tonemap";
desc.format = VK_FORMAT_R8G8B8A8_UNORM;
desc.extent = _context->getDrawExtent();
desc.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
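// Transient LDR target owned by the render graph; TRANSFER_SRC allows copying it out afterwards (e.g. to the swapchain).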
RGImageHandle ldr = graph->create_image(desc);
graph->add_pass(
"Tonemap",
RGPassType::Graphics,
[hdrInput, ldr](RGPassBuilder &builder, EngineContext *) {
builder.read(hdrInput, RGImageUsage::SampledFragment);
builder.write_color(ldr, true /*clear*/);
},
[this, hdrInput](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx) {
draw_tonemap(cmd, ctx, res, hdrInput);
}
);
return ldr;
}
void TonemapPass::draw_tonemap(VkCommandBuffer cmd, EngineContext *ctx, const RGPassResources &res,
RGImageHandle hdrInput)
{
if (!ctx || !ctx->currentFrame) return;
VkDevice device = ctx->getDevice()->device();
VkImageView hdrView = res.image_view(hdrInput);
if (hdrView == VK_NULL_HANDLE) return;
VkDescriptorSet set = ctx->currentFrame->_frameDescriptors.allocate(device, _inputSetLayout);
DescriptorWriter writer;
writer.write_image(0, hdrView, ctx->getSamplers()->defaultLinear(),
VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER);
writer.update_set(device, set);
ctx->pipelines->getGraphics("tonemap", _pipeline, _pipelineLayout);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _pipelineLayout, 0, 1, &set, 0, nullptr);
TonemapPush push{_exposure, _mode};
vkCmdPushConstants(cmd, _pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(TonemapPush), &push);
VkExtent2D extent = ctx->getDrawExtent();
VkViewport vp{0.f, 0.f, (float)extent.width, (float)extent.height, 0.f, 1.f};
VkRect2D sc{{0,0}, extent};
vkCmdSetViewport(cmd, 0, 1, &vp);
vkCmdSetScissor(cmd, 0, 1, &sc);
vkCmdDraw(cmd, 3, 1, 0, 0);
}

View File

@@ -0,0 +1,43 @@
#pragma once
#include <core/vk_types.h>
#include <render/vk_renderpass.h>
#include <render/rg_types.h>
class EngineContext;
class RenderGraph;
class RGPassResources;
class TonemapPass final : public IRenderPass
{
public:
void init(EngineContext *context) override;
void cleanup() override;
void execute(VkCommandBuffer) override; // Not used directly; executed via render graph
const char *getName() const override { return "Tonemap"; }
// Register pass in the render graph. Returns the LDR output image handle.
RGImageHandle register_graph(RenderGraph *graph, RGImageHandle hdrInput);
// Runtime parameters
void setExposure(float e) { _exposure = e; }
float exposure() const { return _exposure; }
void setMode(int m) { _mode = m; }
int mode() const { return _mode; }
private:
void draw_tonemap(VkCommandBuffer cmd, EngineContext *ctx, const RGPassResources &res,
RGImageHandle hdrInput);
EngineContext *_context = nullptr;
VkPipeline _pipeline = VK_NULL_HANDLE;
VkPipelineLayout _pipelineLayout = VK_NULL_HANDLE;
VkDescriptorSetLayout _inputSetLayout = VK_NULL_HANDLE;
float _exposure = 1.0f;
int _mode = 1; // default to ACES
DeletionQueue _deletionQueue;
};

View File

@@ -0,0 +1,159 @@
#include "vk_renderpass_transparent.h"
#include <algorithm>
#include <unordered_set>
#include "vk_scene.h"
#include "vk_swapchain.h"
#include "core/engine_context.h"
#include "core/vk_resource.h"
#include "core/vk_device.h"
#include "core/vk_descriptor_manager.h"
#include "core/frame_resources.h"
#include "render/rg_graph.h"
void TransparentPass::init(EngineContext *context)
{
_context = context;
}
void TransparentPass::execute(VkCommandBuffer)
{
// Executed through render graph.
}
void TransparentPass::register_graph(RenderGraph *graph, RGImageHandle drawHandle, RGImageHandle depthHandle)
{
if (!graph || !drawHandle.valid() || !depthHandle.valid()) return;
graph->add_pass(
"Transparent",
RGPassType::Graphics,
[drawHandle, depthHandle](RGPassBuilder &builder, EngineContext *ctx) {
// Draw transparent to the HDR target with depth testing against the existing depth buffer.
builder.write_color(drawHandle);
builder.write_depth(depthHandle, false /*load existing depth*/);
// Register external buffers used by draws
if (ctx)
{
const DrawContext &dc = ctx->getMainDrawContext();
std::unordered_set<VkBuffer> indexSet;
std::unordered_set<VkBuffer> vertexSet;
auto collect = [&](const std::vector<RenderObject> &v) {
for (const auto &r: v)
{
if (r.indexBuffer) indexSet.insert(r.indexBuffer);
if (r.vertexBuffer) vertexSet.insert(r.vertexBuffer);
}
};
collect(dc.TransparentSurfaces);
for (VkBuffer b: indexSet) builder.read_buffer(b, RGBufferUsage::IndexRead, 0, "trans.index");
for (VkBuffer b: vertexSet) builder.read_buffer(b, RGBufferUsage::StorageRead, 0, "trans.vertex");
}
},
[this, drawHandle, depthHandle](VkCommandBuffer cmd, const RGPassResources &res, EngineContext *ctx) {
draw_transparent(cmd, ctx, res, drawHandle, depthHandle);
}
);
}
void TransparentPass::draw_transparent(VkCommandBuffer cmd,
EngineContext *context,
const RGPassResources &resources,
RGImageHandle /*drawHandle*/,
RGImageHandle /*depthHandle*/) const
{
EngineContext *ctxLocal = context ? context : _context;
if (!ctxLocal || !ctxLocal->currentFrame) return;
ResourceManager *resourceManager = ctxLocal->getResources();
DeviceManager *deviceManager = ctxLocal->getDevice();
DescriptorManager *descriptorLayouts = ctxLocal->getDescriptorLayouts();
if (!resourceManager || !deviceManager || !descriptorLayouts) return;
const auto &dc = ctxLocal->getMainDrawContext();
const auto &sceneData = ctxLocal->getSceneData();
// Prepare per-frame scene UBO
AllocatedBuffer gpuSceneDataBuffer = resourceManager->create_buffer(
sizeof(GPUSceneData), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, VMA_MEMORY_USAGE_CPU_TO_GPU);
ctxLocal->currentFrame->_deletionQueue.push_function([resourceManager, gpuSceneDataBuffer]() {
resourceManager->destroy_buffer(gpuSceneDataBuffer);
});
VmaAllocationInfo allocInfo{};
vmaGetAllocationInfo(deviceManager->allocator(), gpuSceneDataBuffer.allocation, &allocInfo);
auto *sceneUniformData = static_cast<GPUSceneData *>(allocInfo.pMappedData);
*sceneUniformData = sceneData;
vmaFlushAllocation(deviceManager->allocator(), gpuSceneDataBuffer.allocation, 0, sizeof(GPUSceneData));
VkDescriptorSet globalDescriptor = ctxLocal->currentFrame->_frameDescriptors.allocate(
deviceManager->device(), descriptorLayouts->gpuSceneDataLayout());
DescriptorWriter writer;
writer.write_buffer(0, gpuSceneDataBuffer.buffer, sizeof(GPUSceneData), 0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER);
writer.update_set(deviceManager->device(), globalDescriptor);
// Sort transparent back-to-front using camera-space depth
std::vector<const RenderObject *> draws;
draws.reserve(dc.TransparentSurfaces.size());
for (const auto &r: dc.TransparentSurfaces) draws.push_back(&r);
auto view = sceneData.view; // world -> view
auto depthOf = [&](const RenderObject *r) {
glm::vec4 c = r->transform * glm::vec4(r->bounds.origin, 1.f);
float z = (view * c).z;
return -z; // positive depth; larger = further
};
std::sort(draws.begin(), draws.end(), [&](const RenderObject *A, const RenderObject *B) {
return depthOf(A) > depthOf(B); // far to near
});
VkExtent2D extent = ctxLocal->getDrawExtent();
VkViewport viewport{0.f, 0.f, (float) extent.width, (float) extent.height, 0.f, 1.f};
vkCmdSetViewport(cmd, 0, 1, &viewport);
VkRect2D scissor{{0, 0}, extent};
vkCmdSetScissor(cmd, 0, 1, &scissor);
MaterialPipeline *lastPipeline = nullptr;
MaterialInstance *lastMaterial = nullptr;
VkBuffer lastIndexBuffer = VK_NULL_HANDLE;
auto draw = [&](const RenderObject &r) {
if (r.material != lastMaterial)
{
lastMaterial = r.material;
if (r.material->pipeline != lastPipeline)
{
lastPipeline = r.material->pipeline;
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->layout, 0, 1,
&globalDescriptor, 0, nullptr);
}
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, r.material->pipeline->layout, 1, 1,
&r.material->materialSet, 0, nullptr);
}
if (r.indexBuffer != lastIndexBuffer)
{
lastIndexBuffer = r.indexBuffer;
vkCmdBindIndexBuffer(cmd, r.indexBuffer, 0, VK_INDEX_TYPE_UINT32);
}
GPUDrawPushConstants push{};
push.worldMatrix = r.transform;
push.vertexBuffer = r.vertexBufferAddress;
vkCmdPushConstants(cmd, r.material->pipeline->layout, VK_SHADER_STAGE_VERTEX_BIT, 0,
sizeof(GPUDrawPushConstants), &push);
vkCmdDrawIndexed(cmd, r.indexCount, 1, r.firstIndex, 0, 0);
if (ctxLocal->stats)
{
ctxLocal->stats->drawcall_count++;
ctxLocal->stats->triangle_count += r.indexCount / 3;
}
};
for (auto *pObj: draws) draw(*pObj);
}
void TransparentPass::cleanup()
{
fmt::print("TransparentPass::cleanup()\n");
}

View File

@@ -0,0 +1,28 @@
#pragma once
#include "vk_renderpass.h"
#include "render/rg_types.h"
class TransparentPass : public IRenderPass
{
public:
void init(EngineContext *context) override;
void execute(VkCommandBuffer cmd) override;
void cleanup() override;
const char *getName() const override { return "Transparent"; }
// RenderGraph wiring
void register_graph(class RenderGraph *graph,
RGImageHandle drawHandle,
RGImageHandle depthHandle);
private:
void draw_transparent(VkCommandBuffer cmd,
EngineContext *context,
const class RGPassResources &resources,
RGImageHandle drawHandle,
RGImageHandle depthHandle) const;
EngineContext *_context{};
};

83
src/scene/camera.cpp Normal file
View File

@@ -0,0 +1,83 @@
#include "camera.h"
#include <glm/gtx/transform.hpp>
#include <glm/gtx/quaternion.hpp>
#include <SDL2/SDL.h>
#include <algorithm>
#include <cmath>
void Camera::update()
{
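// velocity is expressed in camera-local space; rotate it into world space before integrating position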
glm::mat4 cameraRotation = getRotationMatrix();
position += glm::vec3(cameraRotation * glm::vec4(velocity * moveSpeed, 0.f));
}
void Camera::processSDLEvent(SDL_Event& e)
{
if (e.type == SDL_KEYDOWN) {
// Camera uses -Z forward convention (right-handed)
if (e.key.keysym.sym == SDLK_w) { velocity.z = -1; }
if (e.key.keysym.sym == SDLK_s) { velocity.z = 1; }
if (e.key.keysym.sym == SDLK_a) { velocity.x = -1; }
if (e.key.keysym.sym == SDLK_d) { velocity.x = 1; }
}
if (e.type == SDL_KEYUP) {
if (e.key.keysym.sym == SDLK_w) { velocity.z = 0; }
if (e.key.keysym.sym == SDLK_s) { velocity.z = 0; }
if (e.key.keysym.sym == SDLK_a) { velocity.x = 0; }
if (e.key.keysym.sym == SDLK_d) { velocity.x = 0; }
}
if (e.type == SDL_MOUSEBUTTONDOWN && e.button.button == SDL_BUTTON_RIGHT) {
rmbDown = true;
SDL_SetRelativeMouseMode(SDL_TRUE);
}
if (e.type == SDL_MOUSEBUTTONUP && e.button.button == SDL_BUTTON_RIGHT) {
rmbDown = false;
SDL_SetRelativeMouseMode(SDL_FALSE);
}
if (e.type == SDL_MOUSEMOTION && rmbDown) {
// Mouse right (xrel > 0) turns view right with -Z-forward
yaw += (float)e.motion.xrel * lookSensitivity; // axis = +Y
// Mouse up (yrel < 0) looks up with -Z-forward
pitch -= (float)e.motion.yrel * lookSensitivity;
}
if (e.type == SDL_MOUSEWHEEL) {
// Ctrl modifies FOV, otherwise adjust move speed
const bool ctrl = (SDL_GetModState() & KMOD_CTRL) != 0;
const int steps = e.wheel.y; // positive = wheel up
if (ctrl) {
// Wheel up -> zoom in (smaller FOV)
fovDegrees -= steps * 2.0f;
fovDegrees = std::clamp(fovDegrees, 30.0f, 110.0f);
} else {
// Exponential scale for pleasant feel
float factor = std::pow(1.15f, (float)steps);
moveSpeed = std::clamp(moveSpeed * factor, 0.001f, 5.0f);
}
}
}
glm::mat4 Camera::getViewMatrix()
{
// to create a correct view matrix, we need to move the world in the opposite
// direction to the camera, so we build the camera's model matrix and invert it
glm::mat4 cameraTranslation = glm::translate(glm::mat4(1.f), position);
glm::mat4 cameraRotation = getRotationMatrix();
return glm::inverse(cameraTranslation * cameraRotation);
}
glm::mat4 Camera::getRotationMatrix()
{
// fairly typical FPS style camera. we join the pitch and yaw rotations into
// the final rotation matrix
glm::quat pitchRotation = glm::angleAxis(pitch, glm::vec3 { 1.f, 0.f, 0.f });
// Yaw around +Y keeps mouse-right -> turn-right with -Z-forward
glm::quat yawRotation = glm::angleAxis(yaw, glm::vec3 { 0.f, 1.f, 0.f });
return glm::toMat4(yawRotation) * glm::toMat4(pitchRotation);
}

31
src/scene/camera.h Normal file
View File

@@ -0,0 +1,31 @@
#pragma once
#include <core/vk_types.h>
#include <SDL_events.h>
#include "glm/vec3.hpp"
class Camera {
public:
glm::vec3 velocity { 0.f };
glm::vec3 position { 0.f };
// vertical rotation
float pitch { 0.f };
// horizontal rotation
float yaw { 0.f };
// Movement/look tuning
float moveSpeed { 0.03f };
float lookSensitivity { 0.0020f };
bool rmbDown { false };
// Field of view in degrees for projection
float fovDegrees { 70.f };
glm::mat4 getViewMatrix();
glm::mat4 getRotationMatrix();
void processSDLEvent(SDL_Event& e);
void update();
};

600
src/scene/vk_loader.cpp Normal file
View File

@@ -0,0 +1,600 @@
#include "stb_image.h"
#include <iostream>
#include "vk_loader.h"
#include "core/vk_engine.h"
#include "render/vk_materials.h"
#include "core/vk_initializers.h"
#include "core/vk_types.h"
#include <glm/gtx/quaternion.hpp>
#include <fastgltf/glm_element_traits.hpp>
#include <fastgltf/parser.hpp>
#include <fastgltf/tools.hpp>
#include <fastgltf/util.hpp>
#include <optional>
//> loadimg
std::optional<AllocatedImage> load_image(VulkanEngine *engine, fastgltf::Asset &asset, fastgltf::Image &image, bool srgb)
{
AllocatedImage newImage{};
int width, height, nrChannels;
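// glTF images may come from an external URI, an embedded byte vector, or a buffer view; handle each source via std::visit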
std::visit(
fastgltf::visitor{
[](auto &arg) {
},
[&](fastgltf::sources::URI &filePath) {
assert(filePath.fileByteOffset == 0); // We don't support offsets with stbi.
assert(filePath.uri.isLocalPath()); // We're only capable of loading
// local files.
const std::string path(filePath.uri.path().begin(),
filePath.uri.path().end()); // Thanks C++.
unsigned char *data = stbi_load(path.c_str(), &width, &height, &nrChannels, 4);
if (data)
{
VkExtent3D imagesize;
imagesize.width = width;
imagesize.height = height;
imagesize.depth = 1;
VkFormat fmt = srgb ? VK_FORMAT_R8G8B8A8_SRGB : VK_FORMAT_R8G8B8A8_UNORM;
newImage = engine->_resourceManager->create_image(
data, imagesize, fmt, VK_IMAGE_USAGE_SAMPLED_BIT, false);
stbi_image_free(data);
}
},
[&](fastgltf::sources::Vector &vector) {
unsigned char *data = stbi_load_from_memory(vector.bytes.data(), static_cast<int>(vector.bytes.size()),
&width, &height, &nrChannels, 4);
if (data)
{
VkExtent3D imagesize;
imagesize.width = width;
imagesize.height = height;
imagesize.depth = 1;
VkFormat fmt = srgb ? VK_FORMAT_R8G8B8A8_SRGB : VK_FORMAT_R8G8B8A8_UNORM;
newImage = engine->_resourceManager->create_image(
data, imagesize, fmt, VK_IMAGE_USAGE_SAMPLED_BIT, false);
stbi_image_free(data);
}
},
[&](fastgltf::sources::BufferView &view) {
auto &bufferView = asset.bufferViews[view.bufferViewIndex];
auto &buffer = asset.buffers[bufferView.bufferIndex];
std::visit(fastgltf::visitor{
// We only care about VectorWithMime here, because we
// specify LoadExternalBuffers, meaning all buffers
// are already loaded into a vector.
[](auto &arg) {
},
[&](fastgltf::sources::Vector &vector) {
unsigned char *data = stbi_load_from_memory(
vector.bytes.data() + bufferView.byteOffset,
static_cast<int>(bufferView.byteLength),
&width, &height, &nrChannels, 4);
if (data)
{
VkExtent3D imagesize;
imagesize.width = width;
imagesize.height = height;
imagesize.depth = 1;
VkFormat fmt = srgb ? VK_FORMAT_R8G8B8A8_SRGB : VK_FORMAT_R8G8B8A8_UNORM;
newImage = engine->_resourceManager->create_image(
data, imagesize, fmt, VK_IMAGE_USAGE_SAMPLED_BIT, false);
stbi_image_free(data);
}
}
},
buffer.data);
},
},
image.data);
// if any of the attempts to load the data failed, we haven't written the image,
// so the handle is null
if (newImage.image == VK_NULL_HANDLE)
{
return {};
}
else
{
return newImage;
}
}
//< loadimg
//> filters
VkFilter extract_filter(fastgltf::Filter filter)
{
switch (filter)
{
// nearest samplers
case fastgltf::Filter::Nearest:
case fastgltf::Filter::NearestMipMapNearest:
case fastgltf::Filter::NearestMipMapLinear:
return VK_FILTER_NEAREST;
// linear samplers
case fastgltf::Filter::Linear:
case fastgltf::Filter::LinearMipMapNearest:
case fastgltf::Filter::LinearMipMapLinear:
default:
return VK_FILTER_LINEAR;
}
}
VkSamplerMipmapMode extract_mipmap_mode(fastgltf::Filter filter)
{
switch (filter)
{
case fastgltf::Filter::NearestMipMapNearest:
case fastgltf::Filter::LinearMipMapNearest:
return VK_SAMPLER_MIPMAP_MODE_NEAREST;
case fastgltf::Filter::NearestMipMapLinear:
case fastgltf::Filter::LinearMipMapLinear:
default:
return VK_SAMPLER_MIPMAP_MODE_LINEAR;
}
}
//< filters
std::optional<std::shared_ptr<LoadedGLTF> > loadGltf(VulkanEngine *engine, std::string_view filePath)
{
//> load_1
fmt::print("Loading GLTF: {}", filePath);
std::shared_ptr<LoadedGLTF> scene = std::make_shared<LoadedGLTF>();
scene->creator = engine;
LoadedGLTF &file = *scene.get();
fastgltf::Parser parser{};
constexpr auto gltfOptions = fastgltf::Options::DontRequireValidAssetMember | fastgltf::Options::AllowDouble |
fastgltf::Options::LoadGLBBuffers | fastgltf::Options::LoadExternalBuffers;
// fastgltf::Options::LoadExternalImages;
fastgltf::GltfDataBuffer data;
data.loadFromFile(filePath);
fastgltf::Asset gltf;
std::filesystem::path path = filePath;
auto type = fastgltf::determineGltfFileType(&data);
if (type == fastgltf::GltfType::glTF)
{
auto load = parser.loadGLTF(&data, path.parent_path(), gltfOptions);
if (load)
{
gltf = std::move(load.get());
}
else
{
std::cerr << "Failed to load glTF: " << fastgltf::to_underlying(load.error()) << std::endl;
return {};
}
}
else if (type == fastgltf::GltfType::GLB)
{
auto load = parser.loadBinaryGLTF(&data, path.parent_path(), gltfOptions);
if (load)
{
gltf = std::move(load.get());
}
else
{
std::cerr << "Failed to load glTF: " << fastgltf::to_underlying(load.error()) << std::endl;
return {};
}
}
else
{
std::cerr << "Failed to determine glTF container" << std::endl;
return {};
}
//< load_1
//> load_2
// we can estimate the descriptors we will need accurately
std::vector<DescriptorAllocatorGrowable::PoolSizeRatio> sizes = {
{VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 3},
{VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 3},
{VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 1}
};
file.descriptorPool.init(engine->_deviceManager->device(), gltf.materials.size(), sizes);
//< load_2
//> load_samplers
// load samplers
for (fastgltf::Sampler &sampler: gltf.samplers)
{
VkSamplerCreateInfo sampl = {.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO, .pNext = nullptr};
sampl.maxLod = VK_LOD_CLAMP_NONE;
sampl.minLod = 0.0f;
sampl.magFilter = extract_filter(sampler.magFilter.value_or(fastgltf::Filter::Nearest));
sampl.minFilter = extract_filter(sampler.minFilter.value_or(fastgltf::Filter::Nearest));
sampl.mipmapMode = extract_mipmap_mode(sampler.minFilter.value_or(fastgltf::Filter::Nearest));
// Address modes: default to glTF Repeat
auto toAddress = [](fastgltf::Wrap w) -> VkSamplerAddressMode {
switch (w) {
case fastgltf::Wrap::ClampToEdge: return VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
case fastgltf::Wrap::MirroredRepeat: return VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT;
case fastgltf::Wrap::Repeat:
default: return VK_SAMPLER_ADDRESS_MODE_REPEAT;
}
};
// fastgltf::Sampler::wrapS/wrapT are non-optional and already default to Repeat
sampl.addressModeU = toAddress(sampler.wrapS);
sampl.addressModeV = toAddress(sampler.wrapT);
sampl.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sampl.unnormalizedCoordinates = VK_FALSE;
VkSampler newSampler;
vkCreateSampler(engine->_deviceManager->device(), &sampl, nullptr, &newSampler);
file.samplers.push_back(newSampler);
}
//< load_samplers
//> load_arrays
// temporary arrays for all the objects to use while creating the GLTF data
std::vector<std::shared_ptr<MeshAsset> > meshes;
std::vector<std::shared_ptr<Node> > nodes;
std::vector<AllocatedImage> images;
std::vector<std::shared_ptr<GLTFMaterial> > materials;
//< load_arrays
// load all textures
for (fastgltf::Image &image: gltf.images)
{
// Default-load GLTF images as linear; baseColor is reloaded as sRGB when bound
std::optional<AllocatedImage> img = load_image(engine, gltf, image, false);
if (img.has_value())
{
images.push_back(*img);
file.images[image.name.c_str()] = *img;
}
else
{
// we failed to load, so let's give the slot the error checkerboard texture to not
// completely break loading
images.push_back(engine->_errorCheckerboardImage);
std::cout << "gltf failed to load texture " << image.name << std::endl;
}
}
//> load_buffer
// create buffer to hold the material data
file.materialDataBuffer = engine->_resourceManager->create_buffer(
sizeof(GLTFMetallic_Roughness::MaterialConstants) * gltf.materials.size(),
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, VMA_MEMORY_USAGE_CPU_TO_GPU);
int data_index = 0;
GLTFMetallic_Roughness::MaterialConstants *sceneMaterialConstants = (GLTFMetallic_Roughness::MaterialConstants *)
file.materialDataBuffer.info.pMappedData;
//< load_buffer
//
//> load_material
for (fastgltf::Material &mat: gltf.materials)
{
std::shared_ptr<GLTFMaterial> newMat = std::make_shared<GLTFMaterial>();
materials.push_back(newMat);
file.materials[mat.name.c_str()] = newMat;
GLTFMetallic_Roughness::MaterialConstants constants;
constants.colorFactors.x = mat.pbrData.baseColorFactor[0];
constants.colorFactors.y = mat.pbrData.baseColorFactor[1];
constants.colorFactors.z = mat.pbrData.baseColorFactor[2];
constants.colorFactors.w = mat.pbrData.baseColorFactor[3];
constants.metal_rough_factors.x = mat.pbrData.metallicFactor;
constants.metal_rough_factors.y = mat.pbrData.roughnessFactor;
// write material parameters to buffer
sceneMaterialConstants[data_index] = constants;
MaterialPass passType = MaterialPass::MainColor;
if (mat.alphaMode == fastgltf::AlphaMode::Blend)
{
passType = MaterialPass::Transparent;
}
GLTFMetallic_Roughness::MaterialResources materialResources;
// default the material textures
materialResources.colorImage = engine->_whiteImage;
materialResources.colorSampler = engine->_samplerManager->defaultLinear();
materialResources.metalRoughImage = engine->_whiteImage;
materialResources.metalRoughSampler = engine->_samplerManager->defaultLinear();
// set the uniform buffer for the material data
materialResources.dataBuffer = file.materialDataBuffer.buffer;
materialResources.dataBufferOffset = data_index * sizeof(GLTFMetallic_Roughness::MaterialConstants);
// grab textures from gltf file
if (mat.pbrData.baseColorTexture.has_value())
{
const auto &tex = gltf.textures[mat.pbrData.baseColorTexture.value().textureIndex];
size_t imgIndex = tex.imageIndex.value();
// Sampler is optional in glTF; fall back to default if missing
bool hasSampler = tex.samplerIndex.has_value();
size_t sampler = hasSampler ? tex.samplerIndex.value() : SIZE_MAX;
// Reload albedo as sRGB, independent of the global image cache
if (imgIndex < gltf.images.size())
{
auto albedoImg = load_image(engine, gltf, gltf.images[imgIndex], true);
if (albedoImg.has_value())
{
materialResources.colorImage = *albedoImg;
// Track for cleanup using a unique key
std::string key = std::string("albedo_") + mat.name.c_str() + "_" + std::to_string(imgIndex);
file.images[key] = *albedoImg;
}
else
{
materialResources.colorImage = images[imgIndex];
}
}
else
{
materialResources.colorImage = engine->_errorCheckerboardImage;
}
materialResources.colorSampler = hasSampler ? file.samplers[sampler]
: engine->_samplerManager->defaultLinear();
}
// Metallic-Roughness texture
if (mat.pbrData.metallicRoughnessTexture.has_value())
{
const auto &tex = gltf.textures[mat.pbrData.metallicRoughnessTexture.value().textureIndex];
size_t imgIndex = tex.imageIndex.value();
bool hasSampler = tex.samplerIndex.has_value();
size_t sampler = hasSampler ? tex.samplerIndex.value() : SIZE_MAX;
if (imgIndex < images.size())
{
materialResources.metalRoughImage = images[imgIndex];
materialResources.metalRoughSampler = hasSampler ? file.samplers[sampler]
: engine->_samplerManager->defaultLinear();
}
}
// build material
newMat->data = engine->metalRoughMaterial.write_material(engine->_deviceManager->device(), passType, materialResources,
file.descriptorPool);
data_index++;
}
//< load_material
// Flush material constants buffer so GPU sees updated data on non-coherent memory
if (!gltf.materials.empty())
{
VkDeviceSize totalSize = sizeof(GLTFMetallic_Roughness::MaterialConstants) * gltf.materials.size();
vmaFlushAllocation(engine->_deviceManager->allocator(), file.materialDataBuffer.allocation, 0, totalSize);
}
// use the same vectors for all meshes so that the memory doesn't reallocate as
// often
std::vector<uint32_t> indices;
std::vector<Vertex> vertices;
for (fastgltf::Mesh &mesh: gltf.meshes)
{
std::shared_ptr<MeshAsset> newmesh = std::make_shared<MeshAsset>();
meshes.push_back(newmesh);
file.meshes[mesh.name.c_str()] = newmesh;
newmesh->name = mesh.name;
// clear the mesh arrays for each mesh; we don't want to merge them by mistake
indices.clear();
vertices.clear();
for (auto &&p: mesh.primitives)
{
GeoSurface newSurface;
newSurface.startIndex = (uint32_t) indices.size();
newSurface.count = (uint32_t) gltf.accessors[p.indicesAccessor.value()].count;
size_t initial_vtx = vertices.size();
// load indexes
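// indices are rebased by initial_vtx so every primitive of this mesh indexes into the shared vertex array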
{
fastgltf::Accessor &indexaccessor = gltf.accessors[p.indicesAccessor.value()];
indices.reserve(indices.size() + indexaccessor.count);
fastgltf::iterateAccessor<std::uint32_t>(gltf, indexaccessor,
[&](std::uint32_t idx) {
indices.push_back(idx + initial_vtx);
});
}
// load vertex positions
{
fastgltf::Accessor &posAccessor = gltf.accessors[p.findAttribute("POSITION")->second];
vertices.resize(vertices.size() + posAccessor.count);
fastgltf::iterateAccessorWithIndex<glm::vec3>(gltf, posAccessor,
[&](glm::vec3 v, size_t index) {
Vertex newvtx;
newvtx.position = v;
newvtx.normal = {1, 0, 0};
newvtx.color = glm::vec4{1.f};
newvtx.uv_x = 0;
newvtx.uv_y = 0;
vertices[initial_vtx + index] = newvtx;
});
}
// load vertex normals
auto normals = p.findAttribute("NORMAL");
if (normals != p.attributes.end())
{
fastgltf::iterateAccessorWithIndex<glm::vec3>(gltf, gltf.accessors[(*normals).second],
[&](glm::vec3 v, size_t index) {
vertices[initial_vtx + index].normal = v;
});
}
// load UVs
auto uv = p.findAttribute("TEXCOORD_0");
if (uv != p.attributes.end())
{
fastgltf::iterateAccessorWithIndex<glm::vec2>(gltf, gltf.accessors[(*uv).second],
[&](glm::vec2 v, size_t index) {
vertices[initial_vtx + index].uv_x = v.x;
vertices[initial_vtx + index].uv_y = v.y;
});
}
// load vertex colors
auto colors = p.findAttribute("COLOR_0");
if (colors != p.attributes.end())
{
fastgltf::iterateAccessorWithIndex<glm::vec4>(gltf, gltf.accessors[(*colors).second],
[&](glm::vec4 v, size_t index) {
vertices[initial_vtx + index].color = v;
});
}
if (p.materialIndex.has_value())
{
newSurface.material = materials[p.materialIndex.value()];
}
else
{
newSurface.material = materials[0];
}
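// compute a local-space AABB over this primitive's vertices, then a conservative bounding sphere (useful for culling)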
glm::vec3 minpos = vertices[initial_vtx].position;
glm::vec3 maxpos = vertices[initial_vtx].position;
for (size_t i = initial_vtx; i < vertices.size(); i++)
{
minpos = glm::min(minpos, vertices[i].position);
maxpos = glm::max(maxpos, vertices[i].position);
}
newSurface.bounds.origin = (maxpos + minpos) / 2.f;
newSurface.bounds.extents = (maxpos - minpos) / 2.f;
newSurface.bounds.sphereRadius = glm::length(newSurface.bounds.extents);
newmesh->surfaces.push_back(newSurface);
}
newmesh->meshBuffers = engine->_resourceManager->uploadMesh(indices, vertices);
}
//> load_nodes
// load all nodes and their meshes
for (fastgltf::Node &node: gltf.nodes)
{
std::shared_ptr<Node> newNode;
// if the node has a mesh, allocate it as a MeshNode and hook up the mesh pointer
if (node.meshIndex.has_value())
{
newNode = std::make_shared<MeshNode>();
static_cast<MeshNode *>(newNode.get())->mesh = meshes[*node.meshIndex];
}
else
{
newNode = std::make_shared<Node>();
}
nodes.push_back(newNode);
file.nodes[node.name.c_str()] = newNode;
std::visit(fastgltf::visitor{
[&](fastgltf::Node::TransformMatrix matrix) {
memcpy(&newNode->localTransform, matrix.data(), sizeof(matrix));
},
[&](fastgltf::Node::TRS transform) {
glm::vec3 tl(transform.translation[0], transform.translation[1],
transform.translation[2]);
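// glTF stores rotation as (x, y, z, w); glm::quat's constructor takes (w, x, y, z), hence the reordering below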
glm::quat rot(transform.rotation[3], transform.rotation[0], transform.rotation[1],
transform.rotation[2]);
glm::vec3 sc(transform.scale[0], transform.scale[1], transform.scale[2]);
glm::mat4 tm = glm::translate(glm::mat4(1.f), tl);
glm::mat4 rm = glm::toMat4(rot);
glm::mat4 sm = glm::scale(glm::mat4(1.f), sc);
newNode->localTransform = tm * rm * sm;
}
},
node.transform);
}
//< load_nodes
//> load_graph
// run the loop again to set up the transform hierarchy
for (int i = 0; i < gltf.nodes.size(); i++)
{
fastgltf::Node &node = gltf.nodes[i];
std::shared_ptr<Node> &sceneNode = nodes[i];
for (auto &c: node.children)
{
sceneNode->children.push_back(nodes[c]);
nodes[c]->parent = sceneNode;
}
}
// find the top nodes, with no parents
for (auto &node: nodes)
{
if (node->parent.lock() == nullptr)
{
file.topNodes.push_back(node);
node->refreshTransform(glm::mat4{1.f});
}
}
return scene;
//< load_graph
}
void LoadedGLTF::Draw(const glm::mat4 &topMatrix, DrawContext &ctx)
{
// create renderables from the scene nodes
for (auto &n: topNodes)
{
n->Draw(topMatrix, ctx);
}
}
void LoadedGLTF::clearAll()
{
VkDevice dv = creator->_deviceManager->device();
for (auto &[k, v]: meshes)
{
creator->_resourceManager->destroy_buffer(v->meshBuffers.indexBuffer);
creator->_resourceManager->destroy_buffer(v->meshBuffers.vertexBuffer);
}
for (auto &[k, v]: images)
{
if (v.image == creator->_errorCheckerboardImage.image)
{
// don't destroy the default images
continue;
}
creator->_resourceManager->destroy_image(v);
}
for (auto &sampler: samplers)
{
vkDestroySampler(dv, sampler, nullptr);
}
descriptorPool.destroy_pools(dv);
creator->_resourceManager->destroy_buffer(materialDataBuffer);
}

72
src/scene/vk_loader.h Normal file
View File

@@ -0,0 +1,72 @@
// vk_loader.h : glTF loading types (meshes, materials, nodes) and the loadGltf entry point.
#pragma once
#include <core/vk_types.h>
#include "core/vk_descriptors.h"
#include <unordered_map>
#include <filesystem>
class VulkanEngine;
struct Bounds
{
glm::vec3 origin;
float sphereRadius;
glm::vec3 extents;
};
struct GLTFMaterial
{
MaterialInstance data;
};
struct GeoSurface
{
uint32_t startIndex;
uint32_t count;
Bounds bounds;
std::shared_ptr<GLTFMaterial> material;
};
struct MeshAsset
{
std::string name;
std::vector<GeoSurface> surfaces;
GPUMeshBuffers meshBuffers;
};
struct LoadedGLTF : public IRenderable
{
// storage for all the data on a given gltf file
std::unordered_map<std::string, std::shared_ptr<MeshAsset> > meshes;
std::unordered_map<std::string, std::shared_ptr<Node> > nodes;
std::unordered_map<std::string, AllocatedImage> images;
std::unordered_map<std::string, std::shared_ptr<GLTFMaterial> > materials;
// nodes that don't have a parent, for iterating through the file in tree order
std::vector<std::shared_ptr<Node> > topNodes;
std::vector<VkSampler> samplers;
DescriptorAllocatorGrowable descriptorPool;
AllocatedBuffer materialDataBuffer;
VulkanEngine *creator;
~LoadedGLTF() { clearAll(); }
void clearMeshes() { clearAll(); } // note: currently releases all GLTF resources, not only the meshes
virtual void Draw(const glm::mat4 &topMatrix, DrawContext &ctx);
private:
void clearAll();
};
std::optional<std::shared_ptr<LoadedGLTF> > loadGltf(VulkanEngine *engine, std::string_view filePath);
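// Usage sketch (illustrative only; `engine`, `sceneManager`, `drawContext`, and the asset path are assumed to exist in the caller):
//
//   auto gltf = loadGltf(engine, "assets/structure.glb");
//   if (gltf.has_value())
//   {
//       // keep the shared_ptr alive for as long as the scene should render;
//       // ~LoadedGLTF() calls clearAll() and frees the GPU buffers, images and samplers
//       sceneManager.loadScene("structure", *gltf);
//
//       // per frame, emit RenderObjects through the IRenderable interface:
//       (*gltf)->Draw(glm::mat4{1.f}, drawContext);
//   }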

187
src/scene/vk_scene.cpp Normal file
View File

@@ -0,0 +1,187 @@
#include "vk_scene.h"
#include <utility>
#include "vk_swapchain.h"
#include "core/engine_context.h"
#include "glm/gtx/transform.hpp"
#include <glm/gtc/matrix_transform.hpp>
#include "glm/gtx/norm.inl"
void SceneManager::init(EngineContext *context)
{
_context = context;
mainCamera.velocity = glm::vec3(0.f);
mainCamera.position = glm::vec3(30.f, 0.f, 85.f);
mainCamera.pitch = 0;
mainCamera.yaw = 0;
sceneData.ambientColor = glm::vec4(0.1f, 0.1f, 0.1f, 1.0f);
sceneData.sunlightDirection = glm::vec4(-1.0f, -1.0f, -1.0f, 1.0f);
sceneData.sunlightColor = glm::vec4(1.0f, 1.0f, 1.0f, 3.0f);
}
void SceneManager::update_scene()
{
auto start = std::chrono::system_clock::now();
mainDrawContext.OpaqueSurfaces.clear();
mainDrawContext.TransparentSurfaces.clear();
mainCamera.update();
if (loadedScenes.find("structure") != loadedScenes.end())
{
loadedScenes["structure"]->Draw(glm::mat4{1.f}, mainDrawContext);
}
// dynamic GLTF instances
for (const auto &kv: dynamicGLTFInstances)
{
const GLTFInstance &inst = kv.second;
if (inst.scene)
{
inst.scene->Draw(inst.transform, mainDrawContext);
}
}
// Default primitives are added as dynamic instances by the engine.
// dynamic mesh instances
for (const auto &kv: dynamicMeshInstances)
{
const MeshInstance &inst = kv.second;
if (!inst.mesh || inst.mesh->surfaces.empty()) continue;
for (const auto &surf: inst.mesh->surfaces)
{
RenderObject obj{};
obj.indexCount = surf.count;
obj.firstIndex = surf.startIndex;
obj.indexBuffer = inst.mesh->meshBuffers.indexBuffer.buffer;
obj.vertexBuffer = inst.mesh->meshBuffers.vertexBuffer.buffer;
obj.vertexBufferAddress = inst.mesh->meshBuffers.vertexBufferAddress;
obj.material = &surf.material->data;
obj.bounds = surf.bounds;
obj.transform = inst.transform;
if (obj.material->passType == MaterialPass::Transparent)
{
mainDrawContext.TransparentSurfaces.push_back(obj);
}
else
{
mainDrawContext.OpaqueSurfaces.push_back(obj);
}
}
}
glm::mat4 view = mainCamera.getViewMatrix();
// Use reversed infinite-Z projection (right-handed, -Z forward) to avoid far-plane clipping
// on very large scenes. Vulkan clip space is 0..1 (GLM_FORCE_DEPTH_ZERO_TO_ONE) and requires Y flip.
auto makeReversedInfinitePerspective = [](float fovyRadians, float aspect, float zNear) {
// Column-major matrix; indices are [column][row]
float f = 1.0f / tanf(fovyRadians * 0.5f);
glm::mat4 m(0.0f);
m[0][0] = f / aspect;
m[1][1] = f;
m[2][2] = 0.0f;
m[2][3] = -1.0f; // w = -z_eye (right-handed)
m[3][2] = zNear; // maps near -> 1, far -> 0 (reversed-Z)
return m;
};
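// Sanity check of the matrix above: clip.z = zNear and clip.w = -z_eye, so depth = zNear / -z_eye.
// At z_eye = -zNear the depth is 1.0, and as z_eye goes to -infinity the depth approaches 0.0 (reversed-Z, no far plane).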
const float fov = glm::radians(70.f);
const float aspect = (float) _context->getSwapchain()->windowExtent().width /
(float) _context->getSwapchain()->windowExtent().height;
const float nearPlane = 0.1f;
glm::mat4 projection = makeReversedInfinitePerspective(fov, aspect, nearPlane);
// Vulkan NDC has inverted Y.
projection[1][1] *= -1.0f;
sceneData.view = view;
sceneData.proj = projection;
sceneData.viewproj = projection * view;
// Build a simple directional light view-projection (orthographic) for shadow rendering
// Centered around the camera for now (non-cascaded, non-stabilized)
{
const glm::vec3 camPos = glm::vec3(glm::inverse(view)[3]);
glm::vec3 sunDir = -glm::vec3(sceneData.sunlightDirection);
if (glm::length2(sunDir) < 1e-8f) sunDir = glm::vec3(0.0f, -1.0f, 0.0f); // guard against a zero direction before normalizing
const glm::vec3 L = glm::normalize(sunDir);
const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
glm::vec3 rightCross = glm::cross(worldUp, L);
if (glm::length2(rightCross) < 1e-6f)
{
rightCross = glm::vec3(1, 0, 0); // L is (anti)parallel to worldUp
}
const glm::vec3 right = glm::normalize(rightCross);
const glm::vec3 up = glm::normalize(glm::cross(L, right));
const float orthoRange = 40.0f; // XY half-extent
const float nearDist = 0.1f;
const float farDist = 200.0f;
const glm::vec3 lightPos = camPos - L * 100.0f;
glm::mat4 viewLight = glm::lookAtRH(lightPos, camPos, up);
// Standard RH ZO ortho with near < far (depth runs near -> 0, far -> 1; no reversed-Z flip is applied here)
glm::mat4 projLight = glm::orthoRH_ZO(-orthoRange, orthoRange, -orthoRange, orthoRange,
nearDist, farDist);
sceneData.lightViewProj = projLight * viewLight;
}
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
stats.scene_update_time = elapsed.count() / 1000.f;
}
void SceneManager::loadScene(const std::string &name, std::shared_ptr<LoadedGLTF> scene)
{
loadedScenes[name] = std::move(scene);
}
std::shared_ptr<LoadedGLTF> SceneManager::getScene(const std::string &name)
{
auto it = loadedScenes.find(name);
return (it != loadedScenes.end()) ? it->second : nullptr;
}
void SceneManager::cleanup()
{
loadedScenes.clear();
loadedNodes.clear();
}
void SceneManager::addMeshInstance(const std::string &name, std::shared_ptr<MeshAsset> mesh, const glm::mat4 &transform)
{
if (!mesh) return;
dynamicMeshInstances[name] = MeshInstance{std::move(mesh), transform};
}
bool SceneManager::removeMeshInstance(const std::string &name)
{
return dynamicMeshInstances.erase(name) > 0;
}
void SceneManager::clearMeshInstances()
{
dynamicMeshInstances.clear();
}
void SceneManager::addGLTFInstance(const std::string &name, std::shared_ptr<LoadedGLTF> scene,
const glm::mat4 &transform)
{
if (!scene) return;
dynamicGLTFInstances[name] = GLTFInstance{std::move(scene), transform};
}
bool SceneManager::removeGLTFInstance(const std::string &name)
{
return dynamicGLTFInstances.erase(name) > 0;
}
void SceneManager::clearGLTFInstances()
{
dynamicGLTFInstances.clear();
}
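// Usage sketch (illustrative only; `sceneManager`, `cubeMesh`, and `cityScene` are assumed handles, not part of this file):
//
//   // spawn a runtime mesh instance and a whole glTF scene, each with its own transform
//   sceneManager.addMeshInstance("debug_cube", cubeMesh, glm::translate(glm::mat4{1.f}, glm::vec3(0.f, 2.f, 0.f)));
//   sceneManager.addGLTFInstance("city", cityScene, glm::scale(glm::mat4{1.f}, glm::vec3(0.1f)));
//
//   // update_scene() appends their RenderObjects to the opaque/transparent lists every frame;
//   // remove them by name once they are no longer needed:
//   sceneManager.removeMeshInstance("debug_cube");
//   sceneManager.removeGLTFInstance("city");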

87
src/scene/vk_scene.h Normal file
View File

@@ -0,0 +1,87 @@
#pragma once
#include <core/vk_types.h>
#include <scene/camera.h>
#include <unordered_map>
#include <memory>
#include "scene/vk_loader.h"
class EngineContext;
struct RenderObject
{
uint32_t indexCount;
uint32_t firstIndex;
VkBuffer indexBuffer;
VkBuffer vertexBuffer; // for RG buffer tracking (device-address path still used in shader)
MaterialInstance *material;
Bounds bounds;
glm::mat4 transform;
VkDeviceAddress vertexBufferAddress;
};
struct DrawContext
{
std::vector<RenderObject> OpaqueSurfaces;
std::vector<RenderObject> TransparentSurfaces;
};
class SceneManager
{
public:
void init(EngineContext *context);
void cleanup();
void update_scene();
Camera &getMainCamera() { return mainCamera; }
const GPUSceneData &getSceneData() const { return sceneData; }
DrawContext &getMainDrawContext() { return mainDrawContext; }
void loadScene(const std::string &name, std::shared_ptr<LoadedGLTF> scene);
std::shared_ptr<LoadedGLTF> getScene(const std::string &name);
// Dynamic renderables API
struct MeshInstance
{
std::shared_ptr<MeshAsset> mesh;
glm::mat4 transform{1.f};
};
void addMeshInstance(const std::string &name, std::shared_ptr<MeshAsset> mesh,
const glm::mat4 &transform = glm::mat4(1.f));
bool removeMeshInstance(const std::string &name);
void clearMeshInstances();
// GLTF instances (runtime-spawned scenes with transforms)
struct GLTFInstance
{
std::shared_ptr<LoadedGLTF> scene;
glm::mat4 transform{1.f};
};
void addGLTFInstance(const std::string &name, std::shared_ptr<LoadedGLTF> scene,
const glm::mat4 &transform = glm::mat4(1.f));
bool removeGLTFInstance(const std::string &name);
void clearGLTFInstances();
struct SceneStats
{
float scene_update_time = 0.f;
} stats;
private:
EngineContext *_context = nullptr;
Camera mainCamera = {};
GPUSceneData sceneData = {};
DrawContext mainDrawContext;
std::unordered_map<std::string, std::shared_ptr<LoadedGLTF> > loadedScenes;
std::unordered_map<std::string, std::shared_ptr<Node> > loadedNodes;
std::unordered_map<std::string, MeshInstance> dynamicMeshInstances;
std::unordered_map<std::string, GLTFInstance> dynamicGLTFInstances;
};

2
src/vma_impl.cpp Normal file
View File

@@ -0,0 +1,2 @@
#define VMA_IMPLEMENTATION
#include <vk_mem_alloc.h>

52
third_party/CMakeLists.txt vendored Normal file
View File

@@ -0,0 +1,52 @@
find_package(Vulkan REQUIRED)
add_library(vkbootstrap STATIC)
add_library(glm INTERFACE)
add_library(vma INTERFACE)
add_library(stb_image INTERFACE)
add_subdirectory(fastgltf)
add_subdirectory(fmt EXCLUDE_FROM_ALL)
add_subdirectory(SDL EXCLUDE_FROM_ALL)
target_sources(vkbootstrap PRIVATE
vkbootstrap/VkBootstrap.h
vkbootstrap/VkBootstrap.cpp
)
target_include_directories(vkbootstrap PUBLIC vkbootstrap)
target_link_libraries(vkbootstrap PUBLIC Vulkan::Vulkan $<$<BOOL:${UNIX}>:${CMAKE_DL_LIBS}>)
set_property(TARGET vkbootstrap PROPERTY CXX_STANDARD 20)
# vma and glm are header-only libs, so we only need the include paths
target_include_directories(vma INTERFACE vma)
target_include_directories(glm INTERFACE glm)
#add_library(sdl2 INTERFACE)
#target_include_directories(sdl2 INTERFACE $ENV{VULKAN_SDK}/Include/SDL2 )
#target_link_directories(sdl2 INTERFACE $ENV{VULKAN_SDK}/Lib )
#target_link_libraries(sdl2 INTERFACE SDL2 SDL2main)
add_library(imgui STATIC)
target_include_directories(imgui PUBLIC imgui)
target_sources(imgui PRIVATE
imgui/imgui.h
imgui/imgui.cpp
imgui/imgui_demo.cpp
imgui/imgui_draw.cpp
imgui/imgui_widgets.cpp
imgui/imgui_tables.cpp
imgui/imgui_impl_vulkan.cpp
imgui/imgui_impl_sdl2.cpp
)
target_link_libraries(imgui PUBLIC Vulkan::Vulkan SDL2::SDL2)
target_include_directories(stb_image INTERFACE stb_image)

90
third_party/SDL/.clang-format vendored Normal file
View File

@@ -0,0 +1,90 @@
---
AlignConsecutiveMacros: Consecutive
AlignConsecutiveAssignments: None
AlignConsecutiveBitFields: None
AlignConsecutiveDeclarations: None
AlignEscapedNewlines: Right
AlignOperands: Align
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortIfStatementsOnASingleLine: Never
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: MultiLine
# Custom brace breaking
BreakBeforeBraces: Custom
BraceWrapping:
AfterCaseLabel: true
AfterClass: true
AfterControlStatement: Never
AfterEnum: true
AfterFunction: true
AfterNamespace: true
AfterObjCDeclaration: true
AfterStruct: true
AfterUnion: true
AfterExternBlock: false
BeforeElse: false
BeforeWhile: false
IndentBraces: false
SplitEmptyFunction: true
SplitEmptyRecord: true
# Make the closing brace of container literals go to a new line
Cpp11BracedListStyle: false
# Never format includes
IncludeBlocks: Preserve
# clang-format version 4.0 through 12.0:
#SortIncludes: false
# clang-format version 13.0+:
#SortIncludes: Never
# No length limit, in case it breaks macros, you can
# disable it with /* clang-format off/on */ comments
ColumnLimit: 0
IndentWidth: 4
ContinuationIndentWidth: 4
IndentCaseLabels: false
IndentCaseBlocks: false
IndentGotoLabels: true
IndentPPDirectives: None
IndentExternBlock: NoIndent
PointerAlignment: Right
SpaceAfterCStyleCast: false
SpacesInCStyleCastParentheses: false
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCaseColon: false
SpaceBeforeParens: ControlStatements
SpaceAroundPointerQualifiers: Default
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
UseCRLF: false
UseTab: Never
ForEachMacros:
[
"spa_list_for_each",
"spa_list_for_each_safe",
"wl_list_for_each",
"wl_array_for_each",
"udev_list_entry_foreach",
]
---

79
third_party/SDL/.editorconfig vendored Normal file
View File

@@ -0,0 +1,79 @@
# For format see editorconfig.org
# Copyright 2022 Collabora Ltd.
# SPDX-License-Identifier: Zlib
root = true
[*.{c,cg,cpp,gradle,h,java,m,metal,pl,py,S,sh,txt}]
indent_size = 4
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[*.{html,js,json,m4,yml,yaml,vcxproj,vcxproj.filters}]
indent_size = 2
indent_style = space
[*.xml]
indent_size = 4
indent_style = space
[{CMakeLists.txt,sdl2-config*.cmake.in,cmake/*.cmake}]
indent_size = 2
indent_style = space
[{cmake_uninstall.cmake.in,test/CMakeLists.txt}]
indent_size = 4
indent_style = space
[configure.ac]
# Inconsistently 2-, 4- or occasionally 3-space indented, but mostly 4,
# so let's use 4 for new code
indent_size = 4
indent_style = space
[{Makefile.*,*.mk,*.sln,*.pbxproj,*.plist}]
indent_size = 8
indent_style = tab
tab_width = 8
[Makefile.os2]
indent_size = 4
indent_style = space
[test/Makefile.os2]
indent_size = 2
indent_style = space
[{src/core/os2/geniconv/makefile,src/core/os2/geniconv/os2cp.c}]
indent_size = 2
indent_style = space
[src/joystick/controller_type.*]
indent_style = tab
[src/joystick/hidapi/steam/*.h]
indent_style = tab
[src/libm/*.c]
indent_style = tab
[src/test/SDL_test_{crc32,md5,random}.c]
indent_size = 2
indent_style = space
[src/video/yuv2rgb/*.{c,h}]
indent_style = tab
[wayland-protocols/*.xml]
indent_size = 2
indent_style = space
[*.{markdown,md}]
indent_size = 4
indent_style = space
# Markdown syntax treats tabs as 4 spaces
tab_width = 4
[{*.bat,*.rc}]
end_of_line = crlf

View File

@@ -0,0 +1,7 @@
<!--- Provide a general summary of your changes in the Title above -->
## Description
<!--- Describe your changes in detail -->
## Existing Issue(s)
<!--- If it fixes an open issue, please link to the issue here. -->

View File

@@ -0,0 +1,16 @@
cmake_minimum_required(VERSION 3.0...3.5)
project(ci_utils C CXX)
set(txt "CC=${CMAKE_C_COMPILER}
CXX=${CMAKE_CXX_COMPILER}
CFLAGS=${CMAKE_C_FLAGS}
CXXFLAGS=${CMAKE_CXX_FLAGS}
LDFLAGS=${CMAKE_EXE_LINKER_FLAGS} ${CMAKE_C_STANDARD_LIBRARIES}
")
message("${txt}")
set(VAR_PATH "/tmp/env.txt" CACHE PATH "Where to write environment file")
message(STATUS "Writing CC/CXX/CFLAGS/CXXFLAGS/LDFLAGS environment to ${VAR_PATH}")
file(WRITE "${VAR_PATH}" "${txt}")

View File

@@ -0,0 +1,81 @@
name: Build (Android)
on: [push, pull_request]
jobs:
android:
name: ${{ matrix.platform.name }}
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
platform:
- { name: Android.mk }
- { name: CMake, cmake: 1, android_abi: "arm64-v8a", android_platform: 23, arch: "aarch64" }
steps:
- uses: actions/checkout@v3
- uses: nttld/setup-ndk@v1
id: setup_ndk
with:
ndk-version: r21e
- name: Build (Android.mk)
if: ${{ matrix.platform.name == 'Android.mk' }}
run: |
./build-scripts/androidbuildlibs.sh
- name: Setup (CMake)
if: ${{ matrix.platform.name == 'CMake' }}
run: |
sudo apt-get update
sudo apt-get install ninja-build pkg-config
- name: Configure (CMake)
if: ${{ matrix.platform.name == 'CMake' }}
run: |
cmake -B build \
-DCMAKE_TOOLCHAIN_FILE=${{ steps.setup_ndk.outputs.ndk-path }}/build/cmake/android.toolchain.cmake \
-DSDL_WERROR=ON \
-DANDROID_PLATFORM=${{ matrix.platform.android_platform }} \
-DANDROID_ABI=${{ matrix.platform.android_abi }} \
-DSDL_STATIC_PIC=ON \
-DSDL_VENDOR_INFO="Github Workflow" \
-DCMAKE_INSTALL_PREFIX=prefix \
-DCMAKE_BUILD_TYPE=Release \
-GNinja
- name: Build (CMake)
if: ${{ matrix.platform.name == 'CMake' }}
run: |
cmake --build build --config Release --parallel --verbose
- name: Install (CMake)
if: ${{ matrix.platform.name == 'CMake' }}
run: |
cmake --install build --config Release
echo "SDL2_DIR=$(pwd)/prefix" >> $GITHUB_ENV
( cd prefix; find ) | LC_ALL=C sort -u
- name: Verify CMake configuration files
if: ${{ matrix.platform.name == 'CMake' }}
run: |
cmake -S cmake/test -B cmake_config_build -G Ninja \
-DCMAKE_TOOLCHAIN_FILE=${{ steps.setup_ndk.outputs.ndk-path }}/build/cmake/android.toolchain.cmake \
-DANDROID_PLATFORM=${{ matrix.platform.android_platform }} \
-DANDROID_ABI=${{ matrix.platform.android_abi }} \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH=${{ env.SDL2_DIR }}
cmake --build cmake_config_build --verbose
- name: Verify sdl2-config
if: ${{ matrix.platform.name == 'CMake' }}
run: |
export CC="${{ steps.setup_ndk.outputs.ndk-path }}/toolchains/llvm/prebuilt/linux-x86_64/bin/clang --target=${{ matrix.platform.arch }}-none-linux-androideabi${{ matrix.platform.android_platform }}"
export PATH=${{ env.SDL2_DIR }}/bin:$PATH
cmake/test/test_sdlconfig.sh
- name: Verify sdl2.pc
if: ${{ matrix.platform.name == 'CMake' }}
run: |
export CC="${{ steps.setup_ndk.outputs.ndk-path }}/toolchains/llvm/prebuilt/linux-x86_64/bin/clang --target=${{ matrix.platform.arch }}-none-linux-androideabi${{ matrix.platform.android_platform }}"
export PKG_CONFIG_PATH=${{ env.SDL2_DIR }}/lib/pkgconfig
cmake/test/test_pkgconfig.sh
- name: Verify Android.mk
if: ${{ matrix.platform.name == 'CMake' }}
run: |
export NDK_MODULE_PATH=${{ env.SDL2_DIR }}/share/ndk-modules
ndk-build -C ${{ github.workspace }}/cmake/test APP_PLATFORM=android-${{ matrix.platform.android_platform }} APP_ABI=${{ matrix.platform.android_abi }} NDK_OUT=$PWD NDK_LIBS_OUT=$PWD V=1

Some files were not shown because too many files have changed in this diff.