Orillusion is a lightweight rendering engine that fully supports the WebGPU standard. Building on this latest Web graphics API, we have explored and implemented many techniques and features that were previously difficult or impossible to achieve on the web. The engine's architecture and feature characteristics are summarized below.

WebGPU Support

The engine's underlying design follows the latest WebGPU standard throughout, with no compatibility layer for the existing WebGL standard. As the WebGPU API and WGSL continue to evolve, we will rapidly iterate the engine's underlying compute and rendering capabilities to extend its performance advantages.

ECS Component-Based System

As engine frameworks have matured, the industry has broadly adopted the design principle of composition over inheritance. We have therefore abandoned an inheritance-based architecture and chosen an ECS (Entity-Component-System) component-based architecture as the engine's core design philosophy. By eliminating the deep inheritance chains and entangled functionality of the inheritance model, and redesigning around decoupling, encapsulation, and modularity, developers can flexibly combine and extend functionality.
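The composition-over-inheritance idea can be sketched in a few lines of TypeScript. This is a minimal illustration, not Orillusion's actual API: an entity is just a container of components, and behavior lives in systems that operate on whatever components are present. The names `Entity`, `Transform`, and `RotateSystem` are illustrative assumptions.

```typescript
// Minimal ECS sketch: entities hold data components; systems hold behavior.
type ComponentCtor<T> = new () => T;

class Entity {
  private components = new Map<Function, object>();

  addComponent<T extends object>(ctor: ComponentCtor<T>): T {
    const c = new ctor();
    this.components.set(ctor, c);
    return c;
  }

  getComponent<T extends object>(ctor: ComponentCtor<T>): T | undefined {
    return this.components.get(ctor) as T | undefined;
  }
}

// A component is pure data, with no inheritance chain behind it.
class Transform {
  rotationY = 0; // degrees
}

// A system updates every entity that carries the components it needs.
class RotateSystem {
  update(entities: Entity[], dt: number): void {
    for (const e of entities) {
      const t = e.getComponent(Transform);
      if (t) t.rotationY += 90 * dt; // 90 degrees per second
    }
  }
}

const cube = new Entity();
cube.addComponent(Transform);
new RotateSystem().update([cube], 0.5);
```

Because behavior is attached by adding components rather than by subclassing, new functionality composes freely with existing entities.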

Data-Oriented Design

A strict ECS architecture requires that Entity, Component, and System be completely separated. This design paradigm yields large gains in data layout and performance, but it also carries a significant cost: development becomes harder and more expensive. Considering ease of use and the habits of web developers, we have therefore adopted the core Data-Oriented (DO) concept from ECS and implemented an on-demand DO structure. In practice, we allocate contiguous memory on the GPU and use memory mapping for efficient data transfer between the CPU and GPU, reducing the frequency and wait time of CPU-GPU data exchange. This approach improves cache hit rates and performance while preserving the overall ease of development and use of the engine.
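The contiguous-memory idea can be sketched with a typed array acting as the CPU-side mirror of a GPU storage buffer. The layout (four floats per object) and the `TransformPool` name are illustrative assumptions, not the engine's actual data structures:

```typescript
// Data-oriented sketch: all per-object data lives in one contiguous
// Float32Array, so the whole pool can be uploaded to the GPU in a single
// buffer write instead of one small write per object.
const FLOATS_PER_OBJECT = 4; // e.g. x, y, z position + uniform scale

class TransformPool {
  readonly data: Float32Array;

  constructor(capacity: number) {
    // One allocation for every object: cache-friendly on the CPU,
    // and a direct match for a single GPUBuffer on the GPU side.
    this.data = new Float32Array(capacity * FLOATS_PER_OBJECT);
  }

  setPosition(index: number, x: number, y: number, z: number): void {
    const o = index * FLOATS_PER_OBJECT;
    this.data[o] = x;
    this.data[o + 1] = y;
    this.data[o + 2] = z;
  }
}

const pool = new TransformPool(1024);
pool.setPosition(2, 1, 2, 3); // object 2 lands at float offset 8
```

In a WebGPU renderer, `pool.data` would be written into a mapped storage buffer once per frame, which is the "efficient data transfer through memory mapping" described above.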

Clustered Light Culling

This is the light-culling scheme used in clustered forward rendering. Space is divided in two dimensions (tiles) and three dimensions (clusters), and only the light sources that contribute to each block of space are computed, removing ineffective lights and improving efficiency. Because of the size limits of WebGL's uniform buffers, the number of supported lights was small, usually fewer than 10. With the storage buffer introduced in WebGPU, the practical limit is essentially GPU memory itself; with careful memory management and optimization, the full power of the GPU can be used for many-light rendering.
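The core per-cluster test can be sketched as a sphere-vs-AABB overlap check: a point light contributes to a cluster only if its influence sphere touches the cluster's bounding box. The structures below are illustrative, not engine types:

```typescript
// Clustered light culling sketch: keep only the lights whose bounding
// sphere overlaps a given cluster's axis-aligned bounding box.
interface AABB { min: [number, number, number]; max: [number, number, number]; }
interface PointLight { pos: [number, number, number]; radius: number; }

// Squared distance from the light center to the box (clamp into the box),
// compared against the squared radius.
function sphereIntersectsAABB(light: PointLight, box: AABB): boolean {
  let d2 = 0;
  for (let i = 0; i < 3; i++) {
    const clamped = Math.min(Math.max(light.pos[i], box.min[i]), box.max[i]);
    const d = light.pos[i] - clamped;
    d2 += d * d;
  }
  return d2 <= light.radius * light.radius;
}

// Reduce the scene's light list to the lights one cluster must shade.
function cullForCluster(lights: PointLight[], box: AABB): PointLight[] {
  return lights.filter((l) => sphereIntersectsAABB(l, box));
}

const clusterBox: AABB = { min: [0, 0, 0], max: [1, 1, 1] };
const lights: PointLight[] = [
  { pos: [2, 0, 0], radius: 1.5 }, // nearest box point is (1,0,0): overlaps
  { pos: [5, 5, 5], radius: 1 },   // far outside: culled
];
const visible = cullForCluster(lights, clusterBox);
```

In the engine this test runs per cluster in a compute shader against a storage buffer of lights, which is what removes the old WebGL-era light-count ceiling.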

Physics Simulation System

We first integrated ammo.js as the basic physics simulation on the CPU side. In parallel, we are building a GPU-based physics simulation engine on top of compute shaders, covering particles, fluids, soft bodies, rigid bodies, cloth, and more. In the WebGL era, only vertex and texture data structures could be used to express such calculations, which was complex and inefficient. With WebGPU's compute shaders, memory and data structures are far more flexible, which opens up considerable room for imagination. Many strong physics simulation demos have already been implemented, and more powerful physics simulation features are being rapidly iterated.
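The kind of per-particle update a GPU particle system runs in a compute shader can be sketched on the CPU: each particle occupies a fixed stride in a flat array (mirroring a storage buffer), and the same semi-implicit Euler step would run once per GPU thread. The stride layout and constants are illustrative assumptions:

```typescript
// Particle integration sketch: one semi-implicit Euler step over a flat
// buffer, the CPU analogue of a per-thread compute shader dispatch.
const STRIDE = 6;      // px, py, pz, vx, vy, vz per particle
const GRAVITY = -9.8;  // m/s^2, applied on the y axis

function stepParticles(buf: Float32Array, dt: number): void {
  for (let i = 0; i < buf.length; i += STRIDE) {
    buf[i + 4] += GRAVITY * dt;    // integrate velocity first (semi-implicit)
    buf[i + 0] += buf[i + 3] * dt; // then integrate position from new velocity
    buf[i + 1] += buf[i + 4] * dt;
    buf[i + 2] += buf[i + 5] * dt;
  }
}

// One particle at rest at the origin, stepped by 0.1 s.
const particles = new Float32Array(STRIDE);
stepParticles(particles, 0.1);
```

On the GPU, `buf` becomes a storage buffer and the loop body becomes the compute shader entry point, with one invocation per particle.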

Physics-Based Material Rendering

We have implemented basic Blinn-Phong material rendering. To achieve more realistic results, we rely on HDR lighting and have also implemented material rendering based on PBR (physically based rendering). PBR is a standard feature of mainstream engines and a common baseline engine requirement.

Dynamic Diffuse Global Illumination (DDGI)

The DDGI (Dynamic Diffuse Global Illumination) algorithm is a probe-based global illumination algorithm. A large number of probes are placed in space and grouped together to form a DDGI volume. A compute shader calculates each probe's irradiance (light information) and G-buffer (geometry information), which are stored by mapping the sphere of directions onto an octahedron and then onto a square. At shading time, only the light and geometry information stored in the surrounding probes needs to be read to compute the shading of a point. Binding the volume to the camera so that it moves with the view applies indirect lighting to all objects inside the volume. Based on overall rendering performance, we currently cap the number of indirect light sources at 32.
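The sphere-to-octahedron-to-square mapping used for probe storage can be sketched as follows: a unit direction is projected onto the octahedron |x| + |y| + |z| = 1, and the lower hemisphere is then unfolded outward into the corners of the [-1, 1]² square, so every direction maps to a unique 2D texel coordinate. This is the standard octahedral encoding, shown here as an illustrative sketch:

```typescript
// Octahedral mapping sketch: unit direction -> point in the [-1, 1]^2 square.
type Vec3 = [number, number, number];

// Sign helper that treats 0 as positive, so boundary directions stay inside
// the square instead of collapsing to 0.
const sgn = (x: number): number => (x >= 0 ? 1 : -1);

function octEncode([x, y, z]: Vec3): [number, number] {
  // Project onto the octahedron |x| + |y| + |z| = 1.
  const s = Math.abs(x) + Math.abs(y) + Math.abs(z);
  let u = x / s;
  let v = y / s;
  // Unfold the z < 0 hemisphere outward toward the square's corners.
  if (z < 0) {
    const t = u;
    u = (1 - Math.abs(v)) * sgn(t);
    v = (1 - Math.abs(t)) * sgn(v);
  }
  return [u, v];
}

const up = octEncode([0, 0, 1]);   // +z maps to the center of the square
const side = octEncode([1, 0, 0]); // +x maps to an edge midpoint
```

At shading time the reverse lookup is cheap: a direction toward a probe becomes a 2D coordinate into that probe's irradiance texel block.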

Rich Post-Processing Effects

Post-processing effects are an important way to enhance the atmosphere of rendered content. Based on the compute shader of WebGPU, we have currently implemented commonly used post-processing effects such as HDR Bloom, Screen Space Reflections, and Ambient Occlusion. By relying on the general computing capabilities of WebGPU, we can more efficiently utilize the computational advantages of the GPU and achieve very good results.

For example, Screen Space Reflections (SSR) is a reflection effect computed entirely in screen space. Compared with planar reflections, it can produce reflections on any surface in the scene without additional draw calls, and it is a very popular real-time reflection technique. First, the reflection vector is computed for each pixel in screen space. Then, ray marching steps along that vector, testing at each step whether the ray's depth intersects the depth of the object stored in the depth buffer. Finally, the roughness is adjusted appropriately and the color at the intersection point is used as the reflection color to complete the shading. We implement this entire computation in WebGPU compute shaders, avoiding CPU cost, and the result is very good reflection quality in the browser.
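The depth-intersection test at the heart of the ray march can be sketched in one dimension: step the reflected ray through screen space, and stop at the first pixel where the ray has gone behind the surface recorded in the depth buffer. The 1D "depth buffer" and step parameters below are illustrative, not the engine's implementation:

```typescript
// SSR ray-march sketch: walk a ray across screen columns, comparing the
// ray's depth against the scene depth at each pixel (smaller = closer).
function rayMarch(
  depthBuffer: number[], // scene depth per screen column
  startX: number,
  startDepth: number,
  stepX: number,
  stepDepth: number,
  maxSteps: number
): number {
  let x = startX;
  let depth = startDepth;
  for (let i = 0; i < maxSteps; i++) {
    x += stepX;
    depth += stepDepth;
    if (x < 0 || x >= depthBuffer.length) break; // ray left the screen
    // Hit: the ray is now behind the surface stored at this pixel, so the
    // color at this pixel becomes the reflection color.
    if (depth >= depthBuffer[Math.floor(x)]) return Math.floor(x);
  }
  return -1; // miss: fall back to sky / environment color
}

// A "wall" at depth 3 starting at column 5; a ray starting at depth 1 and
// gaining 0.5 depth per pixel first passes behind it at column 5.
const hit = rayMarch([10, 10, 10, 10, 10, 3, 3, 3], 0, 1, 1, 0.5, 32);
```

In the real effect this runs per pixel in a compute shader, with the reflection vector derived from the normal buffer and the march performed in 2D screen coordinates.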

For more post-processing effects, please refer to PostEffects.

Released under the MIT License