Brahma is an innovative game engine written from scratch in C++, based on the NIPSYS64 framework I'm also working on. It is a 3D engine with a special lean toward retro-feel first-person games, intended for small studios and indie game development. It is peculiar in that it evolves differently from other modern engines, rejecting BSP-based portal systems, Z-buffering, floating-point coordinates, and most of the lame screen-space effects in favor of clever and efficient techniques, which potentially enables lots of things previously deemed impossible to implement in a realtime renderer. The engine is non-Euclidean capable to some degree; it also supports true displacement mapping (representing actual level geometry that affects game physics, not just a visual quirk) and correct non-opaque surface ordering. The engine is also carefully designed to be easy and convenient to develop for, yet versatile and adaptable to any needs.
I've always dreamed of making computer games powered by my own 3D engine. I've been learning computer programming since 1999, coding things inspired by my favorite games. My first attempts to create an engine were pointless garbage, although some of my results were promising, and I remember coming up with my own vector format and a simple editor back in 2001.
Things changed a lot in 2006 when I moved my development to Visual C++. I was a fan of the classic Build engine games such as Duke Nukem 3D and Shadow Warrior, and since I had some great ideas for my own games, I strove for independent game development using my own technology, which got its start in 2013. The first stone of the 3D part was laid in late 2017, when I tried to replicate the sector-based approach used by the Build engine.
I studied at a technical college for a while, but dropped out in 2012 to devote more time to projects and self-education. At the same time I felt the urge to learn about art, music, and design, becoming a rare type of artist/musician who also writes software for his own purposes.
The reason I've chosen to make a new engine is that I adhere to a different philosophy than the one prevalent in the game industry: I give the engineering part both aesthetic and pragmatic value, look for elegant ways to achieve more with simple means, and appreciate diversity and uniqueness. Besides that, I wish to learn how games are created from basic algorithmic structures, go back to the basics, and experiment with various approaches in order to discover gems that others have overlooked. While doing this, I have strong intuitions and views on which directions to prioritize. I believe that a stable high framerate, low input lag, and never seeing a loading screen matter more for delivering a great immersive gaming experience than complex shaders or a high polygon count, for instance. I'm also convinced that mesh support should not be limited to triangles, and that the collision model should normally stay on par with the visuals, avoiding simplified proxy models for collision handling in order to implement an authentic "what you see is what you interact with" approach. As a side note, when a small indie game is built on modern bloatware tech, it typically feels somewhat fake and generic and tends to use excessive disk space and memory, because the oldschool look and feel is only imitated rather than being such "under the hood". My engine is an example of an integrated toolset with facilities to automate content generation and management, making games take up less space, load quicker, and be easier to create at the same time.
The entire process of rendering a 3D scene is in fact rather sequential by nature, although it is commonly treated as a massively parallel task, which is exploited by dedicated graphics hardware whose specialized architecture performs many independent tasks more efficiently. Unless you are using a culling technique like Z-buffering, you can't draw an object until you have drawn everything behind it first. So, although a simple convex set of polygons can be filled in any order, the order in which a complex scene must be drawn is determined by its depth relations. As for the general hidden surface culling algorithm, it basically works by progressive disclosure of the parts of the level that are visible from the preceding render nodes. This too is a sequential, iterative task, no matter whether you choose a BSP tree or sectors. It turns out that only certain specialized types of rendering tasks are easily parallelized by dedicated ASIC hardware (e.g. vertex transformations, textured polygon fill, shading, postprocessing, or some forms of raytracing). The causal dependencies pervading the path from a bunch of 3D geometry data to a complete rendered frame make the whole only partly parallelizable, requiring careful effort to keep the execution units synchronized. This, along with the quest for maximum flexibility, is the main reason my renderer is implemented in software; any accelerated versions are also expected to use this renderer as a reference.
Despite being written in C++17, the code relies on generic programming rather than object-oriented programming, with a minimum of machine abstraction. Keeping the code as generic as possible helps greatly reduce its size (and consequently its bugs) and improves maintainability without sacrificing much speed. To ensure the best performance, I've developed special built-in facilities for fixed-point arithmetic based on look-up tables, which dramatically speed up the calculation of logarithms, square roots, and other functions while maintaining reasonable precision.
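To illustrate the general idea (not the engine's actual code), here is a minimal sketch of a table-driven fixed-point square root. The Q16.16 layout, the 256-entry table size, and all names here are my assumptions for the example; the real facilities may be organized differently.

```cpp
#include <cstdint>
#include <cmath>

// Q16.16 fixed-point value (a hypothetical layout for this sketch).
using fix32 = std::int32_t;
constexpr int FRAC_BITS = 16;

fix32 to_fix(double v)    { return (fix32)(v * (1 << FRAC_BITS)); }
double to_double(fix32 v) { return (double)v / (1 << FRAC_BITS); }

// 256-entry table of sqrt(x) for x in [1, 4), stored in Q16.16.
static fix32 sqrt_lut[256];

void init_sqrt_lut() {
    for (int i = 0; i < 256; ++i) {
        double x = 1.0 + 3.0 * i / 256.0;
        sqrt_lut[i] = to_fix(std::sqrt(x));
    }
}

// Square root of a Q16.16 value: normalize the argument into [1, 4)
// with even bit shifts (sqrt(x * 4^s) == sqrt(x) * 2^s), look up the
// mantissa, then undo half the shift.
fix32 fix_sqrt(fix32 v) {
    if (v <= 0) return 0;
    int shift = 0;
    while (v >= (4 << FRAC_BITS)) { v >>= 2; ++shift; }
    while (v <  (1 << FRAC_BITS)) { v <<= 2; --shift; }
    int idx = (int)(((std::int64_t)(v - (1 << FRAC_BITS)) * 256) / (3 << FRAC_BITS));
    fix32 r = sqrt_lut[idx < 255 ? idx : 255];
    return shift >= 0 ? (r << shift) : (r >> -shift);
}
```

The trade-off is a handful of shifts plus one table read instead of an iterative root-finding loop; precision is bounded by the table resolution, which is the "reasonable precision" compromise mentioned above.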
In early 2018, I learned how to scan sector-based maps like those in the Build engine and began experimenting with various CPU-based rendering techniques. As a result, in terms of visuals my engine extends far beyond DOS Build capabilities, allowing for true planar reflections, native sector-over-sector, free look, anti-aliasing, a limited form of HDR rendering, and a lot more neat things. And with multithreading support, the engine easily surpasses Build's performance on many-core processors. I have yet to do the physics part and proper 3D sound, though.
The map format is sector-based like the good ol' Build engine, but its principles have evolved far beyond it, with a ton of new features such as heightmapping, multiple planar reflections, and physically-based HDR. Offering support for a variety of rendering techniques not limited to flat triangles, the engine also forgoes the Z-buffer in favor of span records, which makes rendering very fast even without hardware acceleration (which normally requires trading flexibility for speed). With a high degree of module integration and transparency between engine parts, the Brahma engine is flexible and adjustable to various needs. With it, one can create very dynamic games with an oldschool look and feel.
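For readers unfamiliar with span records, here is a simplified sketch of the idea for a single scanline. Everything here (the `SpanLine` structure, the rebuild-based merge) is my own illustration under the assumption of front-to-back traversal; the engine's actual span structure is surely more elaborate.

```cpp
#include <algorithm>
#include <vector>

// Span-record visibility for one scanline: filled spans mark columns
// already drawn. Rendering front to back, each new polygon is clipped
// against them, so hidden pixels are skipped outright instead of being
// depth-tested one by one.
struct SpanLine {
    struct Span { int x0, x1; };      // half-open interval [x0, x1)
    std::vector<Span> filled;         // kept sorted and non-overlapping

    // Try to draw [x0, x1); returns the sub-spans that were actually
    // visible (not yet covered) and records the range as filled.
    std::vector<Span> insert(int x0, int x1) {
        std::vector<Span> visible;
        int cur = x0;
        for (const Span& s : filled) {
            if (s.x1 <= cur) continue;   // entirely left of what remains
            if (s.x0 >= x1) break;       // entirely right; list is sorted
            if (cur < s.x0) visible.push_back({cur, s.x0});
            cur = std::max(cur, s.x1);
            if (cur >= x1) break;
        }
        if (cur < x1) visible.push_back({cur, x1});

        // Merge [x0, x1) into the filled set (naive rebuild for clarity).
        std::vector<Span> out;
        Span n{x0, x1};
        bool placed = false;
        for (const Span& s : filled) {
            if (s.x1 < n.x0) out.push_back(s);
            else if (s.x0 > n.x1) {
                if (!placed) { out.push_back(n); placed = true; }
                out.push_back(s);
            } else {
                n.x0 = std::min(n.x0, s.x0);
                n.x1 = std::max(n.x1, s.x1);
            }
        }
        if (!placed) out.push_back(n);
        filled = std::move(out);
        return visible;
    }
};
```

Once every column of a scanline is covered, the whole line can be skipped for the rest of the frame, which is where the speed advantage over per-pixel Z-testing comes from.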
Right now I'm developing a working prototype that features a renderer fully implemented in software (thus not restricted to specific hardware features), somewhat inspired by Ken Silverman's Build engine but written from scratch. Further iterations will bring even more capability and speed through integration with CUDA, Direct2D, ASIO, and other powerful APIs. Eventually the engine (along with the underlying framework) could be ported to other environments based on the x86-64 ISA if there's enough interest.
The Z-buffering technique has been used in realtime 3D computer graphics for decades and has proved itself a viable way to solve the visibility problem at the level of individual pixels. Since the introduction of hardware 3D acceleration, the mainstream gaming industry has adopted this method for Z-culling, which can't be done consistently with a simple sort of polygons by distance (the so-called painter's algorithm). Remembering a depth value for each screen pixel also enables lots of useful post-effects, such as the screen-space reflections and ambient occlusion modern engines benefit from. However, as only one depth value is stored per pixel, the workability of the Z-buffer is limited to opaque geometry, and this turns out to be a major downside, as various non-opaque stuff becomes increasingly common in games.
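The per-pixel mechanism can be boiled down to a few lines. This is a generic illustration of Z-buffering, not the Brahma renderer (which avoids the technique altogether); the `ZFrame` type and its members are made up for the example.

```cpp
#include <limits>
#include <vector>

// Minimal Z-buffered framebuffer: one depth value per pixel decides
// visibility.
struct ZFrame {
    int w, h;
    std::vector<float> depth;      // smaller = closer to the camera
    std::vector<unsigned> color;
    ZFrame(int w_, int h_)
        : w(w_), h(h_),
          depth(w_ * h_, std::numeric_limits<float>::infinity()),
          color(w_ * h_, 0) {}

    // Plot a fragment only if it is nearer than what is stored.
    // With a single depth per pixel there is no way to keep several
    // translucent layers apart -- the limitation described above.
    void plot(int x, int y, float z, unsigned c) {
        int i = y * w + x;
        if (z < depth[i]) { depth[i] = z; color[i] = c; }
    }
};
```

Note that the test in `plot` only ever keeps the nearest fragment; a translucent surface behind it is simply discarded rather than blended, which is exactly why transparency needs special handling.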
To reduce the overdraw of opaque objects, one should normally sort the polygons from nearest to furthest. The idea is that pixels behind ones already drawn will be discarded by Z-testing, so the earlier we plot the closest pixels, the more hidden stuff we discard subsequently. However, since non-opaque polygons can be seen through, they must be drawn in far-to-near order after all the opaque geometry is done, to minimize unwanted artifacts. And in cases of overlapping or intersecting non-opaque (translucent or alpha-channeled) objects, you're likely to get multiple depth conflicts within the same pixel, and that's where conventional Z-buffering always fails. Imagine a lengthy alpha-channeled projectile flying through an alpha-channeled obstacle such that only part of it is closer to the viewer. There are workarounds such as depth peeling, which requires two Z-buffers and takes multiple passes to render everything behind transparent polygons separately; the more layers of transparency you have, the more passes it takes to yield the correct look. Needless to say how inefficient this can turn out for sufficiently complex scenes.
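The conventional two-pass submission order described above can be sketched as follows; the `DrawItem` type and its single `depth` key are hypothetical placeholders for whatever a real pipeline submits.

```cpp
#include <algorithm>
#include <vector>

// Conventional submission order for a Z-buffered pipeline: opaque
// geometry near to far (so early Z-testing rejects as much hidden work
// as possible), then non-opaque geometry far to near for blending.
struct DrawItem { float depth; bool opaque; };

std::vector<DrawItem> submission_order(const std::vector<DrawItem>& items) {
    std::vector<DrawItem> opaque, blended;
    for (const DrawItem& it : items)
        (it.opaque ? opaque : blended).push_back(it);
    std::sort(opaque.begin(), opaque.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.depth < b.depth; });
    std::sort(blended.begin(), blended.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.depth > b.depth; });
    opaque.insert(opaque.end(), blended.begin(), blended.end());
    return opaque;
}
```

Even with this order, the intersecting-projectile case above still breaks: a single sort key per object cannot untangle surfaces that overlap each other in depth, which is what pushes Z-buffered engines toward depth peeling and similar multi-pass workarounds.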
The inability to maintain the correct order of transparency is the fundamental deficiency of Z-buffering, and it is responsible for some ugly artifacts we encounter in lots of games. Z-fighting is another nasty thing that keeps haunting game developers. When working on my Brahma engine prototype renderer, I was confident about drawing everything along lines of constant depth, which seemed the optimal strategy for a software implementation with a slew of depth-dependent things such as fog and mipmapping. At some point I realized that if we augment this approach by processing all sprites and masks in the view at the same time, it naturally grants the ability to draw everything in the correct order without any Z tests, requiring only that objects be sorted according to their furthest point from the image plane.
Of course, this approach requires a more elaborate renderer pipeline that can keep track of a variable-size set of objects being rasterized concurrently. But since I already had a facility for multitexturing, extending it to support multiple objects was a reasonable and logical step. I have yet to make the masks and voxel sprites render in the same fashion, but you can watch this video to get a clue about how it works in a real engine, and compare my results with the same map running in EDuke32.