An all-new game engine written in C++, based on the NipSys64 framework I'm also working on. This is a retro-oriented 3D engine intended for indie game development. It's peculiar in that it evolves differently from other modern engines, rejecting BSP-based portal systems, Z-buffering, floating-point coordinates, and most lame screen-space effects in favor of clever and efficient know-how techniques. As a result, the engine is non-Euclidean capable, handles large open spaces with ease, and features true displacement maps and correct non-opaque surface ordering, which potentially enables lots of powerful techniques previously deemed nearly impossible to implement in a realtime renderer. The engine is also carefully designed to be easy and convenient to develop for, yet versatile and adaptable to any needs.
I've always dreamed of making computer games powered by my own 3D engine. I've been learning computer programming since 1999, and although my first attempts at making an engine were pointless garbage, things changed a lot in 2006 when I moved my development to Visual C++. I was a fan of classic Build engine games such as Duke Nukem 3D and Shadow Warrior, had some great ideas for my own games, and was striving for independent game development using my own technology, which got its start in 2013. The first stone of the 3D part was laid in late 2017.
I studied at a technical college for a while, but dropped out in 2012 to devote more time to my projects and self-education. At the same time I felt the urge to learn about art, music and design, becoming that rare type of artist/musician who also writes software for his own purposes.
The reason I've chosen to make a new engine is that I adhere to a philosophy different from the one prevalent in the game industry, giving the engineering part both aesthetic and pragmatic value and looking for elegant ways to achieve more with simple means. Besides that, I wish to learn how games are created from the basic algorithmic structures up, and to be free to experiment with them. In doing so, I have strong intuitions and views on what to prioritize. I believe that a stable high framerate, low input lag, and the complete absence of loading screens matter more for delivering a great immersive gaming experience than complex shaders or a high polygon count, for instance. I'm also convinced that the collision model should normally stay on par with the visuals, avoiding simplified proxy models for collision handling in order to implement an authentic "what you see is what you interact with" approach. As a side note, when a retro game is built on modern bloatware tech, it typically feels somewhat fake and tends to use excessive disk space and memory, since the oldschool look and feel is only imitated rather than present "under the hood".
The entire process of rendering a 3D scene, once deemed a massively parallel task (leading to the popularization of graphics hardware that could perform repetitive independent tasks more efficiently), is in fact rather sequential by nature. Unless you are using a hack like a Z-buffer, you can't draw a surface if you have to draw anything behind it first. So although it generally doesn't matter in which order you fill the pixels of a single polygon, the order in which you draw a complex scene is determined by its depth relations. As for the general hidden surface culling algorithm, it basically works by progressive disclosure of the parts of the level that are visible from preceding render nodes. This too is a sequential, iterative task, whether you use a BSP or a sector-based approach. It turns out that only certain specialized kinds of rendering work are easily parallelized by dedicated hardware (vertex transformations, textured polygon fill, some sorts of raytracing). The causal dependencies pervading the way from a bunch of 3D geometry data to a complete rendered frame leave the whole only partly parallelizable, requiring careful effort to keep your execution units synchronized. This is the main reason why my renderer implementation is done in software; any future accelerated versions must also use this software renderer as a reference.
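To illustrate the "progressive disclosure" idea, here is a minimal sketch of a sector-to-sector visibility walk. All names (`Sector`, `scan_visible`) are mine for illustration, not the engine's actual API; the point is that each sector is only reached through a nearer one, which is what makes the task inherently sequential:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sector graph: each sector lists the sectors visible
// through its portals. Visibility is disclosed progressively: a sector
// is only reached after a nearer sector exposing it has been processed.
struct Sector {
    std::vector<int> portals; // indices of neighboring sectors
};

// Breadth-first walk from the camera's sector; returns sectors in the
// near-to-far order in which they would be rendered.
std::vector<int> scan_visible(const std::vector<Sector>& map, int camera_sector) {
    std::vector<bool> seen(map.size(), false);
    std::vector<int> order;
    order.push_back(camera_sector);
    seen[camera_sector] = true;
    for (std::size_t i = 0; i < order.size(); ++i) {
        for (int next : map[order[i]].portals) {
            if (!seen[next]) {
                seen[next] = true;
                order.push_back(next); // disclosed only via a nearer sector
            }
        }
    }
    return order;
}
```

Each iteration depends on the sectors disclosed by the previous ones, so the loop cannot simply be split across execution units.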
Despite using C++17 instead of C, the code is fairly low-level, and no actual object-oriented programming is done. To ensure the best performance, I've developed special built-in facilities for fixed-point arithmetic based on look-up tables, which dramatically speed up the calculation of logarithms, square roots and other functions while maintaining reasonable precision.
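As a rough sketch of how such a facility can work (the names and table size here are my own assumptions, not the engine's actual code), here is a 16.16 fixed-point base-2 logarithm driven by a 256-entry look-up table. The integer part comes from the position of the argument's highest set bit; the fractional part is read from the table:

```cpp
#include <cmath>
#include <cstdint>

// 256-entry table: log2(1 + i/256) scaled to 16.16 fixed point.
static int32_t g_log2_lut[256];

void init_log2_lut() {
    for (int i = 0; i < 256; ++i)
        g_log2_lut[i] = (int32_t)std::lround(std::log2(1.0 + i / 256.0) * 65536.0);
}

// x is an unsigned 16.16 fixed-point value (x > 0); the result is a
// signed 16.16 fixed-point approximation of log2(x / 65536).
int32_t log2fx(uint32_t x) {
    int n = 31;
    while (!(x & 0x80000000u)) { x <<= 1; --n; } // normalize: MSB to bit 31
    int i = (x >> 23) & 0xFF;                    // top 8 mantissa bits
    return ((int32_t)(n - 16) << 16) + g_log2_lut[i];
}
```

The while-loop would typically be replaced by a count-leading-zeros instruction; the whole routine is then a shift, a table read and an add, with an error bounded by the table's 1/256 mantissa step.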
In early 2018, I learned how to scan sector-based maps like the Build engine does, and began experimenting with various CPU-based rendering techniques. As a result, in terms of visuals my engine extends far beyond DOS-era Build's capabilities, allowing for true flat reflections, sector-over-sector, vertical look, anti-aliasing, a limited form of HDR rendering and a lot more neat things. And with multithreading support, the engine easily surpasses Build's performance on many-core processors. I have yet to do the physics part and proper 3D sound, though.
The map format is sector-based like the good ol' Build engine, but has evolved its ideas far beyond it, with a ton of new features like heightmaps, fast multiple reflections, HDR, lightmaps, voxels and proposed pixel-precise collision detection, as well as native multithreading support. While supporting a variety of rendering techniques (not being limited to flat polygons), the engine also forgoes the Z-buffer in favor of span records, which makes rendering very fast even in pure software. With a high degree of module integration and transparency between engine parts, the Brahma engine is flexible and adjustable to any needs. With it, one can create very dynamic games with an oldschool look and feel.
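To give an idea of what span records buy you, here is a simplified sketch of per-scanline span clipping (my own minimal take, not the engine's actual data structures): each scanline keeps the pixel intervals that are still empty, opaque spans arrive front to back and are clipped against what is already filled, so every pixel is written exactly once and no per-pixel depth test is needed:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct SpanLine {
    // half-open [begin, end) intervals of pixels that are still unfilled
    std::vector<std::pair<int, int>> free;

    explicit SpanLine(int width) { free.push_back({0, width}); }

    // Insert an opaque span [x0, x1) arriving front to back; returns the
    // visible (clipped) sub-spans, the only pixels the rasterizer fills.
    std::vector<std::pair<int, int>> insert(int x0, int x1) {
        std::vector<std::pair<int, int>> visible, still_free;
        for (auto [a, b] : free) {
            int lo = std::max(a, x0), hi = std::min(b, x1);
            if (lo < hi) {                    // overlap: becomes visible
                visible.push_back({lo, hi});
                if (a < lo) still_free.push_back({a, lo});
                if (hi < b) still_free.push_back({hi, b});
            } else {
                still_free.push_back({a, b}); // untouched gap stays free
            }
        }
        free = std::move(still_free);
        return visible;
    }
};
```

Compared to a Z-buffer, there is no depth read/compare/write per pixel and zero overdraw, at the cost of maintaining the interval lists.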
Right now I'm developing a working prototype, which features a renderer fully implemented in software (and thus not restricted by specific hardware features), somewhat inspired by Ken Silverman's Build engine but written from scratch, brought to the screen via the legacy Windows API. Future iterations will bring even more capability and speed through integration with CUDA, Direct2D, ASIO and other powerful APIs. Eventually the engine could be ported to other operating systems on the x86-64 architecture if there's enough interest in doing that.
The Z-buffering technique has been used in realtime 3D computer graphics for decades and has proved itself a viable way to solve the visibility problem at the level of individual pixels. Since the introduction of hardware 3D acceleration, the mainstream gaming industry has adopted this method for Z-culling, which can't be done consistently by simply sorting polygons by distance (the so-called painter's algorithm). Remembering a depth value for each screen pixel also enables lots of useful post-effects, such as the screen-space reflections and ambient occlusion modern engines benefit from. However, since only one depth value is stored per pixel, the Z-buffer only really works for opaque geometry, and this turns out to be a major downside as various non-opaque stuff becomes increasingly common in games.
To reduce overdraw of opaque objects, one should normally sort the polygons from nearest to furthest. The idea is that pixels behind those already drawn will be discarded by Z-testing, so the earlier we plot the closest pixels, the more hidden work we discard later. Non-opaque polygons, however, can be seen through, so they must be drawn in far-to-near order after all the opaque geometry is done, to minimize unwanted artifacts. And when non-opaque (translucent or alpha-channeled) objects overlap or intersect, you're likely to get multiple depth conflicts within the same pixel, which is exactly where conventional Z-buffering always fails. Imagine a lengthy alpha-channeled projectile flying through an alpha-channeled obstacle such that only part of it is closer to the viewer. There are workarounds such as depth peeling, which requires two Z-buffers and multiple passes to render everything behind transparent polygons separately; the more layers of transparency you have, the more passes it takes to yield the correct look. Needless to say how inefficient this can get for sufficiently complex scenes.
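The two sort orders described above can be sketched as a pair of comparators (hypothetical `Poly` type, for illustration only): opaque polygons go near to far to maximize Z-culling, non-opaque ones go far to near so each layer blends over what is behind it:

```cpp
#include <algorithm>
#include <vector>

struct Poly {
    float depth;  // representative distance from the viewer
    bool opaque;
};

void sort_for_rendering(std::vector<Poly>& opaque, std::vector<Poly>& translucent) {
    // Opaque pass: nearest first, so Z-testing rejects hidden pixels early.
    std::sort(opaque.begin(), opaque.end(),
              [](const Poly& a, const Poly& b) { return a.depth < b.depth; });
    // Translucent pass: furthest first, so blending layers stack correctly.
    std::sort(translucent.begin(), translucent.end(),
              [](const Poly& a, const Poly& b) { return a.depth > b.depth; });
}
```

A single per-polygon depth is exactly what breaks down in the projectile example: when one polygon spans both sides of another, no scalar sort key can order them correctly.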
The inability to maintain correct transparency ordering is the fundamental deficiency of Z-buffering, and it is responsible for some ugly artifacts we encounter in lots of shipped games. Z-fighting is another nasty thing that keeps pursuing game developers. When working on my Brahma engine prototype renderer, I was confident about drawing everything along lines of constant depth, which seemed the optimal strategy for a software implementation with a slew of depth-dependent features such as fog and mipmapping. At some point I realized that if we augment this approach by processing all sprites and masks in view at the same time, we naturally gain the ability to draw everything in the correct order without any Z tests, simply by sorting the objects according to their furthest point from the image plane.
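A toy model of that idea (my reading of the approach, not the actual pipeline): give each object a depth range, order the set by furthest point, then let a single far-to-near sweep emit every object's slice at each constant-depth line it covers. Ordering is then correct by construction, with no Z tests at all:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical object with a covered depth range, in arbitrary units.
struct Object {
    std::string name;
    int near_z, far_z;
};

// Sweep depth from furthest to nearest, emitting each object's slice at
// every constant-depth line it covers; overlapping objects interleave
// naturally, which is what plain per-object sorting cannot do.
std::vector<std::string> depth_sweep(std::vector<Object> objs) {
    std::sort(objs.begin(), objs.end(),
              [](const Object& a, const Object& b) { return a.far_z > b.far_z; });
    int max_far = 0;
    for (const auto& o : objs) max_far = std::max(max_far, o.far_z);
    std::vector<std::string> emitted;
    for (int z = max_far; z >= 0; --z)          // lines of constant depth
        for (const auto& o : objs)
            if (o.near_z <= z && z <= o.far_z)  // object is "live" at z
                emitted.push_back(o.name);
    return emitted;
}
```

In a real renderer the "emit" step would rasterize the span of each live object at that depth, with depth-dependent effects like fog and mipmapping coming almost for free, since the current depth is a loop invariant.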
Of course, this approach requires a more elaborate renderer pipeline that can keep track of a variable-size set of objects being rasterized concurrently. But since I already had a facility for multitexturing, extending it to support multiple objects was a reasonable and logical step. I have yet to render masks and voxel sprites in the same fashion, but you can watch this video to get a clue how it works in a real engine, and compare my results with the same map running in EDuke32.