A Postmortem for My Work Creating Portals in Unreal Engine 4

Abby Welsh - Research Advisor Paul Dickson
Fall 2018

Portals and Portal Rendering

My first step this fall was to set up teleporting between portals and visualizing the view through them. Teleporting isn't too complicated: the new position is just a bit of vector math using the player's position relative to the entrance and exit portals. Rotation was more involved, because the player object's rotation is controlled by the PlayerController's rotation rather than by the pawn itself, which took a while to figure out.
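The position and rotation math can be sketched as follows. This is a minimal, self-contained version using plain structs in place of UE4's FVector and FRotator, with rotation restricted to yaw for brevity; all names here are illustrative, not the project's actual code.

```cpp
#include <cmath>

// Plain stand-ins for UE4's FVector and the portal actors (yaw-only).
struct Vec3 { double x, y, z; };
struct Portal { Vec3 pos; double yaw; };

static Vec3 rotateYaw(Vec3 v, double yaw) {
    double c = std::cos(yaw), s = std::sin(yaw);
    return { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
}

// New position: express the player's offset in the entrance portal's
// local frame, flip it 180 degrees so the player comes out the front
// of the exit portal, then map it into the exit portal's world frame.
Vec3 teleportPosition(Vec3 player, const Portal& in, const Portal& out) {
    Vec3 offset  = { player.x - in.pos.x, player.y - in.pos.y, player.z - in.pos.z };
    Vec3 local   = rotateYaw(offset, -in.yaw);
    Vec3 flipped = { -local.x, -local.y, local.z };
    Vec3 world   = rotateYaw(flipped, out.yaw);
    return { out.pos.x + world.x, out.pos.y + world.y, out.pos.z + world.z };
}

// New heading: carry the player's yaw across the portal pair,
// including the same 180-degree flip.
double teleportYaw(double playerYaw, const Portal& in, const Portal& out) {
    const double kPi = 3.14159265358979323846;
    return playerYaw - in.yaw + out.yaw + kPi;
}
```

In the project itself the equivalent work uses full 3D transforms, and the resulting rotation has to be pushed to the PlayerController rather than the pawn.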

Setting up visuals was more challenging. For this implementation I used a render-target-based method: each portal has a material that maps a render target to screen space, and each render target is rendered to once per frame by a scene capture component positioned so that its transform relative to the exit portal matches the player's transform relative to the entrance portal. A clipping plane keeps the scene capture component from seeing geometry on the wrong side of the exit portal. Initializing render targets in C++ turns out to be a poorly documented operation in Unreal. When using the blueprint system they can simply be added via the content browser, but since I need one for each portal, generating them at runtime in C++ makes much more sense. I ran through probably half a dozen potential solutions before I found one that functioned correctly.
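The runtime setup described here looks roughly like the following. These are real UE4 APIs, but the snippet only compiles inside an engine project, and `PortalCapture` (a `USceneCaptureComponent2D`) and `ExitPortal` are assumed member variables, so treat it as a sketch of the pattern rather than the project's exact code.

```cpp
// Create a render target at runtime instead of via the content browser.
UTextureRenderTarget2D* RT = NewObject<UTextureRenderTarget2D>(this);
RT->InitAutoFormat(1024, 1024);   // placeholder resolution
RT->UpdateResourceImmediate(true);
PortalCapture->TextureTarget = RT;

// Clip everything behind the exit portal so the capture only sees the
// correct side. (bEnableClipPlane requires the global clip plane option
// to be enabled in the project's rendering settings.)
PortalCapture->bEnableClipPlane = true;
PortalCapture->ClipPlaneBase   = ExitPortal->GetActorLocation();
PortalCapture->ClipPlaneNormal = ExitPortal->GetActorForwardVector();
```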

While using render targets like this is an effective high-level way to make portals work, it has a number of disadvantages. Perhaps the largest is the time cost: for each render target, the entire scene has to be rendered again. This can be mitigated by checks such as whether a portal is on screen or whether it is facing the camera, but ultimately, rendering the scene three times just to show two portals is very expensive. In the future, I'm planning to pursue a stencil-buffer-based approach, which will eliminate the need for render targets and hopefully reduce the per-portal cost to a small additive amount. The other limitation of render targets and scene capture components as I used them was that a portal could not be seen correctly through another portal; i.e., there was no recursion.

The last problem I ran into while setting up the graphics was a one-frame lag between the player's view and the view through portals. After a bunch of debugging, I determined that the engine updates the player's transform from the movement component after all tick functions have been called, but before the scene renders. This meant that the portals, which render during each portal's update tick, were using the previous frame's player transform. I was unable to find any way to tell the engine to update the player's position at a different time, so instead I manually call the function to do so before each portal renders. Were I continuing with this version of the project, I'd research ordering or prioritizing tick calls so I could do this once, after the player movement component has processed its input (which happens during the tick phase) but before the portals render.
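The lag and the workaround can be reproduced in miniature. This toy model is purely illustrative (no UE4 types, invented names); it just shows that a capture taken during the tick phase sees stale state unless the pending movement is applied first.

```cpp
// Toy model of the frame: actors tick, then the engine applies the
// player's pending movement, then the scene renders.
struct Player {
    double pos = 0.0;
    double pendingMove = 0.0;
    void consumeInput(double dx) { pendingMove = dx; }             // during tick
    void applyMovement() { pos += pendingMove; pendingMove = 0.0; } // post-tick
};

struct PortalView {
    double capturedPos = 0.0;
    bool forceUpdateFirst = false; // the workaround
    void tick(Player& p) {
        if (forceUpdateFirst)
            p.applyMovement(); // manually flush movement before capturing
        capturedPos = p.pos;   // transform used for the scene capture
    }
};
```

Without the flag, the portal captures last frame's position; with it, the capture matches the position the player will actually be rendered at.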

Movement and Gravity

For about a month, my main focus on the project was setting up the player character so that gravity was relative to the player's own orientation and movement would work accordingly. I set out initially by just modifying the player's rotation to see what would happen. In Unreal, the player character is made up of several interlinked components: the pawn, the movement component, and the player controller, among others. Rotating the pawn isn't enough to rotate the character, because the player controller is set up to control the player's yaw as well as to control the camera independently of the pawn's orientation. In order to orient the camera and player character freely, I decoupled the player controller and manually controlled the camera based on mouse input and the pawn's world orientation. I don't believe this is the best design; the player controller should be controlling the player. However, making it do so would mean overriding much of its functionality to work relative to the player's orientation, or writing a new controller entirely.
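The decoupled camera boils down to composing the pawn's world orientation with the accumulated mouse look in the pawn's local frame. Here is a minimal self-contained sketch with a hand-rolled quaternion standing in for UE4's FQuat; the names and the specific composition are illustrative, not the project's code.

```cpp
#include <cmath>

// Minimal quaternion (w, x, y, z) with just what the sketch needs.
struct Quat {
    double w, x, y, z;
    static Quat axisAngle(double ax, double ay, double az, double angle) {
        double h = angle * 0.5, s = std::sin(h);
        return { std::cos(h), ax * s, ay * s, az * s };
    }
    Quat operator*(const Quat& r) const {
        return { w*r.w - x*r.x - y*r.y - z*r.z,
                 w*r.x + x*r.w + y*r.z - z*r.y,
                 w*r.y - x*r.z + y*r.w + z*r.x,
                 w*r.z + x*r.y - y*r.x + z*r.w };
    }
};

// Camera world rotation = pawn world rotation * local yaw * local pitch.
// Because mouse look is applied in the pawn's local frame, it behaves
// consistently however the pawn is oriented (e.g. standing on a wall).
Quat cameraRotation(const Quat& pawnWorld, double mouseYaw, double mousePitch) {
    Quat yaw   = Quat::axisAngle(0, 0, 1, mouseYaw);   // local up axis
    Quat pitch = Quat::axisAngle(0, 1, 0, mousePitch); // local right axis
    return pawnWorld * yaw * pitch;
}
```

The key design point is the order of composition: the mouse-look rotations come after (i.e., are applied within) the pawn's world orientation, which is what lets the camera stay sensible when gravity reorients the pawn.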

Once I had player orientation working, I moved on to the player's movement. The first step was to rotate the movement inputs according to the player camera's orientation instead of the controller's rotation, a fairly straightforward bit of vector math. Next, I had to rework gravity. I had already determined that the engine treats gravity globally as a scalar, so simply setting or overriding the character's gravity with a new vector was out of the question. By stepping through the character movement component's code, I discovered that gravity is internally hardcoded to act along the Z axis. I decided the best approach was to override the functions of the player movement component that resolve movement. Unreal's default character movement is incredibly fleshed out and robust: out of the box, it supports jumping, crouching, flying, walking, air movement settings, and, perhaps most impressively, networking. In most contexts this is fantastic, and that out-of-the-box readiness is one of Unreal's main draws. Unfortunately, it also means the code is tightly coupled, very complex, and thinly documented. Modifying the way gravity worked quickly stretched across many functions, and I spent almost all of my time stepping through the code with the debugger, trying to decipher how character movement worked. After a few weeks of this, I decided to set it aside to focus on finishing portals and working on level design. Once again, I believe the correct way to solve this problem is to step back and largely rewrite the movement component, or write a new one from scratch.

Takeaways

C++ feels like a second-class citizen in UE4. Throughout my experience working with C++ in Unreal, I felt like I was fighting the engine. The online documentation for the majority of the codebase consists of one-line descriptions of classes and lists of functions, each with maybe one line of description if I was lucky. My main method for solving problems was finding old forum posts tangentially related to what I was working on, guessing at how to adapt the code to my situation, and stepping through with the debugger to figure out why it wasn't working. Perhaps the most telling example of C++ being second-class is Unreal's preprocessor system. UPROPERTYs are C++ member variables that are exposed to the editor and/or blueprints; they are declared by putting UPROPERTY(...) before each declaration in the header file. They also have the nifty little feature of making the variable public. This is to be expected: for a variable to be accessible outside the class in a blueprint, of course it has to be public. However, this is documented nowhere. In the header file, the variable can be, and sometimes is, in a private section, and since UPROPERTY declarations are processed by Unreal's preprocessor, Visual Studio has no indication of this. It only took me two days of puzzling to figure out that you can just go ahead and access the variable you want, even though everything indicates that it's a private member.
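As an illustration, a header fragment in the shape described above (hypothetical names; this only compiles under Unreal's build tools, so it's a sketch rather than the project's code):

```cpp
UCLASS()
class APortalActor : public AActor
{
    GENERATED_BODY()

private:
    // Declared private, and Visual Studio reports it as private, but
    // because UPROPERTY is handled by Unreal's preprocessor, the member
    // ends up exposed to the editor/blueprints regardless.
    UPROPERTY(EditAnywhere)
    UTextureRenderTarget2D* PortalTarget;
};
```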

For a fundamental change to the system, don't just modify the system. Non-Euclidean space is not trivial. Even with the most open sandbox of a game engine, creating non-Euclidean effects is a pile of hacks and trickery, so the flexibility of the engine matters a great deal. Unreal is a very powerful engine and could undoubtedly do non-Euclidean space. However, to do so effectively, I would need to scrap a large part of Unreal's built-in systems (which account for a lot of its value) and build my own to support non-Euclidean behavior from the ground up.

Process Takeaways

Were I to do this again, and when I continue this work in the future, there are a number of things I'd do differently. I'd use a stencil-buffer approach rather than render targets for rendering. I'd scrap or rewrite Unreal's movement and character control system. I'd follow through and complete portal functionality before putting a bunch of time into gravity and movement. Ultimately, though, these are things I only know in hindsight. What's become clear to me is that the entire process this fall was an exploration, figuring out what does and doesn't work. A period of exploration is to be expected, of course; it comes with any new project, and takes longer when the project is more complex and challenging. What I didn't realize at the beginning of the semester is that the entire time would be exploration. Understanding that now, I can change my process to be more effective in the future.

A large piece of the process this fall was figuring out what does and doesn't work and what is and isn't possible. With a more concrete plan for a solution, I can devise shorter tests that assess the viability of individual parts of the overall plan. These have clearer-cut success conditions, since they don't depend on the rest of the project, and can be completed in a constrained timespan. Testing pieces individually and then combining them into the larger project will involve a lot of rewriting, but that happens anyway and is an important step in building the system well, and doing it in smaller segments will make it more manageable and effective.

Overall, I'm satisfied with what I accomplished this semester. While I didn't end up where I was expecting, I learned a ton along the way: mostly what doesn't work, but also what does, and how to make that distinction earlier. I feel much better equipped, as I move forward, to evaluate other engines' viability for creating the non-Euclidean mechanics I'm looking for.