Creating Texture2D at runtime gets usage Static



I have a situation where I want to create RenderTexture2D instances at runtime to pass into uniforms of a shader. I'd rather not add all of the textures to my resource JSON, and runtime instantiation seems to work, but I'm having trouble rendering to them.

The runtime-instantiated textures are stored in an array and set on a plane material's shader uniforms.
For each render step I update the render target's color texture, then render to it, and finally render the plane using the aforementioned shader.

When I use a RenderTexture2D resource defined in the JSON for a uniform in the shader, everything works fine.
However, the moment I try to use runtime-instantiated RenderTexture2D instances, everything remains black.

When inspecting the textures after initialization, the only difference from the JSON-defined texture that I can find is that the Usage setting of the internal opengl::Texture2D instance remains Static, even though I set texture->mUsage to DynamicRead.
However, I believe that would only impact performance rather than prevent rendering to it, correct?

Below are some code snippets for reference:

Setting up the textures:

        opengl::Texture2DSettings settings;
        settings.mWidth = 1920;
        settings.mHeight = 1080;
        settings.mFormat = GL_RGB;
        settings.mInternalFormat = GL_RGB8_EXT;
        settings.mType = GL_UNSIGNED_BYTE;
        int index = 0;
        for (auto& texture : mFrameBuffer) {
            texture.mWidth = settings.mWidth;
            texture.mHeight = settings.mHeight;
            texture.mUsage = opengl::ETextureUsage::DynamicRead;
            planeMaterial.getOrCreateUniform<UniformTexture2D>("inTexture" + std::to_string(index)).setTexture(texture);
            ++index;
        }

Updating the render target’s color texture:


Any help would be much appreciated 🙂



Hi Ralph,

I'm not sure I'm following this completely, but from what I can see in your code snippet, the order of initialization is wrong.

If it is a hardware texture you want to create at runtime, you should probably use a nap::RenderTexture2D. This class is generally used as a stand-alone Texture2D.

To create a RenderTexture2D at runtime the following should work:

	nap::utility::ErrorState error;
	std::unique_ptr<RenderTexture2D> render_tex = std::make_unique<RenderTexture2D>();
	render_tex->mWidth = 1920;
	render_tex->mHeight = 1080;
	render_tex->mFormat = RenderTexture2D::EFormat::RGB8;
	render_tex->mUsage = opengl::ETextureUsage::DynamicRead;
	if (!render_tex->init(error))
		return false;

As you can see, the order of initialization is important; usage should also work based on the example above. As a rule of thumb: if the object you want to create at run-time is a resource, make sure to call init() after setting all of the parameters (those marked as properties). That is what the resource manager does as well.

Also: creating textures at run-time can fail for various reasons (it requires an active OpenGL context, for example), so it is not recommended. When you declare a Texture2D as a resource you know for sure it is initialized correctly and will therefore work. It is easy to make subtle mistakes at run-time; your app could crash or misbehave.

Also, it’s completely fine to render to the same render target in different render passes. Would that suffice?

I would recommend using Napkin to declare all the buffers you need and linking to them in a component. The component will handle setting the right texture (on update). To do this:

  • add all the RenderTexture2D resources you need to the JSON file
  • create a component
  • add a property: std::vector<ResourcePtr<RenderTexture2D>> mTargets (see the demos / video demo)
  • link in the textures you need using Napkin
  • perform the logic in the update() call

Using a list of links to resources offers many benefits; one of them is that artists can update them themselves if required. It also ensures everything is initialized in order.

Cheers, Coen


Also, after setting the texture to use, you should call init() again on the video render target.
I spotted this bug the other day while working on the "auto video rendering utilities" and removed the ability to update the texture at run-time because of it. But in the build you have, calling init() after setting the texture should do the trick.

To properly support this I can add some low-level code. This would allow for a more optimal way of switching textures for a target at run-time.

Right now, initializing / updating render targets at run-time is a rather costly operation. It is recommended to create the targets you need on initialization; these are often known beforehand. Or is there a reason you need to create custom targets at runtime?


Thanks Coen, that makes sense, I'll move forward in that direction.
Just to be clear: rather than adding, say, 10 textures in the JSON and switching the active one on the render target for each render step, you're saying I should instead add 10 corresponding render targets, each tied to a specific texture?
The only reason I am currently creating all of this at runtime is to avoid bloating the JSON.

Note that the aim is to store only the most recent 10 frames of a video, meaning I render the video once each frame, but each time to a different texture.


Hey Ralph,

Gotcha, so the aim is to store each of the last 10 frames in a different texture. Right now, adding 10 different targets is the most efficient way of solving your problem: there is considerable overhead associated with switching the texture of a render target at runtime as it's currently done. But you could call init() after setting the texture on the render target for now, if you prefer to do it at run-time.

I would create a std::vector<ResourcePtr> as a property of a component and link in all the targets you want there. The component would handle rendering and switching of the target based on your requirements.
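To make the per-frame bookkeeping concrete, here is a minimal sketch in plain C++, independent of NAP. FrameRing is a hypothetical helper name (not part of the framework): it just tracks which of the N slots the next frame should be rendered into, and which slot holds a frame of a given age.

```cpp
#include <cstddef>

// Hypothetical helper (not part of NAP): a ring buffer over N texture
// slots, where each new frame overwrites the oldest one.
class FrameRing
{
public:
    explicit FrameRing(std::size_t count) : mCount(count) {}

    // Slot to render the current frame into.
    std::size_t writeSlot() const { return mHead; }

    // Call after rendering a frame.
    void advance() { mHead = (mHead + 1) % mCount; }

    // Slot holding the frame that is `age` frames old (0 = newest).
    // Precondition: age < mCount.
    std::size_t slotForAge(std::size_t age) const
    {
        return (mHead + mCount - 1 - age) % mCount;
    }

private:
    std::size_t mCount;
    std::size_t mHead = 0;
};
```

A component's update() would then render into writeSlot(), call advance(), and bind slotForAge(0)..slotForAge(9) to the shader uniforms.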

I have a branch where I'm working on special video textures and video targets, to limit the number of JSON declarations you have to go through when sampling video in the future. It would also render the video directly to a texture using a pre-compiled shader. It's pretty much done, but I had to finish some other things before I can move on with it. I will make sure that this solution also supports switching the textures of a target at run-time.

To properly solve your issue I have to add some low-level support for switching textures on render targets, which shouldn't take a lot of work. What exactly are you trying to achieve with this? If it's about writing rendered textures to disk, I would recommend storing the result of a render step in a bitmap using the startGetData() and endGetData(Bitmap& bmp) methods available in texture2d.h.


Hi Ralph,

Quickly added support for switching render target textures at run-time. Validated on my end on a Windows machine. Performance is good and overhead is low.

To change a texture at run-time:

	if (!mVideoRenderTarget->switchColorTexture(*mVideoTexture2, error))
		return false;

If the texture can't be switched, an error is returned; the most likely cause is that the dimensions don't match. Internally, the target will keep a reference to the old texture if switching fails. You can safely switch textures while rendering; just make sure the primary window is active when doing so. Switching textures in a component's update() is also safe, since the primary window is always active when update() is called.

In the future I'd like to work towards configurable render targets, where you can define in JSON what type of attachments to use. This improves render times and allows for more fine-grained control over the various render layers you want to use (multiple color attachments, for example).

With this change in mind, I would opt for defining a list of Texture2D resources in JSON (std::vector<ResourcePtr>) and linking those in, together with a single render target. You can now safely switch the textures for the render target at run-time.
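As a rough illustration of that per-frame logic, here is a self-contained sketch. Texture and RenderTarget are simplified stand-ins, not the actual NAP types; the real call would be the switchColorTexture shown earlier.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stand-ins for the NAP types so this sketch compiles on its own;
// in NAP the real call is target->switchColorTexture(texture, error).
struct Texture { std::string mID; };

struct RenderTarget
{
    const Texture* mColor = nullptr;

    bool switchColorTexture(const Texture& tex, std::string& error)
    {
        // The real implementation fails (and keeps the old texture)
        // when dimensions don't match; this stub always succeeds.
        (void)error;
        mColor = &tex;
        return true;
    }
};

// Attach the texture at `index` to the single render target, then
// advance `index` so the next frame lands in the next texture.
bool attachNext(RenderTarget& target, const std::vector<Texture>& textures,
                std::size_t& index, std::string& error)
{
    if (textures.empty() || !target.switchColorTexture(textures[index], error))
        return false;
    index = (index + 1) % textures.size();
    return true;
}
```

Called once per render step from a component's update(), this cycles one target through the full list of JSON-declared textures.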

If you do decide to create textures at run-time, make sure they are created in a component's init(), otherwise file changes (hot-loading) might stop working.

Build that includes the fix:


Oh, a small tip before I forget: if you don't plan on shrinking (minification) your textures on screen, you can change the min-filter to Linear. Otherwise the GPU will automatically create mip-maps for you. This is fine when you don't have many textures, but with many it will impact performance.


Thanks Coen!

What I'm trying to achieve is a poor man's sampler2DArray for my shader, as I believe that's not currently supported in NAP.
I think your update will do just fine for that 🙂

Napkin fails to launch: incorrect checksum, abort trap 6.