How to use Texture2D::asyncGetData [SOLVED]


#1

Hi,

I’m trying to get the contents of a Texture2D into a bitmap, so I can then use the bitmap’s getData method to process the buffer for other uses.

I have tried to use asyncGetData, but it seems I can’t initialize a bitmap to upload this to:

Bitmap* bitmap;
// tex is my Texture2D
bitmap->initFromDescriptor(tex->getDescriptor());

I’ve also tried to set a bitmap using napkin, but I can’t seem to add it to any Entity to retrieve it in my scene…
Any help?

Best,

P


#2

The nap::Snapshot uses asyncGetData to copy texture data into a bitmap that is saved to disk. The download is scheduled on the GPU; when it completes, the nap::Bitmap (given to the function) is updated using a callback. The callback is assigned as a lambda expression:

	void Texture2D::asyncGetData(Bitmap& bitmap)
	{
		assert(!mReadCallbacks[mRenderService->getCurrentFrameIndex()]);
		mReadCallbacks[mRenderService->getCurrentFrameIndex()] = [this, &bitmap](const void* data, size_t sizeInBytes)
		{
			// Check if initialization is necessary
			if (bitmap.empty() || bitmap.mSurfaceDescriptor != mDescriptor) {
				bitmap.initFromDescriptor(mDescriptor);
			}
			memcpy(bitmap.getData(), data, sizeInBytes);
			bitmap.mBitmapUpdated();
		};
		mRenderService->requestTextureDownload(*this);
	}
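
For reference, a minimal usage sketch (the member names mTexture and mBitmap are assumptions, not project code). As becomes clear further down this thread, the request must be issued between beginFrame() and endFrame() of the render service:

	// Minimal sketch, assuming mTexture is a nap::Texture2D declared with
	// DynamicRead usage and mBitmap is a nap::Bitmap (hypothetical members).
	mRenderService->beginFrame();
	mTexture->asyncGetData(*mBitmap);	// mBitmap is filled once the GPU transfer completes
	mRenderService->endFrame();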

You can add a bitmap resource using napkin:

[image: adding a Bitmap resource in napkin]

And link to that resource from any other resource or component, or fetch it on init in your app:

#include <bitmap.h>
....
nap::ResourcePtr<nap::Bitmap> bitmap = mResourceManager->findObject<nap::Bitmap>("NewBitmap");

Make sure you read the System, Resource and Rendering documentation. NAP is completely data-driven, which has many advantages, including the ability to manage complexity more efficiently, but it requires a different approach to software and application development, similar to how you would program a ‘game’ using a game engine such as Unity.

I also recommend watching the YouTube getting-started tutorials. They explain many concepts, including how to create, link and reference Resources (most importantly, your own).


#3

Forgot to mention: the vinyl demo implements the screenshot functionality, where it takes a screenshot using a given number of rows & columns and saves it to disk. @lshoek wrote an article about it as well. Could be of interest.


#4

Yes, thank you for the documented answer.

Didn’t know the resource could be accessed without being part of an Entity; will study the documentation, :).

I read the article already, very interesting!

Will implement a callback function similar to the snapshot one, as it doesn’t use a bitmap at all, and I don’t need one either.

Thanks for the pointers, will report here tomorrow.

Best,

P


#5

Well, I’m getting a const void* data that is always empty. Here is my code:

// in videoPlayer::update(double deltaTime):
// sender is defined in videoPlayer.h as:
// napSender* sender;
sender->SendImage(mVideoEntity->getComponent<RenderVideoComponentInstance>().getOutputTexture());


// in napSender.cpp:

#include <functional>
using namespace std::placeholders;
//[...]
bool napSender::SendImage(Texture2D& tex)
{
	int w = tex.getWidth();
	int h = tex.getHeight();
	tex.asyncGetData(std::bind(&napSender::onTextureCallback, this, _1, _2, w, h));
	return true;
}

void napSender::onTextureCallback(const void* data, size_t sizeByte, int w, int h)
{
	uint8_t* tmp = (uint8_t*)data;
	// breakpoint before the closing bracket to inspect the data in tmp
}

The same texture is used in render:

		// Start recording into the headless recording buffer.
		if (mRenderService->beginHeadlessRecording())
		{
			// Render video to video texture
			RenderVideoComponentInstance& video_render_comp = mVideoEntity->getComponent<RenderVideoComponentInstance>();
			video_render_comp.draw();
//[...]
}

The texture I’m using is:

[image: texture settings]

Can I use the Texture2D of my RenderVideoComponentInstance to draw in render as well as for other things in update?

Not too sure why I’m not getting anything.
Any ideas?
I can provide the whole project if needed.

Best,

P


#6

Hey, I’m not sure what you’re trying to do; the code does not explain your problem very well. If the screenshot works, you should take it from there and reverse engineer it for your purpose. I recommend using RenderDoc to inspect the Vulkan calls if you don’t get what you expect. But we’re not really in a position to sift through your project. If there are specific bugs you run into we can take a look at them, but as it stands this request is unfortunately too vague.

The video is rendered to texture using the RenderVideoComponent, which converts the YUV textures from the video player into a single RGB texture. From there, that texture is bound to multiple targets for rendering and displacement. You have to be aware of the fact that Vulkan renders one frame whilst the CPU is preparing another; this means that many Resources are double buffered, which is important when you want to read from the GPU.

If you want to learn more about Vulkan & NAP, I recommend reading this post, especially the parts on texture use and limitations. Your texture must be declared DynamicRead, as explained in the doc. The post also explains how we handle double-buffered resources. If the snapshot works (vinyl demo), it is a good idea to take that as a starting point and reverse engineer your problem from there.
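
As a rough sketch (the property names and getCore() are assumptions and may differ per NAP version), a render texture with DynamicRead usage created in code could look like this:

	// Sketch: a texture whose contents can be transferred back to the CPU.
	// getCore() and the property values are assumptions, not verified project code.
	nap::RenderTexture2D texture(getCore());
	texture.mWidth = 1920;
	texture.mHeight = 1080;
	texture.mFormat = nap::RenderTexture2D::EFormat::RGBA8;
	texture.mUsage = nap::ETextureUsage::DynamicRead;	// required to read back with asyncGetData
	nap::utility::ErrorState error;
	if (!texture.init(error))
		nap::Logger::error(error.toString());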


#7

Ok, thanks for the link, I’ll have a look at it.

Let me clarify the problem I have / what I’m trying to do:
I want to access the buffer that contains the color pixels, in an [R][G][B][A] format arranged as 8 bits per channel or so, for further use with NDI.
I got the NDI part working; this is out of the scope of this question, but retrieving the data from the GPU in NAP is causing me problems.

I’m not very familiar with Vulkan, and the issue might be coming from that, :persevere:

I’ve got a working video player project, called VideoPlayer, of which I want to access the texture as a uint8_t* buffer.

The RenderTexture2D I’m using (called VideoColorTexture) is as follows:

[image: VideoColorTexture settings]

The OutputTexture of my RenderVideoComponent is what I am trying to process to get the buffer on the CPU.

I have made a mini class to explain where I’m at:

class NapSender {
public:
	NapSender() {}

	void SendImage(Texture2D& tex)
	{
		VkFormat fmt = tex.getFormat();
		if (fmt != VK_FORMAT_UNDEFINED)
		{
			const SurfaceDescriptor desc = tex.getDescriptor();
			int w = tex.getWidth();
			int h = tex.getHeight();

			tex.asyncGetData(std::bind(&NapSender::onTextureCallback, this, _1, _2, w, h, desc));
		}
	}

	void onTextureCallback(const void* data, size_t sizeByte, int w, int h, const SurfaceDescriptor desc)
	{
		// the buffer is empty here
		uint8_t* tmp = (uint8_t*)data;
	}
};

I’m calling SendImage inside the update method of VideoPlayer by doing:

	void VideoPlayerApp::update(double deltaTime)
	{
		// Use a default input router to forward input events (recursively) to all input components in the default scene
		nap::DefaultInputRouter input_router(true);
		mInputService->processWindowEvents(*mRenderWindow, input_router, { &mScene->getRootEntity() });

		napSender->SendImage(mVideoEntity->getComponent<RenderToTextureComponentInstance>().getOutputTexture()); 
	}

After waiting for the video player to start playing (1 or 2 seconds), I set a breakpoint inside my method called back by `asyncGetData`, and this is what I'm getting:

[image: debugger view of the callback]

When setting my breakpoint after the callback and inspecting the uint8_t* tmp, it is empty…
As you can see, the texture descriptor is filled in, so that seems right.

Hope this explains the problem better.

I’m not sure what I’m doing wrong to get the uint8_t* to fill up with content.

I have looked at the snapshot method pointed to above: void Snapshot::snap(PerspCameraComponentInstance& camera, std::function<void(nap::SnapshotRenderTarget&)> renderCallback).
The callback casts the const void* straight to (uint8*) with no problem, it seems; this is why I was asking for clarification, as it seems straightforward, :).

Will study it more (maybe recompile snapshot in debug so I can see the steps in detail), as I must have missed something.

Thanks for your help so far,

Best,

P


#8

Ok, it’s now working. I understood that the texture was not retrieved unless the nap::RenderService had started a frame with beginFrame(). I called my function between the beginFrame() and endFrame() of the nap::RenderService, after the RenderVideoComponentInstance call to the draw() method, and it worked.
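
In code, the fix described here could look roughly like this (a sketch with assumed member names, not the exact project code):

	// Sketch of the fix: schedule the download inside a frame, after the
	// video has been rendered to the output texture (member names assumed).
	mRenderService->beginFrame();
	if (mRenderService->beginHeadlessRecording())
	{
		// Render the video to the output texture
		RenderVideoComponentInstance& video_render_comp = mVideoEntity->getComponent<RenderVideoComponentInstance>();
		video_render_comp.draw();
		mRenderService->endHeadlessRecording();
	}
	// Request the texture download within the same frame
	sender->SendImage(mVideoEntity->getComponent<RenderVideoComponentInstance>().getOutputTexture());
	mRenderService->endFrame();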

Thanks for the help.

++


#9

Yes, this has to do with the double buffering of resources. With OpenGL you never had to worry about any of those things, i.e. you could schedule anything at any point in time and have the driver figure it out. Vulkan is way (way, way) more explicit: you gain performance but add complexity. That’s why we have introduced the concept of frames, which are double buffered: you schedule render operations for one frame on the CPU whilst the other is being processed on the GPU. When you schedule a download (which is a render operation), you must do that within the specification of a frame. The transfer is scheduled and executed after being submitted (GPU) whilst the next frame is being prepared (CPU). When the next (3rd) frame after that arrives, it waits for the tasks on the GPU to be completed, in this case including your download.

void RenderService::requestTextureDownload(Texture2D& texture)
{
	// We push a texture download specifically for this frame. When the fence for that frame is signaled,
	// we know the download has been processed by the GPU, and we can send the texture a notification that
	// transfer has completed.
	mFramesInFlight[mCurrentFrameIndex].mTextureDownloads.push_back(&texture);
}

void RenderService::beginFrame()
{
	// We wait for the fence for the current frame. This ensures that, when the wait completes, the command buffer
	// that the fence belongs to, and all resources referenced from it, are available for (re)use.
	// Notice that there are multiple other VkQueueSubmits that are performed by RenderWindow(s), and headless 
	// rendering. All those submits do not trigger a fence. They are all part of the same frame, so when the frame
	// fence has been signaled, we can be assured that all resources for the entire frame, including resources used 
	// by other VkQueueSubmits, are free to use.
	vkWaitForFences(mDevice, 1, &mFramesInFlight[mCurrentFrameIndex].mFence, VK_TRUE, UINT64_MAX);
	
	// We call updateTextureDownloads after we have waited for the fence. Otherwise it may happen that we check the fence
	// status which could still not be signaled at that point, causing the notify not to be called. If we then wait for
	// the fence anyway, we missed the opportunity to notify textures that downloads were ready. Because we reset the fence
	// next, we could delay the notification for a full frame cycle. So this call is purposely put inbetween the wait and reset
	// of the fence.
	updateTextureDownloads();

This gives the system time to process the transfer without stalling the CPU. NAP completely manages this process, in terms of resources and render operations, for you. From a user perspective, the render interface barely changed with the switch from OpenGL to Vulkan, but it is good to be aware of certain, more low-level aspects of the render system, such as the concept of frames and scheduling render operations within those frames.
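
To make the frame structure concrete, here is a rough sketch of how a NAP app’s render() typically brackets its work (based on the standard app templates; treat the details as assumptions):

	// Sketch of a typical NAP render() method (names follow the common app templates)
	void MyApp::render()
	{
		// Waits on this frame's fence and fires completed texture-download callbacks
		mRenderService->beginFrame();

		if (mRenderService->beginRecording(*mRenderWindow))
		{
			// Record draw commands here ...
			mRenderService->endRecording();
		}

		// Submits the recorded work and advances to the next frame in flight
		mRenderService->endFrame();
	}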

I recently gave a NAP Render Workshop, which explains many of these concepts, from high- to low-level, including the way Vulkan is implemented in NAP and how it differs from OpenGL. You can download the slides; maybe they are of help to you. I’m planning to release it as a blog post or video at some point.