Selecting objects based on mouse coordinates


I was looking through the documentation and it wasn’t immediately apparent whether selecting objects in a 3D scene is currently supported by a provided component.

I was looking at different approaches based on stencil/depth buffers and was wondering if something has already been built?

If someone could point me in the right direction, I would appreciate it.




There isn’t a direct component available to do so, but it is possible to translate screen-space to world-space coordinates using the math and mesh utility functions. The blobtrace demo uses this to calculate the position of the mouse cursor on a 3D plane. You could do the same using the bounding box or vertices of your object. See blobtraceapp.cpp -> doTrace():

	// World space camera position
	glm::vec3 cam_pos = math::extractPosition(camera_xform.getGlobalTransform());

	// Used by intersection call
	TriangleData<glm::vec3> tri_vertices;
	// Create the triangle iterator
	TriangleIterator triangle_it(mesh);

	// Perform intersection test, walk over every triangle in the mesh.
	// In this case only 2, nice and fast. When there is a hit use the returned barycentric coordinates
	// to get the interpolated (triangulated) uv attribute value at point of intersection
	while (!triangle_it.isDone())
	{
		// Use the indices to get the vertex positions
		Triangle triangle = triangle_it.next();
		tri_vertices[0] = (math::objectToWorld(vertices[triangle[0]], world_xform.getGlobalTransform()));
		tri_vertices[1] = (math::objectToWorld(vertices[triangle[1]], world_xform.getGlobalTransform()));
		tri_vertices[2] = (math::objectToWorld(vertices[triangle[2]], world_xform.getGlobalTransform()));

		glm::vec3 bary_coord;
		if (utility::intersect(cam_pos, screen_to_world_ray, tri_vertices, bary_coord))
		{
			TriangleData<glm::vec3> uv_triangle_data = triangle.getVertexData(uvs);
			mMouseUvPosition = utility::interpolateVertexAttr<glm::vec3>(uv_triangle_data, bary_coord);
			break;
		}
	}
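If it helps to see what the intersect and interpolate calls boil down to, here is a self-contained sketch of the same idea: a Möller–Trumbore ray/triangle test returning barycentric coordinates, followed by attribute interpolation. It uses a plain Vec3 struct instead of glm and is an illustration of the technique, not NAP’s actual implementation:

```cpp
#include <array>
#include <cmath>

// Minimal stand-in for glm::vec3
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle intersection. On a hit, 'bary' receives the
// barycentric weights of the three triangle vertices at the hit point.
bool intersectTriangle(Vec3 origin, Vec3 dir, const std::array<Vec3, 3>& tri, Vec3& bary)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(tri[1], tri[0]);
    Vec3 e2 = sub(tri[2], tri[0]);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps)
        return false;                       // ray parallel to triangle plane
    float inv_det = 1.0f / det;
    Vec3 t = sub(origin, tri[0]);
    float u = dot(t, p) * inv_det;          // weight of vertex 1
    if (u < 0.0f || u > 1.0f)
        return false;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv_det;        // weight of vertex 2
    if (v < 0.0f || u + v > 1.0f)
        return false;
    if (dot(e2, q) * inv_det < 0.0f)
        return false;                       // triangle lies behind the ray
    bary = { 1.0f - u - v, u, v };
    return true;
}

// Interpolate a per-vertex attribute (e.g. a UV) at the hit point
Vec3 interpolateAttr(const std::array<Vec3, 3>& attr, Vec3 bary)
{
    return { attr[0].x * bary.x + attr[1].x * bary.y + attr[2].x * bary.z,
             attr[0].y * bary.x + attr[1].y * bary.y + attr[2].y * bary.z,
             attr[0].z * bary.x + attr[1].z * bary.y + attr[2].z * bary.z };
}
```

For object selection you would run this per triangle (or first against a bounding box as an early out) and keep the closest hit.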

Alternatively, you can use OpenGL calls to read from and write to the stencil buffer. If you have any ideas on how to add a new component or functionality to support your request, I’d love to hear it!


Thanks for the fast reply.

My current plan is something like this:

  • add an ObjectSelectionComponent as a sibling to my camera component
  • add the camera as a dependency to the ObjectSelectionComponent to be able to access the perspective projection
  • add the input component as a dependency to access pointer inputs (PointerInputComponent)
  • have a list of references to the component instances that can be selected, probably a vector of ComponentInstancePtrs (I couldn’t figure out how to declare a list instead of a single instance yet)

Once these are accessible I think it should be possible to do the selection on mouse events.
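For the mouse-event step, the core math is unprojecting the cursor into a picking ray. A minimal sketch for a symmetric perspective camera looking down -Z (the function name and parameters are hypothetical; in NAP you would derive this from the camera component listed as a dependency, and then rotate the ray into world space with the camera transform):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Build a camera-space picking ray from a mouse position.
// Assumes screen y grows downward and the projection is symmetric.
Vec3 rayFromMouse(float mouse_x, float mouse_y,
                  float window_w, float window_h, float fov_y_radians)
{
    // Normalized device coordinates in [-1, 1], flipping y
    float ndc_x = 2.0f * mouse_x / window_w - 1.0f;
    float ndc_y = 1.0f - 2.0f * mouse_y / window_h;

    float tan_half_fov = std::tan(fov_y_radians * 0.5f);
    float aspect = window_w / window_h;

    // Point on the view plane at z = -1, then normalized to a direction
    Vec3 dir = { ndc_x * tan_half_fov * aspect, ndc_y * tan_half_fov, -1.0f };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return { dir.x / len, dir.y / len, dir.z / len };
}
```

The resulting direction, transformed into world space, plays the role of screen_to_world_ray in the blobtrace snippet above.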

Another question is what to do once an object is selected; maybe send a signal or similar so others could subscribe, something like that.
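On the signal idea: NAP ships its own signal/slot types, so in practice you would use those. The pattern itself can be sketched in plain C++ like this (Signal, SelectionEvent and its field are made up for illustration):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Minimal signal: subscribers register a handler, trigger() notifies them all
template <typename T>
class Signal
{
public:
    void connect(std::function<void(const T&)> handler)
    {
        mHandlers.push_back(std::move(handler));
    }

    void trigger(const T& arg)
    {
        for (auto& handler : mHandlers)
            handler(arg);
    }

private:
    std::vector<std::function<void(const T&)>> mHandlers;
};

// Hypothetical payload: whatever identifies the picked object in your scene
struct SelectionEvent
{
    int mObjectIndex;
};
```

An ObjectSelectionComponent could then expose a Signal&lt;SelectionEvent&gt; member and trigger it from its mouse handler, leaving subscribers free to react without the component knowing about them.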

I will take a look at the blobtrace demo over the weekend and try to come up with something.



Your approach sounds good. Regarding a vector of component ptrs: this is supported, but not yet available in the current NAP package. We’re currently merging and testing the new NAP 0.3 package, which will add support for component instance ptrs. If you have a package built from source, it is already available.

Look at componentptr.h, line 247:

 * Example:
 * 		class SomeComponent : public Component
 *		{
 *			std::vector<ComponentPtr<OtherComponent>> mOtherComponentList;
 *		};
 *		class SomeComponentInstance : public ComponentInstance
 *		{
 *			std::vector<ComponentInstancePtr<OtherComponent>> mOtherComponentList = initComponentInstancePtr(this, &SomeComponent::mOtherComponentList);
 *		};

If this comment is part of your package, you should be able to use arrays of ComponentPtrs.