How to generate a real-time 3D (mesh) model in Unity using a Kinect sensor
Greetings,
I'm currently developing an application with the initial goal of obtaining, in real time, a 3D model of the environment "seen" by a Kinect device. This information would later be used for projection mapping, but that's not an issue for the moment.
There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and that the model generation has to happen in real time (or close to it).
After extensive research on this topic, I came up with several possible architectures:
1) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), then into a Mesh, and finally export it into Unity for further work.
2) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), export it into Unity and then convert it into a Mesh.
3) Use KinectFusion, which already has the option of creating a Mesh model, and (somehow) automatically load the generated Mesh into Unity.
4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh directly in Unity (a rough sketch of this option follows the list).
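To make option 4 concrete, here is a minimal sketch of building the Mesh directly in Unity from a depth frame. Everything in it is an assumption on my part: the wrapper is assumed to hand over a row-major `ushort[]` of millimetre depths, the 585px focal length is only an approximate Kinect v1 intrinsic at 640x480, and `DepthToMesh`/`UpdateFromDepth` are made-up names, not any wrapper's real API.

```csharp
using UnityEngine;

// Sketch for option 4: build a grid mesh in Unity directly from a depth frame.
// Assumes the wrapper delivers a row-major ushort[] of depths in millimetres.
// 160x120 keeps the vertex count under Unity's default 65535-vertex limit,
// so the 640x480 Kinect stream has to be downsampled 4x before feeding it in.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class DepthToMesh : MonoBehaviour
{
    public int width = 160;
    public int height = 120;

    Mesh mesh;
    Vector3[] vertices;

    void Start()
    {
        mesh = new Mesh();
        mesh.MarkDynamic(); // hint to Unity that vertices change every frame

        vertices = new Vector3[width * height];
        var triangles = new int[(width - 1) * (height - 1) * 6];
        int t = 0;
        for (int y = 0; y < height - 1; y++)
        for (int x = 0; x < width - 1; x++)
        {
            int i = y * width + x;
            // Two triangles per grid cell, wound to face the sensor origin.
            triangles[t++] = i;     triangles[t++] = i + 1;         triangles[t++] = i + width;
            triangles[t++] = i + 1; triangles[t++] = i + width + 1; triangles[t++] = i + width;
        }
        mesh.vertices = vertices;   // placeholder positions until the first frame
        mesh.triangles = triangles; // topology is fixed; only vertices move
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // Call once per new depth frame from whatever wrapper you end up using.
    public void UpdateFromDepth(ushort[] depthMm)
    {
        // ~585px is a rough Kinect v1 focal length at 640x480; scale it to
        // the downsampled resolution. Replace with calibrated intrinsics.
        float fx = 585f * width / 640f;
        float fy = fx;
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            float z = depthMm[y * width + x] * 0.001f; // mm -> metres
            // Pinhole back-projection; zero-depth (shadow) pixels collapse
            // to the origin here and really need filtering or clamping.
            vertices[y * width + x] = new Vector3(
                (x - width * 0.5f) * z / fx,
                (height * 0.5f - y) * z / fy,
                z);
        }
        mesh.vertices = vertices;
        mesh.RecalculateNormals(); // costly; consider doing this only every few frames
        mesh.RecalculateBounds();
    }
}
```

The key design point is that the triangle topology is built once and only the vertex positions are rewritten per frame, which is what keeps this anywhere near real time.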
Quite honestly, I'm kind of lost here; my main issue is that the real-time requirement, combined with having to integrate several software components, makes this a tricky problem. I don't know which, if any, of these solutions are viable, and the information/tutorials on these issues aren't exactly abundant, unlike those for, say, skeleton tracking.
Any sort of help would be greatly appreciated.
Regards, Nuno
I don't see how this is a technical question related to Unity?
I'm also looking for something like this. It would be really nice to get a meshed livestream in Unity based on the ZigFu data stream or some other depth stream. I've read that it is possible to build up meshes from script in Unity (see the minimal example below):
http://forum.unity3d.com/threads/54092-Draw-Polygon
But I'm not sure if Unity can read the amount of data the Kinect produces on the fly. ZigFu by itself gets really slow when generating a high-density point cloud; I don't know if that comes from the massive amount of particles or from the 3D points it has to handle. But I've seen awesome stuff done with Unity, so why not. I think you really need a good coder for that, though. Please let me know when you achieve a step in that direction. I've also done some other projection projects, so maybe we could share some experience. Feel free to contact me at pmoede(at)gmail.com.
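For reference, "building up meshes from script" boils down to Unity's Mesh API: assign a vertex array and a triangle index array. Here is the smallest self-contained example I can think of, a single quad (the class name `ScriptedQuad` is just for illustration); the same vertices/triangles pattern scales up to a full depth grid.

```csharp
using UnityEngine;

// Smallest possible "mesh from script" example: a single quad.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class ScriptedQuad : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0), new Vector3(1, 0, 0),
            new Vector3(0, 1, 0), new Vector3(1, 1, 0)
        };
        mesh.triangles = new[] { 0, 2, 1, 1, 2, 3 }; // two triangles facing the default camera
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```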
Best regards
Answer by MainframePT · Jan 13, 2014 at 01:03 PM
Fair enough, let me put it this way: how do you generate a dynamic mesh (in real time, or close to it) in Unity based on what the Kinect is currently seeing?
I didn't want to throw out this question on its own, since unfortunately the answer will most likely have to involve other frameworks/libraries (mentioned in my original post).
There are already wrappers using OpenNI (ZigFu) or the Microsoft Kinect SDK which give access to the Kinect's depth information within Unity, but is that alone enough to build a Mesh using only Unity?
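In principle it should be: once a wrapper hands over the depth array, the rest is plain Mesh manipulation. A hedged sketch of the glue code, where `IDepthSource` is a hypothetical stand-in for whatever the ZigFu or SDK wrapper actually exposes (their real property names differ), and `DepthToMesh` is the grid builder sketched earlier in this thread:

```csharp
using UnityEngine;

// Hypothetical interface standing in for a Kinect wrapper's depth access.
public interface IDepthSource
{
    bool NewFrameAvailable { get; }
    ushort[] LatestDepthMm { get; } // row-major, width x height, millimetres
}

public class DepthMeshDriver : MonoBehaviour
{
    public DepthToMesh target; // the grid-mesh builder sketched above
    IDepthSource source;       // assigned however your wrapper provides it

    void Update()
    {
        // Only touch the mesh when a new frame has arrived; rebuilding at
        // 30 fps is usually fine, uploading the vertices is the main cost.
        if (source != null && source.NewFrameAvailable)
            target.UpdateFromDepth(source.LatestDepthMm);
    }
}
```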
Have you generated the mesh using Kinect depth data? I'm trying to do the same and having problems because of the depth shadows.
No, I ended up using an entirely different approach that didn't require mesh generation.
Thanks @MainframePT for your reply.
I have generated the Mesh (shown in the attached image untitled3.png [1]) using the asset at https://www.assetstore.unity3d.com/en/#!/content/18708, but I'm not able to remove the depth shadows (near the chair). I've tried what's mentioned in the article at https://www.codeproject.com/Articles/317974/KinectDepthSmoothing but am still getting the depth shadows. Can you please suggest something to remove them? Any help would be appreciated. Thanks
[1]: /storage/temp/103521-untitled3.png
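One thing worth trying for the shadows (the zero-depth pixels the sensor can't see) is a hole-filling pass on the depth frame before it reaches the mesh. The sketch below is only loosely in the spirit of the CodeProject article's pixel filtering, not its exact implementation; `FillDepthHoles`, the 5x5 window, and the median choice are my own assumptions.

```csharp
using System.Collections.Generic;

public static class DepthFilter
{
    // Fill zero-depth "shadow" pixels with the median of the valid depths in
    // their 5x5 neighbourhood. A sketch loosely inspired by the CodeProject
    // article's filtered smoothing, not a drop-in copy of it.
    public static ushort[] FillDepthHoles(ushort[] depth, int width, int height)
    {
        var result = (ushort[])depth.Clone();
        var neighbours = new List<ushort>();
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            if (depth[y * width + x] != 0) continue; // valid pixel, keep it

            neighbours.Clear();
            for (int dy = -2; dy <= 2; dy++)
            for (int dx = -2; dx <= 2; dx++)
            {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                ushort d = depth[ny * width + nx];
                if (d != 0) neighbours.Add(d);
            }
            if (neighbours.Count > 0)
            {
                neighbours.Sort();
                result[y * width + x] = neighbours[neighbours.Count / 2];
            }
        }
        return result;
    }
}
```

Running this once (or twice, for larger shadows) before updating the mesh should shrink the holes around occlusion edges; temporal averaging over the last few frames, which I believe the article also covers, helps with flicker.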