How do I get the differences between a background and a webcam texture, like a green screen?
I don't know if anyone else has had a similar problem, but I'm trying to make a Kinect-like motion controller script that uses a webcam, such as the one on your laptop. The way I'm doing it is taking a picture of the background and comparing it to the current camera Texture2D. The only problem is I can't subtract the same pixels from both of them. I looked online and there was almost no talk about comparing two Texture2Ds, and none of it included a green screen effect. Here is my script:
using System.Collections;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

public class facerecognition : MonoBehaviour {

    void Start ()
    {
        WebCamTexture CamTex = new WebCamTexture();
        Renderer renderer = GetComponent<Renderer>();
        renderer.material.mainTexture = CamTex;
        CamTex.Play();
    }

    // Update is called once per frame
    void Update ()
    {
        // sets the background when S is pressed
        if (Input.GetKeyUp("s"))
        {
            StartCoroutine(setBackground());
            // lets you know the S key has registered
            Debug.Log("s has been pressed");
        }
        StartCoroutine(backgroundisolation());
    }

    // isolates the background
    IEnumerator backgroundisolation()
    {
        yield return new WaitForEndOfFrame();
        // copies the current webcam frame into a Texture2D
        WebCamTexture camTex = GetComponent<Renderer>().material.mainTexture as WebCamTexture;
        Texture2D tex2 = new Texture2D(camTex.width, camTex.height);
        tex2.SetPixels(camTex.GetPixels());
        tex2.Apply();
        // loads the background
        byte[] fileData = File.ReadAllBytes(Application.persistentDataPath + "/background.png");
        Texture2D tex = new Texture2D(2, 2);
        tex.LoadImage(fileData);
        // checks if it's the same (helps with lag and can trigger a default animation)
        if (tex2 != tex)
        {
            // Texture2D the green screen effect is applied to
            Texture2D changes = new Texture2D(tex.width, tex.height);
            // here's my problem \|/ -- this doesn't compile, textures can't be subtracted
            changes.SetPixels(tex2 - tex);
        }
        yield break;
    }

    // sets the background
    public IEnumerator setBackground()
    {
        yield return new WaitForEndOfFrame();
        // grabs the current webcam frame
        WebCamTexture camTex = GetComponent<Renderer>().material.mainTexture as WebCamTexture;
        Texture2D snap = new Texture2D(camTex.width, camTex.height);
        snap.SetPixels(camTex.GetPixels());
        snap.Apply();
        byte[] bytes = snap.EncodeToPNG();
        // writes a PNG
        File.WriteAllBytes(Application.persistentDataPath + "/background.png", bytes);
        // to know it was called
        Debug.Log("background saved");
    }
}
If there are any operators I don't know of, or something like a SamePixels function, that would really help. Thank you.
Don't forget to format your pasted code with the 101010 button - this was done on your behalf this time. :)
Thank you, I'll remember that next time. It was my first question on Unity Answers; I usually figure things out before having to ask. I hope an answer comes so "everyone" might be able to port Kinect games to PC.
Answer by AlwaysSunny · Dec 29, 2016 at 01:04 AM
Doing such an operation in real time (especially alongside other stuff like game logic) probably requires some kind of specialized optimization techniques I am not personally aware of. There may be mature third-party solutions you could bring into Unity, this being more of a general programming problem than a Unity-specific thing.
For doing it yourself, a few things come to mind. Foremost is that performing background removal isn't a difficult algorithm, but doing it well and doing it fast will require some careful thought with regard to optimization. You may want to consider putting this work in a separate thread, if possible. The basic principle is this:
Loop over every pixel value in the current texture.
Compare the value to the background texture.
If they are similar within some threshold,
you've found a pixel to exclude from the current texture.
Using a greenscreen has a similar procedure, but instead of comparing to another texture, you're comparing to a "chroma key" color.
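The steps above can be sketched roughly like this. This is an untested outline, not a drop-in implementation: `RemoveBackground`, the texture parameters, and the threshold value of 30 are all my own placeholders. It uses GetPixels32/SetPixels32 (Color32 is cheaper than Color for per-pixel work) and marks "background" pixels by zeroing their alpha, which only shows up visually if the material's shader supports transparency.

```csharp
using UnityEngine;

public static class BackgroundRemoval
{
    // Compares each pixel of the current frame to the stored background and
    // makes sufficiently similar pixels transparent. Both textures are assumed
    // to be readable and the same size.
    public static void RemoveBackground(Texture2D currentTex, Texture2D backgroundTex, byte threshold)
    {
        Color32[] current = currentTex.GetPixels32();
        Color32[] background = backgroundTex.GetPixels32();

        for (int i = 0; i < current.Length; i++)
        {
            bool similar =
                Mathf.Abs(current[i].r - background[i].r) < threshold &&
                Mathf.Abs(current[i].g - background[i].g) < threshold &&
                Mathf.Abs(current[i].b - background[i].b) < threshold;

            if (similar)
                current[i].a = 0; // exclude this pixel (transparent)
        }

        currentTex.SetPixels32(current);
        currentTex.Apply();
    }
}
```

For the greenscreen variant, you would replace `background[i]` with a single chroma-key Color32 and compare every pixel against that one value instead.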
Have a look at the Texture2D API reference to learn about the Get- and SetPixels() calls you'll be using. Definitely use pixels, emphasis on the plural "s", because it'll be cheaper. We can discuss further if you like, but this is roughly the extent of my knowledge on the subject.
Would something like Mathf.Approximately, except on a bigger scale, calculate it fast enough to get a decent frame rate and resolution? It would compare each pixel, so it might be a little slow, but I might be able to configure it to scan a line of pixels at a time, and anything different could then be scanned pixel by pixel, if there's any way of getting it to work in the first place. So basically there are ways to optimise, but I need to get it working first.
I imagine that established third-party solutions have it all figured out down to the last detail to squeeze every millisecond of performance. If you want or need to code your own solution, you could search for articles on the subject - you could probably find some academic papers too.
I suspect they also multi-thread, since multi-core processors are so commonplace. Again, I'm no expert, but I have done real-time image adjustments before, and they are very expensive. I think multi-threading should be your first optimization. By all means, get it working without multi-threading first if you're unfamiliar with multi-threading.
The stated Mathf.Approximately function might work, but you'd have to call it at most (totalPixels * 3) times. That "times three" being for the separate R, G, and B channels. I'm honestly not sure whether there's a better way that will guarantee success in every situation. You could skip two (or one) of the channel approximation checks when the first (or second) channel proved sufficiently different. That's an obvious optimization.
Splitting the work into chunks (performing the tests only on a certain rectangular area of the whole texture in a given frame) and performing work on them separately might help - if you were okay with allowing the background removal system to operate at its own pace. In other words, if you allow the job to be split up over say, two frames instead of one, you cut the texture's update framerate in half, but could maintain your target FPS for the rest of the game / project.
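One way to sketch that chunking idea is a coroutine that processes a few rows per frame and yields in between. Everything here is illustrative: `rowsPerFrame` is a tuning knob I made up, and note that because the job spans several frames, the webcam image can change underneath it, so the result may lag or tear slightly.

```csharp
using System.Collections;
using UnityEngine;

public class ChunkedRemoval : MonoBehaviour
{
    // Spreads the per-pixel comparison over multiple frames: each frame only
    // rowsPerFrame rows are tested, then the coroutine yields until next frame.
    IEnumerator RemoveBackgroundChunked(Texture2D current, Texture2D background,
                                        byte threshold, int rowsPerFrame)
    {
        Color32[] cur = current.GetPixels32();
        Color32[] bg  = background.GetPixels32();
        int width = current.width;

        for (int y = 0; y < current.height; y += rowsPerFrame)
        {
            int yEnd = Mathf.Min(y + rowsPerFrame, current.height);
            for (int row = y; row < yEnd; row++)
            {
                for (int x = 0; x < width; x++)
                {
                    int i = row * width + x;
                    if (Mathf.Abs(cur[i].r - bg[i].r) < threshold &&
                        Mathf.Abs(cur[i].g - bg[i].g) < threshold &&
                        Mathf.Abs(cur[i].b - bg[i].b) < threshold)
                    {
                        cur[i].a = 0;
                    }
                }
            }
            yield return null; // resume on the next frame
        }

        current.SetPixels32(cur);
        current.Apply();
    }
}
```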
Oh, and I wouldn't actually use Mathf.Approximately - you should code your own approximation test so that you have control over the desired threshold.
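A custom approximation test along those lines might look like this. `ChannelsSimilar` is a name I've invented; it takes plain ints so it has no Unity dependency, and the short-circuiting `&&` gives you the early-out mentioned above for free: if the red channels already differ by more than the threshold, green and blue are never checked.

```csharp
using System;

public static class ColorCompare
{
    // Returns true when all three channels differ by at most threshold.
    // Short-circuit evaluation skips later channels once one fails.
    public static bool ChannelsSimilar(int r1, int g1, int b1,
                                       int r2, int g2, int b2, int threshold)
    {
        return Math.Abs(r1 - r2) <= threshold
            && Math.Abs(g1 - g2) <= threshold
            && Math.Abs(b1 - b2) <= threshold;
    }
}
```

Unlike Mathf.Approximately, the threshold is an explicit parameter, so you can tune how aggressive the background match is.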
Answer by First-47777-Guy · Dec 29, 2016 at 06:39 AM
Well, if someone could do this in real time, they could look at the shape of the offset from the background. It would basically be an isolation that makes this the Kinect equivalent of the infrared sensor. If so, the screen resolution wouldn't matter that much, but the FPS would greatly affect what I'm trying to do. Basically, anything possible with the Kinect would be possible with any ordinary webcam, say on a laptop. You could even calculate depth if the player's height and the webcam's focal length are given, and the fastest way to do that would be to have those variables already there. Instead of a third-party solution, I could give meaning to that separation directly instead of putting it in a script for the calculation and for the motion tracking itself. In other words, I don't need the code to go through two scripts; I could instead make one that does the greenscreen and another that could then translate the pixels faster than a third-party solution. This could make Microsoft shipping an ordinary webcam with the Xbox Scorpio kind of a big deal. So you can see why, even as a proof of concept, it is really important, without even mentioning the PC porting for Kinect devs, so any help is really great.