Problem: Textures updated by plugin don't behave as expected in combination with an HLSL shader
Hi
I've already posted this issue in the forums, but I still haven't received an answer.
I'm developing under Windows 7 with a GF 210M (DX 10.1) and have the following problem. I'm updating two textures from my C++ plugin via glTexSubImage2D. This part works: I can read those textures as a GUITexture or render them with a shader to the backbuffer, and they are displayed correctly. I run my program in forced OpenGL mode (-force-opengl). This part of the code works perfectly fine.
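For reference, binding a texture to the plugin boils down to passing Unity's OpenGL texture name to the native side. A simplified sketch of that idea is below; the DLL name and export are only placeholders, not my actual plugin interface:
using System.Runtime.InteropServices;
using UnityEngine;
public class BindExample : MonoBehaviour
{
    // Placeholder export name, the real plugin entry point differs.
    [DllImport("MyKinectPlugin")]
    private static extern void PluginBindTexture(int glTextureName, int imageIndex);
    public Texture2D tex;
    void Start()
    {
        // The native side keeps this GL texture name and updates the
        // texture every frame with glTexSubImage2D.
        PluginBindTexture(tex.GetNativeTextureID(), 0);
    }
}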
The second thing I do is render my scene and then, in OnRenderImage, apply a post-processing shader with Blit. I can access and display the rendered depth texture as well as the rendered color texture, and I can also access the two plugin textures I mentioned before. But if I use all of them at the same time, one of the textures displays wrong data.
First off, here's the script that is attached to the camera:
using UnityEngine;
using System.Collections;

public class MixAR : MonoBehaviour
{
    public Shader mixShader;
    public float kinectMaxDepth;

    private Texture2D colorTex;
    private Texture2D depthTex;
    private Material targetMaterial;

    IEnumerator Start()
    {
        targetMaterial = new Material(mixShader);

        // Initialise the Kinect plugin and start the color and depth streams.
        yield return StartCoroutine(KinectManager.Init(KinectManager.NUI_INITIALIZE_FLAG_USES_COLOR | KinectManager.NUI_INITIALIZE_FLAG_USES_DEPTH));
        KinectManager.InitImageFetch(ImageType.NUI_IMAGE_TYPE_COLOR, ImageResolution.NUI_IMAGE_RESOLUTION_640x480);
        KinectManager.InitImageFetch(ImageType.NUI_IMAGE_TYPE_DEPTH, ImageResolution.NUI_IMAGE_RESOLUTION_320x240);

        // These two textures are updated from the native plugin via glTexSubImage2D.
        colorTex = new Texture2D(640, 480);
        depthTex = new Texture2D(320, 240, TextureFormat.RGB24, false);
        KinectManager.BindTexture(colorTex, ImageIndex.INDEX_RGB);
        KinectManager.BindTexture(depthTex, ImageIndex.INDEX_DEPTH);

        targetMaterial.SetFloat("_MaxDepth", kinectMaxDepth);
        targetMaterial.SetTexture("_KinectColorSource", colorTex);
        targetMaterial.SetTexture("_KinectDepthSource", depthTex);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Graphics.Blit(src, dst, targetMaterial, 0);
    }
}
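One thing I've been wondering about: should it make any difference to re-assign the two textures to the material every frame, right before the Blit? I.e. changing OnRenderImage roughly like this, just as an experiment, not something I know to be a fix:
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    // Re-bind the plugin-updated textures each frame, in case the
    // material bindings get lost between frames.
    targetMaterial.SetTexture("_KinectColorSource", colorTex);
    targetMaterial.SetTexture("_KinectDepthSource", depthTex);
    Graphics.Blit(src, dst, targetMaterial, 0);
}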
Furthermore, here's the code of the shader that is assigned to mixShader in this script:
Shader "Custom/KinectOcclusion"
{
Properties
{
_KinectColorSource ("Kinect Color Camera", 2D) = "" {}
_KinectDepthSource ("Kinect Depth Source", 2D) = "" {}
_MainTex ("", 2D) = "" {}
_CameraDepthTexture ("", 2D) = "" {}
_MaxDepth ("Maximum KinectSDK Depth", Float) = 4096
}
SubShader
{
Pass
{
ZTest Always Cull Off ZWrite Off
CGPROGRAM
#pragma vertex identityTransf
#pragma fragment fragment2Depth
#pragma only_renderers opengl d3d9
//#pragma target 3.0
#include "UnityCG.cginc"
uniform sampler2D _KinectColorSource;
uniform sampler2D _KinectDepthSource;
uniform sampler2D _MainTex;
uniform sampler2D _CameraDepthTexture;
float _MaxDepth;
struct VertexInput
{
float4 vertex : POSITION;
float4 texCoord : TEXCOORD0;
};
struct VertexOutput
{
float4 vertexPosition : SV_POSITION;
float4 texCoord;
};
struct FragmentOutput
{
float4 color : COLOR;
};
VertexOutput identityTransf(VertexInput input)
{
VertexOutput output;
output.vertexPosition = input.vertex;
output.vertexPosition.xy *= 2.0f;
output.vertexPosition.xy -= 1.0f;
output.texCoord = input.texCoord;
return output;
}
FragmentOutput fragment2Depth(VertexOutput input)
{
FragmentOutput output;
float2 depthComponents = tex2D(_KinectDepthSource, input.texCoord.xy).rg;
float newDepth = (depthComponents.g * 255.0f + depthComponents.r) * 255.0f / _MaxDepth;
float oldDepth = DECODE_EYEDEPTH(tex2D(_CameraDepthTexture, input.texCoord.xy).r) / 10.0f;
if (newDepth < oldDepth)
{
output.color = tex2D(_KinectColorSource, input.texCoord.xy);
} else
{
output.color = tex2D(_MainTex, input.texCoord.xy);
}
return output;
}
ENDCG
}
}
}
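Note: elsewhere in my project (not shown here) the camera is set up to render a depth texture, otherwise _CameraDepthTexture wouldn't contain valid data. That part is just something like:
// Required so Unity generates _CameraDepthTexture for this camera.
void Awake()
{
    GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
}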
If I execute this code, the contents of _MainTex get replaced with the contents of _KinectDepthSource, which is basically a depth texture encoded in the red and green channels. However, if I bind a normal/static texture from the assets to colorTex and depthTex, everything works fine and as expected.
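For clarity, here is the depth packing that the shader's (g * 255 + r) * 255 decode assumes. The actual packing happens in the native plugin, so this C# version is only illustrative:
// Illustrative C# equivalent of the packing I assume on the native side:
// split each Kinect depth value into two base-255 "digits".
public static class DepthPacking
{
    public static void Encode(ushort depth, out byte r, out byte g)
    {
        r = (byte)(depth % 255);   // low digit, stored in the red channel
        g = (byte)(depth / 255);   // high digit, stored in the green channel
    }
}
// Sampling yields r/255 and g/255, so the shader recovers
// (g/255 * 255 + r/255) * 255 = g * 255 + r = depth.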
I've read a lot of the Unity documentation and haven't found anything I might have missed. Any help would be appreciated.
I have the same problem. I developed a plugin to handle occlusion with Kinect, I run my shader with Unity 3D in OpenGL mode (-force-opengl), and I hit the same problem with the OnRenderImage function :S
Did you find a solution? Thanks
Answer by sulix · Jan 05, 2012 at 10:18 AM
Hi, any update on this issue yet? We really need this functionality, and as long as we don't know whether this is a bug or an incompatibility on Unity's side, it isn't worth the effort to look further into the problem. We've already spent a lot of time on this matter.