How can I add a border to my sprites?
I'm using 2D Toolkit to create a 2D game for iOS and Windows with Unity Free. I would like to be able to add a border to my sprites under certain conditions, like so:
Ideally, I don't want to have to create two versions of every sprite: one with a border, and one without. I'd like to simply create a single version of each sprite.
The way I currently have this working is by creating the border directly in the texture source image in Photoshop, and setting the opacity of the border pixels to 99%. Then, in Unity, I have two shaders:
1. The normal 2D Toolkit shader, which I use for displaying the sprite with the border: http://pastebin.com/ER1aJzmm
2. A custom shader I made to hide any pixels with alpha less than 1.0, which I use to display the sprite without the border: http://pastebin.com/K9vGpLgZ
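For illustration, the cutoff part of such a custom shader can be boiled down to a fragment function along these lines (a sketch only, not the actual pastebin contents; `_MainTex` and the `v2f_img`/`vert_img` helpers come from Unity's standard `UnityCG.cginc` include):

```
sampler2D _MainTex;

half4 frag (v2f_img i) : COLOR
{
    half4 c = tex2D(_MainTex, i.uv);
    // Discard the 99%-opacity border pixels.
    // clip() is the AlphaTest-style approach discussed below.
    clip(c.a - 1.0);
    return c;
}
```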
Unfortunately, I was just reading elsewhere on Unity Answers that using AlphaTest in a shader is a "big no-no" for iOS, so it seems like this is ultimately not going to be a very good solution. Furthermore, it locks me into using a single border color (whatever color I choose when I make the sprite in Photoshop), instead of being able to dynamically add a border of any color at runtime.
Is there a better, higher performance way to add a border to a sprite with a shader that will work well on iOS? Or am I stuck creating two versions of each of my sprites?
Just thinking out loud here: I've done some simple pixel shader work in the past with HLSL, and I was able to test the color of nearby pixels and selectively set a new color for any given pixel based on that test (which seems like it might be helpful for creating a border), but I haven't been able to figure out how to do that with Unity. Admittedly, I don't understand Unity shaders very well (or any shaders very well, for that matter), so maybe I'm completely barking up the wrong tree here, but I just thought I'd throw that idea out there.
You should tell whoever made 2D Toolkit that fixed function shaders aren't appropriate for mobile devices anymore!
There isn't really a way to do these kinds of graphics that isn't a "big no-no"; GPUs were not created to deal with this stuff. Alpha Testing is very likely the best solution for aliased sprites. I'm interested to see the HLSL.
GLES 1.1 tends to be faster with dynamic meshes, the kind you'd need with animated sprites, even on newer devices.
With GLES2 on iOS devices at least, you occasionally end up with performance spikes when directly manipulating the Mesh structures. These spikes can be VERY large.
The only workaround for this at the moment seems to be double buffering, but a naive implementation of that would mean VERY high script overhead updating more mesh data than necessary. A clever implementation would still be slower than the GLES1.1 approach above.
I don't understand your comment about fixed function shaders using more battery - fixed function shaders, or any register combiner type setup, will get compiled to the same internal shader microcode by the driver. Using the fixed function pipeline means the driver has to do a bit more work, but that couldn't make any significant difference unless you're constantly changing shaders while rendering, and even then it would be dwarfed by the overhead described above.
Depending on what you do, GLSL shaders could be slightly faster, and/or save a little bit of battery life in a lot of circumstances, but not in the one described above.
Edit: Just so I'm clear, what I'm saying is that performance using GLES1.1 (armv6 mode) on newer devices is better than GLES2 mode on those devices, and you don't get the spikes described above in that mode.
My point isn't so much about shaders as about how efficient it is to update mesh data from within Unity. Updating mesh data through the GLES1.1 codepath is significantly more efficient than through the GLES2 path, unless you double buffer extremely efficiently - which isn't as easy as you'd think it would be in something as general as 2D Toolkit.
If you're using GLES1.1 to avoid this overhead, then you have no choice but to use the FFP, which incidentally gets compiled down into shaders, even on hardware which doesn't officially expose shader capability, e.g. PVR MBX.
The speed of the shader isn't as relevant here, as I mentioned before, as the cost of updating the meshes can be much much larger (and bog down your CPU significantly) depending on what you do, or how many sprites you have, etc.
Again, all it boils down to is what you're drawing. Making a blanket statement like "FFP shaders are slower than GLSL shaders" isn't looking at the whole picture, which can be a lot more complicated than that.
Writing GLSL equivalents for these shaders is trivial. If anyone would like them, let me know and I'll add them, but I don't see any point in prioritizing this over the real bottleneck, which is going to affect a lot more people - i.e. the mesh update cost.
If you would be kind enough to read my previous post properly, I said "I don't see any point in prioritizing this over the real bottleneck". I'm not going to bother writing GLSL alternatives until the real issue with dynamic meshes in the iOS GLES2 codepath is solved. As it stands, with or without programmable shaders, the GLES2 codepath is a lot slower than GLES1.1, which makes the shaders point kinda moot. If someone needs these shaders sooner than I get around to it, let me know and I'll adjust my priorities, but so far no one has actually reported any critical performance issues.
Jessy, what you're suggesting doesn't solve the problem either. In fact there wasn't one to start with. You just said, fixed function shaders are slow and drain battery life without any consideration of the situation or the circumstances.
The solution isn't to put in programmable shaders, but to solve the underlying problem to make GLES2 a good alternative before putting it in.
Sorry, but you seem to be missing the point here. What I'm saying is this prerequisite really needs to be done before implementing and recommending GLES2 shaders. That's it.
You misunderstand my recommendation to customers as responding to product criticism - I don't waste my time with that sort of stuff.
As it stands, it's hard to recommend the GLES2 codepath over GLES1. With any amount of animated sprites, the performance spikes are going to be much higher than any gains to be had from shader performance.
I've had enough support requests about the performance spikes that it IS a problem to me which I am addressing, even if it is a bug in Unity.
Anyway, I've said what I need to here - anyone who is having any performance issues, just get in touch in our support forum/email.
Answer by unikronsoftware · Jun 14, 2012 at 07:22 PM
To answer that question: you can do something quite cheeky to get the effect you require, but it will require a custom shader and the texture set up in a special way. What I'm describing here will only work with point sampling, and will fail when bilinear / trilinear filtering is turned on.
First of all, let's say you color key your source image in such a way that r = 0 => transparent. You can make your shader produce the same result you see above by deriving the alpha channel from that key.
half4 color = tex2D( ... );
color.a = (color.r == 0) ? 0 : 1; // r = 0 means transparent; if this evaluates into a branch, you can reduce it to a max/multiply
Now you have basic sprite drawing, and the alpha channel is free to store anything you like. There is one caveat: you will not be able to draw any color where r = 0, but r = 1 works, and that might be a fair compromise.
Now for the border
In your alpha channel, make sure you always draw a one pixel border around all your sprites. When you need to draw something with a border, simply switch to a shader which uses this:
color.rgb = (color.r == 0) ? borderColor.rgb : color.rgb;
So now you have a border whose color you can control. You can get creative with this and use the vertex color attribute to do various things instead of just tinting the sprite.
Just a random example that springs to mind: assuming you can get the instruction count low enough, you can merge the two shaders to produce something where you can fade the border in using the alpha channel of the vertex color.
half colorKey = color.r;
color.rgb = (colorKey == 0) ? borderColor.rgb : color.rgb;
color.a = (colorKey == 0) ? IN.vcolor.a : 1; // border fades with vertex alpha; sprite body stays opaque
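Putting the pieces together, a fragment function for such a merged shader might look something like this (a sketch only; `_MainTex`, `_BorderColor` and the `v2f` input struct are assumed names, and it presumes the texture's alpha channel holds the one pixel border mask described above):

```
sampler2D _MainTex;
half4 _BorderColor;

half4 frag (v2f i) : COLOR
{
    half4 tex = tex2D(_MainTex, i.uv);
    half colorKey = tex.r;   // r == 0 marks border / transparent texels
    half borderMask = tex.a; // alpha channel stores the one pixel border

    half4 color;
    color.rgb = (colorKey == 0) ? _BorderColor.rgb : tex.rgb;
    // fade the border in with the vertex color alpha; the sprite body stays opaque
    color.a = (colorKey == 0) ? borderMask * i.color.a : 1;
    return color;
}
```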
Hope this helps.
p.s. always look at shader output and/or profile to see what the compiler has produced. There will always be ways to coax it to do one thing over the other to gain performance.
@unikron: Wow, great idea! Thanks for sharing. I have a question though: let's say having the ability to do a dynamic border color isn't important. Would the method you're proposing still be more performant than what I described above in my original question? (99% opacity border in the texture, hidden/shown using AlphaTest in the shader.) Or are the two shader methods about equal, performance-wise?
I'm not sure about performance there; it's probably worth profiling in situ to work out which is better. It really depends on the bottleneck, and whether it is even a performance issue in the first place.
Alpha test is slower than alpha blending on PVR. You can refactor your shader so it ends up doing something like this:
color.a = (color.a < 1.0) ? 0.0 : 1.0;
and simply use alpha blending, without bothering to switch shaders at all.
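A minimal sketch of that refactor as a fragment function, assuming Blend SrcAlpha OneMinusSrcAlpha is enabled in the pass (`_MainTex` and the `v2f_img` helper are assumed names from Unity's standard includes):

```
sampler2D _MainTex;

half4 frag (v2f_img i) : COLOR
{
    half4 c = tex2D(_MainTex, i.uv);
    // Snap the alpha: border pixels (alpha < 1) become fully transparent,
    // everything else fully opaque - no AlphaTest / clip() needed.
    c.a = (c.a < 1.0) ? 0.0 : 1.0;
    return c;
}
```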