Why did my render time increase after lowering the vertex count?
UPDATED: Revised the question with new numbers after getting both the high-vertex and low-vertex objects to batch dynamically.
I have around 80 animated objects on screen, each composed of about 13 submeshes that drive the animation. Each object used to be 1630 verts and is now 600 verts.
My framerate did not improve at all after optimizing the meshes.
Unity's internal profiler on iOS reveals this about my game:
BEFORE
cpu-player> min: 38.2 max: 43.0 avg: 40.2
cpu-ogles-drv> min: 9.0 max: 10.4 avg: 9.4
cpu-present> min: 0.4 max: 2.8 avg: 1.0
frametime> min: 48.3 max: 53.6 avg: 50.9
draw-call #> min: 17 max: 17 avg: 17 | batched: 1106
tris #> min: 68662 max: 68662 avg: 68662 | batched: 60988
verts #> min: 51097 max: 51097 avg: 51097 | batched: 45583
player-detail> physx: 6.1 animation: 0.0 culling 0.0 skinning: 0.0 batching: 12.8 render: 9.9 fixed-update-count: 1 .. 1
mono-scripts> update: 8.8 fixedUpdate: 0.7 coroutines: 0.0
mono-memory> used heap: 19111936 allocated heap: 29650944 max number of collections: 0 collection total duration: 0.0
AFTER
cpu-player> min: 36.7 max: 39.8 avg: 38.5
cpu-ogles-drv> min: 4.6 max: 5.5 avg: 4.8
cpu-present> min: 0.9 max: 3.8 avg: 1.5
frametime> min: 42.9 max: 50.0 avg: 45.3
draw-call #> min: 14 max: 14 avg: 14 | batched: 1232
tris #> min: 40236 max: 40236 avg: 40236 | batched: 32560
verts #> min: 35614 max: 35614 avg: 35614 | batched: 30096
player-detail> physx: 0.4 animation: 3.4 culling 0.0 skinning: 0.0 batching: 7.2 render: 15.2 fixed-update-count: 1 .. 1
mono-scripts> update: 9.3 fixedUpdate: 0.7 coroutines: 0.0
mono-memory> used heap: 20541440 allocated heap: 29650944 max number of collections: 0 collection total duration: 0.0
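To make the comparison concrete, here is a small script (mine, not from the original post) that diffs the player-detail components of the two profiler dumps. All numbers are copied directly from the BEFORE/AFTER samples above:

```python
# Per-frame player-detail timings (ms) copied from the two profiler dumps.
before = {"physx": 6.1, "animation": 0.0, "batching": 12.8, "render": 9.9}
after  = {"physx": 0.4, "animation": 3.4, "batching": 7.2, "render": 15.2}

for key in before:
    delta = after[key] - before[key]
    print(f"{key:>9}: {before[key]:5.1f} -> {after[key]:5.1f} ms ({delta:+.1f})")

# Totals: 28.8 ms before vs 26.2 ms after, so the sum still improved
# even though the render component alone went up by 5.3 ms.
print(f"    total: {sum(before.values()):5.1f} -> {sum(after.values()):5.1f} ms")
```

Summing the components shows the apparent paradox: render rose by 5.3 ms, but batching and physics dropped by more, so the overall player cost still went down.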
You can see that the render cost is quite high in the AFTER state. In fact, render time is reported as HIGHER in an environment where draw calls and vertex count are lower! What gives?
What does that render number refer to specifically, and how do I improve it? The docs don't go into enough detail.
The docs really aren't clear, but "time spent rendering" probably means per frame. These numbers are from the debugger, right? I've noticed that while optimizing and debugging, my FPS stays pretty constant; I don't see any real gains until I'm play-testing the game off the debugger.
Also, look here maybe: http://cocoa$$anonymous$$m.com/post/78650905677/polishing-your-unity3d-app-for-ios-and-other
Answer by Bunny83 · Feb 03, 2015 at 12:53 AM
It looks like you're just having trouble interpreting your results correctly. Almost all values improved after your optimization. Those values are in milliseconds, and the lower the frame time, the higher the frame rate. The relation between fps and frame time is
fps = 1/frametime;
frametime = 1/fps;
So a frametime of 50ms equals ~20fps (1/0.05 == 20)
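The same relation in runnable form, applied to the average frame times from the dumps above (Python here purely for illustration, since the profiler reports milliseconds):

```python
# fps = 1 / frametime(seconds), i.e. 1000 / frametime(milliseconds).
def frametime_ms_to_fps(ms):
    return 1000.0 / ms

print(frametime_ms_to_fps(50.9))  # BEFORE avg -> ~19.6 fps
print(frametime_ms_to_fps(45.3))  # AFTER avg  -> ~22.1 fps
print(frametime_ms_to_fps(50.0))  # 50 ms      -> 20.0 fps
```

So the AFTER dump is in fact roughly 2.5 fps faster, even though the render component alone rose.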
If the current result isn't satisfying, you might just have reached the limit of the hardware in use. In the past the practical draw call limit was around 20 - 30; on newer hardware it's around 30 - 50. However, it's always the sum of everything in your app that counts.
I'm observing FPS numbers in Xcode on an iPad Mini (original). Both before and after optimization the number is steady around 20/21 FPS. I suppose it's possible that the optimization, while better, just isn't enough. It seems strange, though, that I didn't see any real improvement.
Why is render time HIGHER on a lower vert count and lower draw call count?
I've updated my numbers - this answer isn't quite as relevant now, as it doesn't address the "render" time question.
What exactly the render number represents isn't clear from the documentation. However, an increase of 5.3 ms really isn't something you should care about.
Also note that Apple often clamps the frame rate to certain values to avoid extreme fluctuations.
I'm not worried about the additional 5.3 ms so much as about why there was an increase instead of a decrease.
If there is something screwy going on with our model, settings, or environment that is causing the increase, it could be robbing us of the performance gains from optimizing the vertex count so heavily. There isn't much else to optimize here without robbing the game of its essentials, so I absolutely need to figure out why optimizing those meshes didn't raise our FPS by even a single digit (going by Xcode's FPS counter).
Answer by eshan-mathur · Feb 04, 2015 at 02:58 AM
So it turns out that the number of mesh renderers was the cause of the high render time.
As I explained at the top of the post, each object is composed of 13 pieces that animate, which means 13 mesh renderers in each game object's hierarchy.
Reducing those 13 pieces to 4 significantly improved the render cost (from 15.2 ms in that sample down to around 5 ms). I'm not exactly sure why the more complex meshes handled this better, but it doesn't really matter: now that the render cost is down, the higher batching cost of the complex meshes makes them definitively less performant.
Cutting down the number of individual pieces per object did mean the animation fidelity took a hit, but it seems manageable.
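A back-of-the-envelope check (a sketch I've added, using only the object, piece, and millisecond figures quoted in this thread) suggests the render time tracks the total number of mesh renderers roughly linearly:

```python
# 80 animated objects, each split into sub-mesh renderers (numbers from the post).
objects = 80

renderers_before = objects * 13   # 13 mesh renderers per object
renderers_after  = objects * 4    # consolidated to 4 per object
print(renderers_before, renderers_after)  # 1040 320

# Render cost per renderer: 15.2 ms over 1040 renderers vs ~5 ms over 320.
per_renderer_before = 15.2 / renderers_before   # ~0.015 ms each
per_renderer_after  = 5.0 / renderers_after     # ~0.016 ms each
```

Both per-renderer figures land around 0.015 ms, which is consistent with the render number being dominated by per-renderer overhead rather than vertex count.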
I won't mark this as the answer, however, because it doesn't explain why the render cost is higher with less complex meshes. I would still really love to know why that makes any kind of sense.