Share Viewport (Camera) in Multiplayer
I am planning on using Unity's NetworkView to create a dual-device game: an FPS running on a PC, with a mobile device used to control equipment. The mobile device would be required to render a lot, and I feel it won't be able to render the entire world if required; the PC, however, should handle it easily. I want the mobile device to receive the view from a camera in the PC FPS and use that as the viewport on the mobile device. The mobile device will also send input back to the PC. I feel making the PC do all the work is the best approach, unless someone has a better idea. Any advice would be greatly welcomed.
Answer by JamieFristrom · Nov 12, 2012 at 09:21 PM
Are you saying you want the PC to render the view and then upload its screen buffer to the mobile device, so the mobile device doesn't have to do any rendering? That sounds like it would be much less efficient, and laggier, than simply having the mobile device do its own rendering. I would start making your game and find out if the mobile device truly can't handle the art assets you're using; and in that event, use simpler, easier-to-render assets on the mobile side.
Yes, the PC render would look a lot better, and an important part of this project is to make something that is a realistic product. I understand what you're saying and I will try the other technique first, but I was looking for alternative methods that might be better, or at least so I can explain the problem with each technique. Is it possible to do what I'm asking? Thank you for your answer :)
I hesitate to say anything's impossible with computers - it's the same idea that OnLive used, right, not rendering locally? - but OnLive just declared bankruptcy, maybe because FPS players usually demand incredibly low latency. They want to shoot something and see it die. They don't want to shoot something, send the message to a PC over the internet, wait for the PC to render the frame, wait for the, let's say, 1280x720x3 byte screen to come back, and then see the dead thing.
The mobile will be over LAN, so there's no real latency, and at a much smaller resolution than the one you've described, so that doesn't factor much into it. Only equipment will be used by the mobile device; however, if I want a sentry gun, I'll need to sync the entire level over the network. Compared to sending the actual screen, could that be less efficient?
Not sure what you're asking anymore. I think you just have to try and make your game and then see what problems come up. It's good to think ahead up to a point. Being over a LAN will definitely help. Being on a smaller screen (the iPhone 4 is 960x640 - not that much smaller than 1280x720...) will help a little. You shouldn't have to sync the whole level over the network, because you won't need anything on the mobile - if you're doing the rendering on the PC, you might as well do everything on the PC. The mobile becomes a dumb client that just sends input to the PC; the PC does the work and sends the bitmap back to the mobile. Still not how I would do it - another reason to do it the more orthodox way is that tools have already been made for it.
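To make the numbers in this exchange concrete, here's some rough back-of-the-envelope arithmetic (not Unity code, just illustration) for streaming uncompressed frames at the two resolutions mentioned above, assuming 3 bytes per pixel and a 30 fps target:

```python
# Rough arithmetic for streaming uncompressed RGB frames over a network.
def raw_frame_bytes(width, height, bytes_per_pixel=3):
    """Size of one uncompressed RGB frame in bytes."""
    return width * height * bytes_per_pixel

def stream_mbps(width, height, fps=30, bytes_per_pixel=3):
    """Sustained throughput in megabits per second needed for raw frames."""
    return raw_frame_bytes(width, height, bytes_per_pixel) * fps * 8 / 1_000_000

print(raw_frame_bytes(1280, 720))   # 2764800 bytes (~2.6 MB) per frame
print(raw_frame_bytes(960, 640))    # 1843200 bytes (~1.8 MB) per frame

print(stream_mbps(1280, 720))       # ~663.6 Mbps at 30 fps
print(stream_mbps(960, 640))        # ~442.4 Mbps at 30 fps
```

Even at the phone's smaller resolution, raw frames at 30 fps would need hundreds of megabits per second of sustained throughput, which is why real streaming systems rely on heavy video compression rather than sending raw screen buffers.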
Answer by lil_billy · Nov 20, 2012 at 02:01 AM
I don't need to read everything to answer this.
IT'S NOT EFFICIENT TO SYNC THE ACTUAL SCREEN DATA. Essentially, this is sending a texture file over the network every single frame. You might think, "hey, streaming video can do it," but streaming services don't have to both generate a unique frame and then send it - the video is encoded ahead of time.
Now compare sending an image file to sending the vector coordinates of your objects and camera: the coordinates are orders of magnitude smaller.
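The size gap in the comparison above can be made concrete with rough arithmetic. The numbers below are assumptions for illustration: a minimal transform snapshot of 3 position floats plus a 4-float rotation quaternion per object, at 4 bytes per float, against one raw 1280x720 RGB frame:

```python
# Illustrative comparison: syncing object state vs. streaming pixels.
# Assumed minimal snapshot: position (3 floats) + rotation quaternion
# (4 floats), 4 bytes per float -- 28 bytes per networked object.
TRANSFORM_BYTES = (3 + 4) * 4

def state_sync_bytes(num_objects):
    """Bytes per tick to sync transforms for num_objects entities."""
    return num_objects * TRANSFORM_BYTES

frame_bytes = 1280 * 720 * 3          # one raw RGB frame: 2,764,800 bytes
state_bytes = state_sync_bytes(100)   # 100 objects: 2,800 bytes

print(frame_bytes // state_bytes)     # the frame is roughly 987x larger
```

So even with a hundred networked objects updating every tick, sending transforms costs a small fraction of what one uncompressed frame does; "a millionth the size" is hyperbole, but the gap is real.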
The only way to get the degree of realism you are seeking is SUPER low-poly models and a masterful use of textures (I'd say normal maps and the works, but you'd have to use the works sparingly too).
Then it comes down to the laundry list of optimization tricks for mobile, which you'd have to look up, because people literally write books on the matter.
THERE IS NO workaround for this, short of making your own engine and then somehow doing so better than the UNITY DEVS, whose sole career is based on making the engine.
Edit: I notice my advice sounds a bit centric on bandwidth, which it certainly is, but it still applies to the phone's processor. By the time the server renders a frame, the action has already taken place; then there is the time involved in sending that data, and the time it takes for the phone to re-render the image.
By the time all that happens, the character is massively behind temporally (it would look as laggy as a cheerleader in a math class), and because of the way you did it, there is no way to compensate for the latency with code on the client side.
Plus, you would max out data plans mercilessly, in addition to the high premiums you'd be paying just to meet the bandwidth requirements. I hope this is adequate to deter you from this idea completely; if it isn't, then by all means prove me wrong, and good luck to ya.
I understand that a significant amount of processing is required for what I'm asking. Internet latency is not a big issue, nor is the data plan, as the entire concept will use a LAN connection over WiFi. I am aware that this isn't the best method and wanted to research the area; if I could do it this way, I would test the performance of each method. The game will be aiming to mimic a full free-roaming FPS world and will be extremely big; the idea in using the PC for all calculations was purely to keep good quality and provide a better experience. The world will have a lot of AI, which will all be synced over the network, and I was assuming it might be more reliable to process everything on the PC side of things: the cost of sending the camera never changes, whereas the other method could be a lot slower or faster. I gather that this method is not possible. I'm not too worried about practicality, as that's the idea of research! Thanks for everyone's input so far.