Given the choice to use any combination of libraries or engines I wanted, I naturally opted for just using SFML for everything, since it provides a module for networking on top of all the other things it does. SFML is great - if you're a budding game programmer who knows a bit of C++ and you want to test your skills or just make something quick without worrying about anything terribly low-level, it's an excellent option, provided you're on Windows or Linux. The networking module has its shortcomings, but it simplifies a lot of the busywork involved in creating and managing sockets and connections, which was all I needed - I was trying to learn network programming, not master its intricacies. With SFML's networking classes I could have the mental benefits of cleaner, less dense code without having to write my own wrapper for WinSock, the socket library we used in the labs (and in so doing tie myself to Windows).
There's a bunch of pretty big questions you have to ask when you sit down to design a networked game. How do you arrange the different machines in a network? What route does information take to get around? How do game worlds running on different machines remain in sync with each other?
Different network architectures exist, most of which fit into a few established models. In the Client-Server model the network consists of a bunch of client machines connected to a single server machine. All communication goes via the server, and the server decides what happens when clients' worlds get out of sync. The Client-Server model is generally preferred for games, but sometimes they have to settle for a Peer-to-Peer architecture, in which every network member is connected to every other network member. There's no central hub and no definitive version of the game world, so the different machines have to sort out inconsistencies between their versions among themselves.
I chose to use a Client-Server Hybrid model in which one of the members of the network acts as both a client and a server. This is a pretty common pattern in game networking. The special machine is called the 'host' of the game. For my lightweight game I couldn't imagine a case where users would want to set up a powerful dedicated server - there just isn't that much data to crunch.
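In rough strokes the setup looks like the sketch below - the host opens an sf::TcpListener and accepts connections while also playing, and everyone else just connects with an sf::TcpSocket. The names (isHost, Port, hostAddress) are illustrative, not lifted from my actual code.

```cpp
#include <SFML/Network.hpp>

const unsigned short Port = 54000; // an arbitrary example port

void startNetworking(bool isHost, const sf::IpAddress& hostAddress)
{
    if (isHost)
    {
        // The host listens for incoming clients alongside running its own game.
        sf::TcpListener listener;
        listener.listen(Port);

        sf::TcpSocket client;
        listener.accept(client); // one accept shown; a real host keeps accepting
    }
    else
    {
        // Everyone else joins an existing host.
        sf::TcpSocket socket;
        socket.connect(hostAddress, Port);
    }
}
```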
All instances of the game - client or host - run their own basic simulation of the game in motion, aiming for 30 steps per second. On the clients' versions of the simulation, game entities' positions are updated each step by their velocity, and input is handled for the local player's ship, but not much else. Collision detection is done and resolved only on the host, meaning the host has the final say on whether an asteroid hit a player, so no awkward ambiguity arises.
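The loop itself is the standard fixed-timestep pattern - something like this sketch, where gameRunning(), updateSimulation() and render() stand in for the real thing:

```cpp
#include <SFML/System.hpp>

const sf::Time TimePerStep = sf::seconds(1.f / 30.f);

void runGameLoop()
{
    sf::Clock clock;
    sf::Time accumulator = sf::Time::Zero;

    while (gameRunning())
    {
        accumulator += clock.restart();

        // Consume whole steps; leftover time waits for the next frame.
        while (accumulator >= TimePerStep)
        {
            accumulator -= TimePerStep;
            updateSimulation(TimePerStep); // move entities by velocity, handle input
        }

        render();
    }
}
```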
Information for which ordering is vital is sent over TCP using non-blocking sockets. For example, when an asteroid gets destroyed it's pretty much impossible for the separate game worlds to keep running correctly and in sync unless they are all immediately informed of the event. It's even more crucial that they find out about the event before another asteroid gets blown up - real headaches will occur if they don't, thanks to the way I chose to manage game entities.
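Here's a rough sketch of the kind of thing I mean: the host broadcasting a destruction event to every client over TCP. The message-type enum and entity-ID scheme are made up for illustration.

```cpp
#include <SFML/Network.hpp>
#include <vector>

enum MessageType { AsteroidDestroyed = 1 /* ... */ };

void broadcastAsteroidDestroyed(std::vector<sf::TcpSocket*>& clients, sf::Uint32 asteroidId)
{
    sf::Packet packet;
    packet << sf::Uint32(AsteroidDestroyed) << asteroidId;

    // TCP guarantees these events arrive in order on each connection.
    // (A robust version would also handle partial sends on non-blocking sockets.)
    for (sf::TcpSocket* client : clients)
        client->send(packet);
}
```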
To find out if an event has happened, sockets are polled once per update loop (so as many times as possible per second) and if something has arrived on the socket it is immediately acted upon. I could have used a socket selector to do this without any busywaiting, but I had difficulty getting SFML's sf::SocketSelector class to work, and even when it did kind of work it was actually less time-consuming to just poll each and every socket.
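In code the per-update poll looks something like this, where handleMessage() stands in for whatever dispatch logic consumes the packet:

```cpp
#include <SFML/Network.hpp>

void pollSocket(sf::TcpSocket& socket)
{
    socket.setBlocking(false); // in practice done once, at connection time

    // With a non-blocking socket, receive() returns sf::Socket::NotReady
    // immediately when nothing has arrived, so this never stalls the loop.
    sf::Packet packet;
    while (socket.receive(packet) == sf::Socket::Done)
    {
        handleMessage(packet); // hypothetical dispatcher
        packet.clear();
    }
}
```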
When an object's velocity changes, the new velocity is sent over TCP. Each client controls the velocity of only a single object - their player's ship - while the host controls the velocity of everything else. Asteroids never change velocity after creation and player ships don't feel the effects of drag, so everything follows a linear path apart from player ships, which can accelerate unpredictably. Velocity-change events don't represent a huge amount of network traffic, so they shouldn't gum up the works, but it is important that the server finds out about a player's decision to accelerate as soon as possible so it can pass that information on to the other players.
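A quick aside on serialisation: sf::Packet doesn't know how to handle sf::Vector2f out of the box, so the natural approach (the one SFML's own documentation suggests, though I won't swear my code was this tidy) is to overload the packet stream operators, which keeps velocity and position messages terse:

```cpp
#include <SFML/Network.hpp>
#include <SFML/System/Vector2.hpp>

sf::Packet& operator<<(sf::Packet& packet, const sf::Vector2f& v)
{
    return packet << v.x << v.y;
}

sf::Packet& operator>>(sf::Packet& packet, sf::Vector2f& v)
{
    return packet >> v.x >> v.y;
}
```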
Other less critical data (such as the positions of objects) is sent over UDP. The periodic position updates help a great deal in keeping things in sync, but it doesn't matter if they arrive out of order, late or at all, because the network messages all have timestamps.
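As a sketch, draining the UDP socket and dropping stale updates might look like this, reusing the operator>> overload from above. The packet layout, the lastUpdateTime bookkeeping and the applyPosition() helper are all assumptions for the example:

```cpp
#include <SFML/Network.hpp>
#include <map>

std::map<sf::Uint32, sf::Int64> lastUpdateTime; // last applied timestamp per entity

void receivePositionUpdates(sf::UdpSocket& socket) // assumed non-blocking
{
    sf::Packet packet;
    sf::IpAddress sender;
    unsigned short senderPort;

    while (socket.receive(packet, sender, senderPort) == sf::Socket::Done)
    {
        sf::Uint32 entityId;
        sf::Int64 sentAt; // sender's clock, e.g. in microseconds
        sf::Vector2f position;
        packet >> entityId >> sentAt >> position;

        // Apply the update only if it's newer than the last one we used.
        if (sentAt > lastUpdateTime[entityId])
        {
            lastUpdateTime[entityId] = sentAt;
            applyPosition(entityId, position); // hypothetical helper
        }
        packet.clear();
    }
}
```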
If a position update for an object arrives from the server and it's wildly different from the client's version of the object's position, some correction is needed. There are two options: we can simply snap to the new position, or we can linearly interpolate towards it based on the time difference between when the message was sent and when it was received. The first option causes perceptible jerkiness which makes the game difficult to play and painful to look at, but the second smooths things out pretty nicely, bringing objects back into line.
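The second option boils down to something like this - nudging the local position toward the server's each step, with a blend factor derived from that send/receive time difference (the factor shown is a placeholder, not a tuned value):

```cpp
#include <SFML/System/Vector2.hpp>

void correctPosition(sf::Vector2f& local, const sf::Vector2f& server, float blend)
{
    // blend in (0, 1]; 1 would snap outright, ~0.1 per step eases in smoothly
    local += (server - local) * blend;
}
```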
For testing how the application responded to latency, packet loss and other fun things I used clumsy. It made me sad to see how badly the game responded to adverse network conditions, but it helped me get an understanding of where I needed to strengthen the code to handle such conditions.
I learnt quite a lot from building my network game in this way. My big takeaways from the project were:
- Higher simulation steps-per-second meant high latency caused bigger problems. I was originally running the game at 60 timesteps per second, and the drop to 30 made the game handle higher levels of network lag much more gracefully. I didn't get around to making up for the timestep deficiency, but I could do it by rendering objects ahead of where they actually are between steps based on their velocity, which would produce a smoother-looking game (there's a sketch of this after the list). I could also figure out a way to vary the timestep dynamically based on network conditions, but that sounds tricky to get right.
- High-magnitude accelerations are unkind. If an object's velocity is changing in a big way, it becomes easier and easier for versions of the game world to fall out of sync when network conditions aren't ideal. It's therefore in the programmer's interest to clamp game objects' accelerations within some limits.
- The game world wraps around, so that if an object moves off the right edge of the screen it reappears on the left. This plus linear interpolation of positions equals weird bugs where objects quickly fly across the screen, which sucks.
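For what it's worth, the render-ahead idea from the first takeaway would look something like this: draw each entity extrapolated along its velocity by the time elapsed since the last simulation step. The Entity struct here is illustrative.

```cpp
#include <SFML/System.hpp>

struct Entity
{
    sf::Vector2f position;
    sf::Vector2f velocity; // units per second
};

sf::Vector2f renderPosition(const Entity& e, sf::Time sinceLastStep)
{
    return e.position + e.velocity * sinceLastStep.asSeconds();
}
```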
I left the project with some problems still needing to be fixed:
- An annoying bug: when an asteroid gets destroyed, the newly created asteroids on the client can end up slightly behind their host counterparts. This could be fixed by having the host send periodic position updates for asteroids, not just players, although that would of course substantially increase network traffic.
- There are also issues with connecting more than one client to the host - as things stand, it's impossible for multiple clients to connect at once. I need to fix that.
- Fix a bunch of bugs and issues and smooth out the code.
- Internet multiplayer. How hard can it be?
- Implement the gameplay, adding art in the process.
- Get local multiplayer working.
In the future I'm hoping to find time to write a blog post about the system I used for managing game entities, which was pretty nifty, but this summer seems to be keeping me busy.