Then it seems to be different here in Germany.
I got 15kB/s down and 5kB/s up (not at the same time of course), without any hacking, cheating, cracking...
... ... ...
if the states ever lose net neutrality, i'm moving to germany
Lemme try to think-out-loud/brainstorm on this approach... (beware that I can be completely off base with many of the things that follow)
i'm, in general, at risk of going off base too: a lot of my concepts, while heavily or completely based on tried and tested methodology, may be too hard to implement in warzone as the code stands, since i've only looked at a few parts of the code so far, and less refined approaches might be "good enough" (though i'm not much of one to give up).
So, I'm a warzone client, and I'm only interested in receiving data about units that are "probably within my radar-sight".
I'm playing against DevUrandom, who's another warzone client, so he needs to send me data on those of his units that I might be able to see.
ooh, forgot to mention (sorry) that most of my ideas lately (collision detection, netcode, etc) just won't work without a dedicated server (warzone really needs a dedicated server) -- even without any other improvements, a dedicated server in and of itself is an improvement. this means, at minimum, going through and creating some compile flags that compile it to read a different command-line argument list, allow for a remote admin client (we could have server -> irc notifications as well, among other things), and getting it to compile completely without any graphics or sound support whatsoever. this would greatly benefit from the proposed separation of src/ and lib/, and thus could go hand in hand with it.
since i have absolutely no experience in programming with sound or graphics api's (i do know a little bit about how they interact with the hardware, though), i'd be willing to give it a shot, but i'd need a lot more time to get familiar with the source.
That means that I have to send him 512*512/8 = 32 KB of data without compression - although that's within the 64 KB limit of a UDP packet, it's still a lot of data, and we don't really need that amount of detail.
yes, it's within the upper limit, but if we ever did send a packet this large, it'd probably get fragmented by a few routers along the way, which doesn't really affect anything aside from latency and an increased chance of packet loss.
The point in my idea was not that the client tells the server what it can see. The point was that the client does not do that...
my suggestions thus far have followed the same principle, and your concept can be made to fit it -- you've got a good approach below, and every good idea should be discussed as if it's to be implemented, to see where it can be refined and improved: showing someone ways to improve their own ideas tends to make them think more in-depth about this stuff than showing improvements to someone else's idea does.
Alright, so that approach is too heavy - and I haven't even talked about the response to the eye-sight data yet.
your math checks out, and i like that you know any bad idea (from a resource-usage standpoint) can become a good one if even simple improvements are added, as you show below.
The short of it is that we all send the difference between the last known state and the current state, and the recipients send a response to confirm that they have received our state update.
It's a bit harder to predict how much data will be transferred per second, but it's most likely less than the above approach, since each packet contains only the changes relative to the previous packet.
you've struck on the principle behind all modern game networking (and other kinds of networking -- rsync, for example, uses this heavily): don't teach someone what they already know. and, as you said, it's effectively impossible to predict the bandwidth unless you have a working example to reference. the catch, despite all the benefits, is that you have to think about it the way a compression programmer does: in compression, when the data is too varied and seemingly random, it doesn't compress well -- in delta-based netcode, if enough has changed, you might as well just send the full new state. luckily for us, that doesn't happen much in an rts, so we won't really have to think about it until we see some really weird can't-figure-out bugs crop up during netcode programming, if that ever even happens to us
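the "send only what changed" idea can be sketched with a dirty-field bitmask -- beware that the struct and field names below are purely illustrative, not warzone's actual unit data:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical unit state -- field names are illustrative only. */
typedef struct { uint16_t x, y, hp; } unit_state_t;

/* Write only the fields that differ from the last acknowledged state.
 * Byte 0 of the output is a bitmask saying which fields follow
 * (bit 0 = x, bit 1 = y, bit 2 = hp); returns bytes written. */
static size_t delta_encode(const unit_state_t *old, const unit_state_t *cur,
                           uint8_t *out)
{
    uint8_t *p = out + 1;   /* reserve byte 0 for the field mask */
    uint8_t mask = 0;
    if (old->x  != cur->x)  { mask |= 1; memcpy(p, &cur->x,  2); p += 2; }
    if (old->y  != cur->y)  { mask |= 2; memcpy(p, &cur->y,  2); p += 2; }
    if (old->hp != cur->hp) { mask |= 4; memcpy(p, &cur->hp, 2); p += 2; }
    out[0] = mask;
    return (size_t)(p - out);
}
```

so a unit that only moved along one axis costs 3 bytes on the wire instead of 6 -- the "don't teach someone what they already know" savings scale with how static the world is.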
UDP already defines the full length of the entire body of the packet, so we don't need to do that.
the problem with using udp for diffs is that you have to count on occasional (if not frequent, depending on the situation) packet loss, even if the server is connected directly to the client by 1 meter of ethernet. when this happens, you have to resync the client somehow, and all actions from the client, like "build this unit", and all responses should be either tcp, or udp emulating tcp -- in this case, udp emulation is good enough. my model for action events doesn't conflict with anyone else's netcode suggestions i've seen, and no alternative has been suggested as of yet: the client sends an action via udp, and the server, upon receiving and validating it, must send either a "request #x valid at tick #y" or a "request #x invalid" response -- if the client doesn't receive a response within a few ticks, it resends the request with the same request id until it gets one. this method has the speed of udp and the reliability of tcp, since responses act as logical ACKs, and the server doesn't need to know or care whether the response got back to the client (if it didn't, the client will complain).
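the resend-until-acknowledged logic above is simple enough to sketch in a few lines -- the record layout and tick threshold here are assumptions for illustration, not anything from the warzone source:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative pending-request record for "resend until acknowledged". */
typedef struct {
    uint16_t request_id;    /* same id reused on every resend */
    uint32_t sent_at_tick;
    bool     acked;
} pending_request_t;

enum { RESEND_AFTER_TICKS = 3 };   /* assumed "a few ticks" */

/* Called once per game tick: should this request be retransmitted? */
static bool should_resend(const pending_request_t *req, uint32_t now_tick)
{
    return !req->acked && (now_tick - req->sent_at_tick) >= RESEND_AFTER_TICKS;
}

/* Called when a "request #x valid/invalid" response arrives. */
static void on_response(pending_request_t *req, uint16_t responded_id)
{
    if (req->request_id == responded_id)
        req->acked = true;
}
```

the response itself is the logical ACK; a lost response just triggers one more (idempotent) resend, so the server never has to track whether its reply arrived.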
Code:
<player_ID : Uint8>
<sequence_ID : Uint16>
<delta_since_sequence_ID : Uint16>
<data_type_ID : Uint8>
<data_body_length_in_bytes : Uint16>
<data_body : Uint8>
And we could define the eye-sight data sector like this:
On top of that, as Hook suggests, we could apply per-packet Huffman or deflate compression.
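In a real build, deflate via zlib would be the obvious pick for per-packet compression. Just to show why bitmap-like packet bodies (such as the eye-sight data) shrink so well, here is a toy run-length encoder -- a deliberately simplified stand-in, not a proposal to use RLE instead of deflate:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy run-length encoder: emits (run_length, byte_value) pairs.
 * Long runs of identical bytes -- common in visibility bitmaps --
 * collapse to 2 bytes each; zlib's deflate does far better still. */
static size_t rle_encode(const uint8_t *src, size_t len, uint8_t *dst)
{
    size_t out = 0, i = 0;
    while (i < len) {
        uint8_t run = 1;
        while (i + run < len && src[i + run] == src[i] && run < 255)
            run++;
        dst[out++] = run;      /* run length */
        dst[out++] = src[i];   /* byte value */
        i += run;
    }
    return out;
}
```

A mostly-unexplored 32 KB visibility map is dominated by runs of zero bytes, so even this naive scheme would cut it down dramatically.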
Thinking about it, this approach might work alright.
i'm confused as to the use of delta_since_sequence_ID
well, numbers are hard to compress that way, and i'm guessing the reason you might suggest it here is that each of the above numbers would, 99999 times out of 100000, have more than one zero in them. but aside from the performance gain (no need to compress), smaller datatypes usually take up less space than their compressed big-datatype equivalents anyway. and for many kinds of packets, player data isn't even needed (such as when sending unit/structure data changes, since a player can only see that for their own units in wz), and when it is needed, there are a few things we can do to make that field leaner:
first, we could change player_ID from an 8-bit int to a 3-bit uint, since i haven't heard any popular requests for allowing more players into a warzone game, and a 3-bit unsigned int can hold the numbers 0-7, which is all we need. whether or not our target platforms can handle a 3-bit int, i don't know, but as i recall c has a bit-field construct that may or may not be useful here, and if nothing else, we can just merge the 3-bit and 5-bit "logical" types into one 8-bit type and use simple math to split them back apart on the other end.
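c does have bit-fields for exactly this, though their in-memory layout is compiler-defined, so a wire format should pack manually with shifts. a minimal sketch (field names are made up for illustration):

```c
#include <stdint.h>

/* Bit-fields pack a 3-bit player ID and 5 leftover bits into one byte,
 * but the bit ordering is compiler-defined -- fine in memory, not on
 * the wire. Field names here are illustrative. */
typedef struct {
    uint8_t player_id : 3;   /* 0..7  */
    uint8_t flags     : 5;   /* 0..31 */
} packed_header_t;

/* Portable manual packing for the wire: low 3 bits = player ID. */
static uint8_t pack_header(uint8_t player_id, uint8_t flags)
{
    return (uint8_t)((flags << 3) | (player_id & 0x7u));
}

static uint8_t unpack_player_id(uint8_t b) { return b & 0x7u; }
static uint8_t unpack_flags(uint8_t b)     { return b >> 3; }
```

the shift-and-mask version is the "simple math" mentioned above, and it behaves identically on every platform.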
wherever possible, you can add protocols to change all those Uint16s that store packet sequence data (or other incremental data) into Uint8s. in either case you're going to have to send some kind of (hopefully extremely small) "sequence count reset" packet to the client every time sequence_ID hits its maximum (65535 -- and that'd happen many times over a typical warzone game), or, if you can work out a few problems, the reset can be implicitly synchronous between the server and client. if using packet-based resets, there comes a point where bandwidth/latency analysis determines the ideal field size: probably somewhere between 8 and 16 bits with packet-based resets (anything less and you'd be flooding the network), and probably around 8 bits with implicit resets, provided you have a really smart way to work out the issues that come up (the smaller this number, the harder it is to handle lost server responses and other data from before the last reset -- you want to arrange it so you only have to store one reset's worth of data, not several generations' worth).
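one well-known way to make the reset fully implicit is serial-number comparison (the trick standardized in rfc 1982, and used by tcp itself): wraparound "just works" as long as two live sequence ids are never more than half the number space apart. a minimal sketch with 8-bit ids:

```c
#include <stdint.h>
#include <stdbool.h>

/* Serial-number comparison: "a is newer than b" holds across
 * wraparound, provided live IDs stay within 128 of each other.
 * This removes the need for an explicit "sequence reset" packet. */
static bool seq_newer(uint8_t a, uint8_t b)
{
    uint8_t diff = (uint8_t)(a - b);   /* mod-256 distance */
    return diff != 0 && diff < 128;
}
```

the caveat matches the concern above: with only 8 bits, the window of "data since the last wrap" the server may need to retransmit must stay small, or ids drift more than half the space apart and the comparison breaks.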
in addition to the above, instead of having a player_ID flag at the top of each payload that needs to store player-centric info, we could send meta-packets as such:
Code:
<player_ID : Uint3 (or whatever)>
<sequence_ID : Uint16>
before data is sent, packet payloads are formed and cached, and their sequence_ID thresholds (the last sequence_ID of one player's data and the first sequence_ID of the next player's) are determined, so that wz_player_meta_packets can be given those threshold sequence_IDs and sent off first. alternatively, player-centric data is grouped by player, and a wz_player_meta_packet is sent before each group, followed by that player's actual data. i like this second approach better since it's simpler, and while it adds slightly more latency than the first before the client can process the data (no more than 10 ms at worst), it puts far less load on the server, which in a pure client/server setup is doing all the work while the clients do almost nothing at all (in a thought-out implementation of pure client/server warzone, a client would need a 250 mhz processor, 32 mb of ram, and a really kickass gfx card for that anti-aliasing to run smoothly, plus whatever the os needs).
with that, any player-centric information that arrives has its sequence_ID checked against the received wz_player_meta_packet sequence_IDs, and if the client has already received the two "surrounding" wz_player_meta_packets (the one indicating "this player's data" and the one indicating "the next player's data"), it knows implicitly which player a given piece of data belongs to -- unless, of course, the most recent player was the last player (player 7, counting from 0), in which case only the leading wz_player_meta_packet is needed. note: the wz_player_meta_packet for player 0 must be sent anyway, to indicate that "the server is sending player-centric data now".
the wz_player_meta_packet, if used, is critical information, and should be sent via tcp instead of udp. alternatively, it can be sent via udp, and if the client has not received one, it can send a specific request back to the server for that packet (either approach is just as reliable, and as usual, udp is faster).
if we were to say that, on average, there were 40 player-centric game objects that needed to be sent to each player each update (however long an update is) in a 4-player game, and on average 10 of those objects are for each player, then using karma's above packet layout, you'd have the following server upstream bandwidth usage each update -- counting only the data needed to identify the player of any given piece of player-centric data, and going with the worst case where each object has something that changed since the last update:
8 bit * 40 = 320 bits, or 40 bytes per update
now if we reduce that to 3-bit player specifiers, since that's all that's currently needed, we get:
3 bit * 40 = 120 bits, or 15 bytes per update, which is a 62.5% decrease from using 8-bits.
if we go with meta-packets instead, the difference in bandwidth use between 3 and 8 bits is almost negligible, so we'll show the below with 8 bits:
4 players * 8 bits = 32 bits, or 4 bytes per update.
and then if we once again knock it down to 3 bits to store the player_ID:
4 players * 3 bits = 12 bits, or 1.5 bytes per update, which is about a 96% decrease in bandwidth per update to send player-related data.
Tell me what you think.
heh. what i think is that we could use carrier pigeons to send warzone game updates between players... now, while this has pretty high latency between continents, strapping the pigeon to the outside of a saturn v rocket should speed things up considerably, and pigeons are good because they can hold a whole lot more than 64 kilobytes of data.
if any given idea uses more than zero cpu time, more than zero memory, and more than zero bandwidth, it still has room for improvement: the trick is finding out which implemented pieces are farthest from that mark and working to improve those first.
the most terrible, but heavily optimized idea is always better than a good idea that takes a brute-force approach. at the same time, that same terrible-but-heavily-optimized idea can't even compare to a good idea that is moderately optimized.
The server keeps track of all units (it must do that anyway), roughly which "sectors" they are in, and which sector is seen by which client (that client has its own or allied units in that sector, radar ranging into the sector, or a satellite uplink).
When it now wants to send client X an update, it has some sort of list of what he can see, and that's what is sent. No need for the client to tell it anything.
my conceptualization is that the physical server used in a warzone game is the ingame satellite that all users have equal access to, and that the client is some commander sitting at a satellite uplink terminal with a field radio.
i'm working on two assumptions:
- first, it's bad to send the client anything they don't need to know -- using the above conceptualization, the only things they need to know are what they see on the screen right in front of them, and the start times of any build/production/research orders they made, among a few other minor things -- and anything beyond that is a waste of bandwidth.
- second, if we figure out some idea that's really low on bandwidth but a little too high in processor usage, then it's still a really good idea, since by the time we implement it, we'll have thought of a whole lot of ways to reduce processor usage down to very usable levels.
we can pretty much try to figure out the best, absolute-least-bandwidth, purest client/server case. if we find out that this uses way too much processing power, we should sit on that idea and think of ways to add the minimum amount of bandwidth while reducing the most processor usage until we get something that will work, and once typical server hardware gets good enough to handle the best-bandwidth/most-processor case, we implement that idea.