Syncing variable data sizes with Aspects/RMI's

#1
How should you sync data of variable size from Server->Client?

I have a state machine on both, and I need each small state class deleted then re-created to match what the server sent. However, each state has a few variables that need to be sync'd as well.

I can't use aspects obviously because these classes come/go as part of the same state machine component. Plus there's the whole issue with 'aspects need to send the same amount of data every time' that straight up doesn't work with stuff such as state-data. I can't send all the states 24/7 when I'm only using, say, 5-10 out of 20.

Should I use an RMI with 'pre-attach'? Would that even arrive with the same 'aspects' that were sent? E.g. if on frame 30 I send aspects + an RMI from the server, I can't have the RMI data arrive with frame 31's aspects, since that'd cause a desync.


This honestly looks like the first question anyone would ask after reading the 'limitations' on aspects. Any help is appreciated, though I'm unlikely to get it anywhere considering the current track record.

Re: Syncing variable data sizes with Aspects/RMI's

#2
Maybe you can still use aspects though?
1. Are you sure you cannot just send everything? Depending on the data size and the compression policies for your data, it's likely that unchanged fields are delta-compressed down to a much smaller number of bits.
On top of that, arithmetic compression (if it's enabled) may squeeze even more out of your data stream (or it may not - it's a mysterious thing).
2. Maybe you can split your data over several aspects, especially if you know that there are distinct sets of states which change roughly together. There are 32 aspects - if, as you say, you've got 20 states, maybe you can just assign an aspect per state? (There's a rough sketch after this list.)
3. You can try to use optional groups. Although switching an optional group flushes the memento streams, so delta compression cannot be applied to anything inside or after that group in that send. So it's better not to switch optional groups often.
4. You can have up to 8 profiles per aspect and switch them at runtime. They have to be prepared up front and can't change, but at least this allows an aspect to be used for different sets of data. Profile switching thrashes the delta-compression history as well, though. And I've never used this feature.
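
To illustrate point 2, here is a rough sketch of routing two groups of states to two different aspects - assuming the usual IGameObjectExtension::NetSerialize hook; the aspect constants (eEA_GameServerA/B, or whatever free aspects your engine version exposes) and the member names are just placeholders:

Code: Select all

// Two groups of states, each routed to its own aspect (all names are made up).
bool CStateSync::NetSerialize(TSerialize ser, EEntityAspects aspect, uint8 profile, int pflags)
{
   if (aspect == eEA_GameServerA)        // group 1: states that tend to change together
   {
      ser.Value("moveState", m_moveStateId);
      ser.Value("moveParam", m_moveParam);
   }
   else if (aspect == eEA_GameServerB)   // group 2
   {
      ser.Value("combatState", m_combatStateId);
      ser.Value("combatParam", m_combatParam);
   }
   return true;
}

// When something in group 1 changes on the server, mark only that aspect dirty:
// GetGameObject()->ChangedNetworkState(eEA_GameServerA);

That way the network layer only has to re-send the aspect that was actually marked as changed.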

Apart from aspects, maybe you can recompute your values on the remote side just based on the state index or on some sort of less frequent data. If you don't have to send the values at all - ultimate win.

If nothing above is possible, then yes, you have to fall back to RMIs. Regarding the ordering, attached RMIs are written together with the aspect data into the same packet. So either RMI + aspects are delivered together, or they are lost & resent together. Looking at the code, it seems the expectation is that you use unreliable RMIs together with the aspect attachments (which makes sense, as your aspect data itself is unreliable). Reliable RMIs, though, seem to be rescheduled to be sent again together with the aspect data, which probably allows a history of lost messages to be gathered together with the latest snapshot of aspect data.
I have no idea how the system would handle the case where too many attachments pile up and they don't all fit into a single packet. Would the receiving side wait until all the data comes through?
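
For reference, the declaration and send side of an attached RMI looks roughly like this (written from memory, so treat it as a sketch - ClSyncStateData, SStateDataParams and CStateSync are made-up names):

Code: Select all

// In the game object extension's class declaration.
// POSTATTACH = handled after the aspect data from the same packet is applied,
// PREATTACH = handled before it.
DECLARE_CLIENT_RMI_POSTATTACH(ClSyncStateData, SStateDataParams, eNRT_UnreliableOrdered);

// In the .cpp - 'params' is the deserialized SStateDataParams.
IMPLEMENT_RMI(CStateSync, ClSyncStateData)
{
   ApplyStateData(params);   // whatever your game does with it
   return true;              // returning false disconnects the channel
}

// Invoking from the server, targeting the owning client's channel:
GetGameObject()->InvokeRMI(ClSyncStateData(), params, eRMI_ToClientChannel, channelId);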

Keep in mind that if you opt to use RMIs, you'll have to provide some strategy for late join or re-join.

Re: Syncing variable data sizes with Aspects/RMI's

#3
1+2) Yar, the more actors there are the messier this would be. I'm presuming that there are going to be far more than 32 or so states/actor. Any form of optimization/compression simply lessens the performance overhead, doesn't remove it.

Plus, assigning 1 aspect/state seems messy and unstable >.>

3) Not sure how option groups work to be honest, but since it's best not to change them often, well, best not to use this either .-.
4) Yar I'm aware of this, unfortunately this is the same messy/unstable workaround...

It's impossible to recompute; a sample scenario would be physics divergence (1 frame off or so, bFlying would be 1 on the server and 0 on the client, leading to the MidAir state being triggered earlier).
Even if it were possible, the server is supposed to override the client side, and if the client is recalculating, that goes against the whole point .-.

That's nice to know, that they arrive in the same packet!
I can just send one RMI Server->Client with 'post-attach' and trigger a rollback in the same RMI resolution. However, there's still the problem of sending a variable amount of data in the RMI, since the RMI expects a static param set.

E.g. somehow create a serializer that the RMI can take as a param, write it to the packet + send, and then have the client parse it + create a serializer on its side - i.e. simulating the 'TSerialize ser' usage seen in NetSerialize and other areas.
This would let me pass 'ser' into each state and not worry about variable data length etc.


P.S.: Thanks for your first post! If only you'd been here earlier - maybe you can help some other peeps out, since you seem far more knowledgeable about networking! Fury22k probably has questions too.

Re: Syncing variable data sizes with Aspects/RMI's

#4
more than 32 or so states/actor. Any form of optimization/compression simply lessens the performance overhead, doesn't remove it. Plus, assigning 1 aspect/state seems messy and unstable

Sure, but I gave 1 aspect/state as an extreme example. It could be 2, 3, ... states per aspect. But, as you say, it is an optimization. Maybe at this point it's better to just get your system working and only then analyze its performance (CPU & memory, bandwidth consumption) to see how to optimize it.

For now, I'd rather concentrate on the differences between the guarantees provided by RMIs and aspects.

Aspects are eventually consistent, as in "eventually all clients will receive the latest snapshot of the data". But there is no guarantee that each and every snapshot is delivered. Some may be lost or never sent. This works pretty well for, say, physics - if you've got good interpolation/extrapolation on the client to reconstruct missing snapshots. But it becomes a bit trickier to synchronize logical state - you cannot assume anymore that you've got all the state transitions. Not every system can deal with that.

RMIs give you more control over delivery guarantees and over who receives them. And they're easier to use - almost like a remote function call. Reliable ordered will definitely replicate every state transition of your logic. But RMIs have their own downsides. Reliability comes at the cost of increased latency. Ordering may block delivery of new messages when a previous message was lost in flight due to packet loss. CryNetwork doesn't delta-compress them as it does with aspects. And CryNetwork doesn't store them to deliver again in case of a client re-join - game code has to do that itself.
Unfortunately, we don't have ideal instruments for every case. I see building a networked system as an acrobatic trick of balancing different approaches.

3) Not sure how option groups work

The serializer object has a BeginOptionalGroup() function. I think it's supposed to be used like this:

Code: Select all

if (isGroupEnabled = ser.BeginOptionalGroup("group", isGroupEnabled))
{
  // serialize values inside group
  ser.EndGroup();
}

When you write, isGroupEnabled=true means that you enter the if() and keep serializing your data, and isGroupEnabled=false means that you skip the if().
When you read, BeginOptionalGroup() returns the value of the isGroupEnabled flag that was written, and you enter or skip the if() accordingly.

Network-wise, BeginOptionalGroup() writes a single bit for isGroupEnabled and then proceeds to serialize the values (or skips them).
It's fine to use the feature with aspects. But like I said before, switching the flag removes the history of the values and they cannot be delta-compressed for this particular send (and probably not until the ACK is received).
RMIs don't use delta-compression at all (there is no history to base delta-compression on), so it's fine to switch the flag any time you want.

However, there still is the problem of sending a variable amount of data in the RMI. Since the RMI expects a static param set.

An RMI sends a single struct as a parameter, yes. But there is no requirement to have a static number of values inside that struct. The only requirement is that the reader must consume everything the writer wrote, i.e. the reader should go through an equivalent code path. CryNetwork is particularly picky about that - if it detects that the reader has not consumed everything, it immediately disconnects (if it doesn't crash before that).
Given that requirement, we can do, for example, this:

Code: Select all

struct RmiParams
{
   std::vector<int> intVector;
   void SerializeWith(TSerialize ser)
   {
      ser.Value("ints", intVector);
   }
};

Here we have a variable number of values inside intVector. Serialization for a vector sends a size and then loops over the values. The reader reads the size and then does the same number of loop iterations to read the values. Instead of ints, such a vector may contain structures, of course.
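
If you'd rather spell that out by hand (for example, because the elements are structs with their own serialization), it's the same size-then-loop pattern - a sketch, with made-up names:

Code: Select all

void SerializeWith(TSerialize ser)
{
   uint32 count = (uint32)m_states.size();
   ser.Value("count", count);
   if (ser.IsReading())
      m_states.resize(count);

   for (uint32 i = 0; i < count; ++i)
   {
      ser.BeginGroup("state");
      m_states[i].Serialize(ser);   // each element reads/writes its own fields
      ser.EndGroup();
   }
}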

Or we can do something like this:

Code: Select all

struct MovementParams
{
   enum EMode { eMode_A, eMode_B, eMode_C, eMode_Count };
   EMode mode;
   // ...fields for each mode go here...

   void SerializeWith(TSerialize ser)
   {
      ser.EnumValue("mode", mode, eMode_A, eMode_Count);
      switch (mode)
      {
      case eMode_A: /* serialize mode A fields */ break;
      case eMode_B: /* serialize mode B fields */ break;
      case eMode_C: /* serialize mode C fields */ break;
      }
   }
};

Here we've got an idea similar to OptionalGroup - but instead of a single bool, we send an enum value and then switch to select one of the serialization branches. Although, unlike OptionalGroups, this trick cannot be used in aspect serialization - the aspect serialization code only has special support for OptionalGroups. Now that I think about it, though, it should be possible to extend that code to support not only bool flags, but also enums or maybe even counters.

Re: Syncing variable data sizes with Aspects/RMI's

#5
'But there is no requirement to have a static number of values inside that struct.'

Sorry, didn't mean that.

What I meant was to avoid pre-defining the RMI params.

E.g. have a generalized RMI that just sends state vars in a compressed block, instead of 1 RMI/State.

The RMI would send a 'TSerialize ser' (somehow...) and read it like normal too, which would bypass the need to define params for each state.
Each state could then just do the typical SerializeWith(ser) and call ser.Value() etc. (states will write/read in the same order).
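
Something like this is what I'm picturing (completely untested, all names made up):

Code: Select all

struct SStateSyncParams
{
   // the server fills this before invoking the RMI; the client's RMI handler
   // then pushes the result back into the actual state machine
   std::vector<ISyncedState*> states;

   void SerializeWith(TSerialize ser)
   {
      uint32 count = (uint32)states.size();
      ser.Value("count", count);
      if (ser.IsReading())
         states.resize(count, nullptr);

      for (uint32 i = 0; i < count; ++i)
      {
         uint32 stateId = ser.IsWriting() ? states[i]->GetId() : 0;
         ser.Value("stateId", stateId);
         if (ser.IsReading())
            states[i] = CreateStateById(stateId);   // some factory on the client side
         states[i]->Serialize(ser);                 // each state does its own ser.Value() calls
      }
   }
};

That way there's only one RMI and one params struct, and each state still owns its own serialization.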

Re: Syncing variable data sizes with Aspects/RMI's

#6
Soooo what you want is basically SendPacket(), because you are unable to design a system that works within sensible constraints.

>I have a state machine on both, and I need each small state class deleted then re-created to match what the server sent.

This is a dumb setup. First of all, it's a performance waste because you are constantly allocating and freeing memory, and secondly all possible states should be known by server as well as client on state machine startup. You just enter, leave and transition between them.

Which is where optional groups come into play. Create all required states on state machine initialization and wait for the initial net serialization setup to run (the condition in optional groups ALWAYS returns true during the setup process, so the states need to exist at that stage); later on you can serialize the optional groups as required.
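
Roughly like this (just a sketch, all names made up):

Code: Select all

bool CActorStateSync::NetSerialize(TSerialize ser, EEntityAspects aspect, uint8 profile, int pflags)
{
   if (aspect == eEA_GameServerDynamic)      // or whichever aspect you use
   {
      // iterate the pre-created states in the same fixed order on server and client
      for (CActorState* pState : m_allStates)
      {
         bool active = pState->IsActive();
         if (ser.BeginOptionalGroup(pState->GetName(), active))
         {
            if (ser.IsReading() && !pState->IsActive())
               pState->Enter();              // server says this state is now active
            pState->Serialize(ser);          // the state's own variables
            ser.EndGroup();
         }
         else if (ser.IsReading() && pState->IsActive())
         {
            pState->Leave();                 // server says it's no longer active
         }
      }
   }
   return true;
}

An inactive state costs a single bit on the wire, so having all the states exist up front doesn't mean sending all of them.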

Re: Syncing variable data sizes with Aspects/RMI's

#7
'secondly all possible states should be known by server as well as client on state machine startup. You just enter, leave and transition between them.'

I don't think that makes sense. I have all the states explicitly defined on both sides, obv, just not all of them are active 24/7 (but more than one can be active at any time).

Plus, I don't think it's feasible to have every state created and ready 24/7... that just seems like a large waste of memory, considering that more states will be inactive than active at any given time.


As to the memory allocation/de-allocation, I admit that's a problem, but at the moment I'd prefer a performance cost for the benefit of a system that actually makes sense.
I can maybe do something similar to physics etc. and pre-allocate a suitable segment of memory to minimize this (with the exception of extreme 'many states active at once' scenarios).
Each state is containerized and less likely to blow something else up. Overall it just seems neater and easier to follow/debug/optimize than a mish-mash would be.
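
Something like this is the kind of pre-allocation I have in mind (just a sketch, made-up names):

Code: Select all

// Reuse state objects instead of new/delete on every transition.
class CStatePool
{
public:
   CStateBase* Acquire(uint32 stateId)
   {
      std::vector<CStateBase*>& freeList = m_free[stateId];
      if (!freeList.empty())
      {
         CStateBase* pState = freeList.back();
         freeList.pop_back();
         pState->Reset();                   // clear any leftover per-state variables
         return pState;
      }
      return CreateStateById(stateId);      // only allocates the first time around
   }

   void Release(CStateBase* pState)
   {
      m_free[pState->GetId()].push_back(pState);   // keep it for reuse
   }

private:
   std::map<uint32, std::vector<CStateBase*>> m_free;
};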

Alternative suggestions are of course appreciated. Insults to my 'lack of' intelligence and whatnot aren't - I know I'm making mistakes/doing stuff wrong, otherwise I wouldn't be asking.


Edit: Plus, as noted above, optional groups shouldn't be switched often, and since states will change often (due to input), using optional groups isn't a good idea.
