
                    JOURNAL FOR TUESDAY 5TH JANUARY, 2016
______________________________________________________________________________

SUBJECT: Networking pushed to public dev branch
   DATE: Tue  5 Jan 23:09:23 GMT 2016

First post of 2016, so let's make it a good one. I've just pushed the
networking changes out to the public dev branch. There are no accounts or
passwords, so don't put a server on the internet yet!

I did hope to get the networking code out for the holidays so that people
could have a play with it. At the last minute I discovered an issue with the
messenger that crippled performance when there were a large number of
players. Since then I've been working on a solution and can now easily handle
up to 15,000 players at once with the small test world I have. The test world
only has 10 locations so that's an average of 1,500 players per location with
very high lock contention. At the moment 15,000 players peak at about 65MB of
RAM. Most of the testing so far has been on 64-bit Intel hardware.

I have tested on a Raspberry Pi - Model B Rev 2, CPU clocked at 900MHz - which
can handle 2,048 players at once using about 3MB of RAM. I'm not sure why it
struggles to handle more players - it's not CPU bound - so that's something I
need to look into.

I did suggest to some people that maybe WolfMUD shouldn't focus on being able
to handle thousands of players at once. Are there even 15,000 MUD players
still out there? This didn't go down too well. So, as a result, "crowd
control" is going to be a standard feature.

This means, for example, that messages broadcast to a location will not be
seen when the location is crowded. Messages about players entering or leaving
will not be sent - if it's crowded, the player entering or leaving goes
unnoticed. Location descriptions will just say "You can see a crowd here".

Locations will automatically determine whether they are crowded and suppress
broadcasts accordingly.
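
As a rough sketch of the idea - the Inventory shape, the Crowded method and
the crowdSize variable are all names assumed here for illustration, not
WolfMUD's actual API - a location could compare its player count against a
configurable limit:


  import "bytes"

  // Inventory is a stand-in for WolfMUD's has.Inventory, holding just
  // the names of the players at a location.
  type Inventory struct {
    players []string
  }

  // crowdSize is the number of players that constitute a crowd. A plain
  // variable here, but intended to be read from the server configuration.
  var crowdSize = 10

  // Crowded reports whether the Inventory holds enough players to be
  // considered a crowd.
  func (i *Inventory) Crowded() bool {
    return len(i.players) >= crowdSize
  }

  // describePlayers shows how a location description might use Crowded,
  // replacing the list of players with a single line when crowded.
  func describePlayers(where *Inventory, buf *bytes.Buffer) {
    if where.Crowded() {
      buf.WriteString("You can see a crowd here.\n")
      return
    }
    for _, p := range where.players {
      buf.WriteString("You can see " + p + " here.\n")
    }
  }
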

The number of players that constitute a crowd will be configurable. If you
don't like the crowd feature you can make the number very large to effectively
turn it off. Bear in mind that doing so could cause some performance issues
depending on your hardware, network bandwidth and world size - bigger worlds
will support more players with less lock contention.

At the moment the state type has fields that are accessed directly for
messages (comments removed for clarity):


  type state struct {
    actor       has.Thing
    where       has.Inventory
    participant has.Thing
    input       []string
    cmd         string
    words       []string
    ok          bool

    locks []has.Inventory

    msg struct {
      actor       *buffer
      participant *buffer
      observers   map[has.Inventory]*buffer
    }
  }


*buffer is a pointer to a modified bytes.Buffer. So we can send a message to
the actor or participant using:


  state.msg.actor.WriteString("Hello World!")
  state.msg.participant.WriteString("Hello World!")
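

Exactly how bytes.Buffer has been modified isn't important here, but as a
minimal sketch - the embedding and the extra method below are assumptions for
illustration, not necessarily what WolfMUD does - *buffer could embed a
bytes.Buffer and layer extra behaviour on top of it:


  import "bytes"

  // buffer embeds a bytes.Buffer, so methods such as WriteString are
  // promoted and can be called on a *buffer directly.
  type buffer struct {
    bytes.Buffer
  }

  // WriteStrings is a hypothetical example of layered-on behaviour,
  // writing each of the given strings on its own line.
  func (b *buffer) WriteStrings(s ...string) {
    for _, t := range s {
      b.WriteString(t)
      b.WriteByte('\n')
    }
  }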


To broadcast a message to observers we also need to specify which Inventory
to use. The message is then sent to all players/mobiles at the location who
are not also the actor or participant:


  state.msg.observers[state.where].WriteString("Hello World!")


While this is nice and convenient, I'm starting to think there should be
state methods for sending the messages, at least for observers. This would
allow us to check whether a location is crowded and avoid building the
messages at all, rather than intercepting them as they are about to be sent.
Methods could then be added for the actor and participant, if only to keep
messaging consistent. This would also allow lazy initialisation of the
buffers, potentially reducing memory usage further.
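
For example - the method name and the Crowded check are assumptions, not a
settled API - an observer method on state might look something like this:


  // MsgObservers sketches a possible method for queuing a message for
  // the observers at the given location. The message is suppressed if
  // the location is crowded, and the buffer is lazily initialised the
  // first time it is actually needed.
  func (s *state) MsgObservers(where has.Inventory, text string) {
    if where.Crowded() {
      return // in a crowd, individual events go unnoticed
    }
    if s.msg.observers == nil {
      s.msg.observers = make(map[has.Inventory]*buffer)
    }
    b, ok := s.msg.observers[where]
    if !ok {
      b = &buffer{}
      s.msg.observers[where] = b
    }
    b.WriteString(text)
  }


Because a buffer is only allocated when a message is actually written to it,
crowded locations and locations with no observers cost nothing.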

So the plan is: get it working, get it out, look for optimisations.

I also want to merge the sending of messages to the actor into the general
messenger instead of passing the messages back up to comms.Client to handle.
Again, this will make message handling consistent.

--
Diddymus

