Namespace Akka.Cluster.Sharding
Classes
ClusterSharding
This extension provides sharding functionality of actors in a cluster. The typical use case is when you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine.
- Distribution: You need to distribute them across several nodes in the cluster.
- Location Transparency: You need to interact with them using their logical identifier, without having to care about their physical location in the cluster, which can change over time.
'''Entities''': These could, for example, be actors representing Aggregate Roots in Domain-Driven Design terminology. Here we call these actors "entities"; they typically have persistent (durable) state, but this feature is not limited to actors with persistent state.
'''Sharding''': In this context sharding means that actors with an identifier, or entities, can be automatically distributed across multiple nodes in the cluster.
'''ShardRegion''': Each entity actor runs only at one place, and messages can be sent to the entity without requiring the sender to know the location of the destination actor. This is achieved by sending the messages via a ShardRegion actor, provided by this extension. The ShardRegion knows the shard mappings and routes inbound messages to the entity identified by the entity id. Messages to the entities are always sent via the local ShardRegion. The ShardRegion actor is started on each node in the cluster, or group of nodes tagged with a specific role. The ShardRegion is created with two application specific functions to extract the entity identifier and the shard identifier from incoming messages.
Typical usage of this extension:
1. At system startup on each cluster node, register the supported entity types with the Start(String, Props, ClusterShardingSettings, ExtractEntityId, ExtractShardId, IShardAllocationStrategy, Object) method.
2. Retrieve the ShardRegion actor for a named entity type with ShardRegion(String).

Settings can be configured as described in the akka.cluster.sharding section of the reference.conf. A minimal sketch of this startup sequence is shown below.
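The following is a minimal sketch of both steps, assuming a hypothetical Counter entity and Increment message, and the Option-returning ExtractEntityId delegate shape of recent Akka.NET versions; the shard count of 100 is arbitrary:

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;
using Akka.Util;

// Hypothetical message type; it carries the target entity's id.
public sealed class Increment
{
    public Increment(string counterId) => CounterId = counterId;
    public string CounterId { get; }
}

// Hypothetical entity actor.
public sealed class Counter : ReceiveActor
{
    private int _count;
    public Counter() => Receive<Increment>(_ => _count++);
}

public static class ShardingStartup
{
    public static IActorRef StartCounterRegion(ActorSystem system)
    {
        // Step 1: register the entity type on every cluster node at startup.
        var region = ClusterSharding.Get(system).Start(
            "counter",
            Props.Create(() => new Counter()),
            ClusterShardingSettings.Create(system),
            // ExtractEntityId: the entity id plus the message to deliver.
            message => message is Increment inc
                ? new Option<(string, object)>((inc.CounterId, inc))
                : Option<(string, object)>.None,
            // ExtractShardId: a stable hash of the entity id, mod shard count.
            message => message is Increment inc
                ? Math.Abs(MurmurHash.StringHash(inc.CounterId) % 100).ToString()
                : null);

        // Step 2: elsewhere, look up the region for an already started type:
        // var region = ClusterSharding.Get(system).ShardRegion("counter");
        return region;
    }
}
```

Messages told to the returned region, e.g. region.Tell(new Increment("counter-42")), are then routed to the node hosting that entity.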
'''Shard and ShardCoordinator''': A shard is a group of entities that will be managed together. For the first message in a specific shard the ShardRegion requests the location of the shard from a central Akka.Cluster.Sharding.ShardCoordinator. The Akka.Cluster.Sharding.ShardCoordinator decides which ShardRegion owns the shard. The ShardRegion receives the decided home of the shard and if that is the ShardRegion instance itself it will create a local child actor representing the entity and direct all messages for that entity to it. If the shard home is another ShardRegion instance, messages will be forwarded to that ShardRegion instance instead. While resolving the location of a shard, incoming messages for that shard are buffered and later delivered when the shard location is known. Subsequent messages to the resolved shard can be delivered to the target destination immediately without involving the Akka.Cluster.Sharding.ShardCoordinator. To make sure at most one instance of a specific entity actor is running somewhere in the cluster it is important that all nodes have the same view of where the shards are located. Therefore the shard allocation decisions are taken by the central Akka.Cluster.Sharding.ShardCoordinator, a cluster singleton, i.e. one instance on the oldest member among all cluster nodes or a group of nodes tagged with a specific role. The oldest member can be determined by IsOlderThan(Member).
'''Shard Rebalancing''': To be able to use newly added members in the cluster, the coordinator facilitates rebalancing of shards, migrating entities from one node to another. In the rebalance process the coordinator first notifies all ShardRegion actors that a handoff for a shard has begun. ShardRegion actors will start buffering incoming messages for that shard, as they do when the shard location is unknown. During the rebalance process the coordinator will not answer any requests for the location of shards that are being rebalanced, i.e. local buffering will continue until the handoff is complete. The ShardRegion responsible for the rebalanced shard will stop all entities in that shard by sending them a PoisonPill. When all entities have been terminated, the ShardRegion owning the entities will acknowledge to the coordinator that the handoff has completed. Thereafter the coordinator will reply to requests for the location of the shard, allocate a new home for the shard, and the buffered messages in the ShardRegion actors are then delivered to the new location. This means that the state of the entities is not transferred or migrated. If the state of the entities is of importance it should be persistent (durable), e.g. with akka-persistence, so that it can be recovered at the new location.
'''Shard Allocation''': The logic deciding which shards to rebalance is defined in a pluggable shard allocation strategy. The default implementation LeastShardAllocationStrategy picks shards for handoff from the ShardRegion with the highest number of previously allocated shards. They will then be allocated to the ShardRegion with the lowest number of previously allocated shards, i.e. new members in the cluster. This strategy can be replaced by an application specific implementation.
'''Recovery''': The state of shard locations in the Akka.Cluster.Sharding.ShardCoordinator is stored with akka-distributed-data or akka-persistence to survive failures. When a crashed or unreachable coordinator node has been removed (via down) from the cluster, a new Akka.Cluster.Sharding.ShardCoordinator singleton actor will take over and the state is recovered. During such a failure period shards with a known location are still available, while messages for new (unknown) shards are buffered until the new Akka.Cluster.Sharding.ShardCoordinator becomes available.
'''Delivery Semantics''': As long as a sender uses the same ShardRegion actor to deliver messages to an entity actor, the order of the messages is preserved. As long as the buffer limit is not reached, messages are delivered on a best-effort basis, with at-most-once delivery semantics, in the same way as ordinary message sending. Reliable end-to-end messaging, with at-least-once semantics, can be added by using AtLeastOnceDelivery in akka-persistence.
Some additional latency is introduced for messages targeted to new or previously unused shards due to the round-trip to the coordinator. Rebalancing of shards may also add latency. This should be considered when designing the application specific shard resolution, e.g. to avoid too fine-grained shards.
The ShardRegion actor can also be started in proxy only mode, i.e. it will not host any entities itself, but knows how to delegate messages to the right location.
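A proxy can be started with StartProxy; a sketch, assuming a hypothetical "backend" role hosting the entities and reusing the Increment message and extractor logic from the earlier sketch:

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;
using Akka.Util;

public static class ProxyStartup
{
    public static IActorRef StartCounterProxy(ActorSystem system)
    {
        // Proxy-only mode: this node hosts no "counter" entities itself but
        // routes messages to nodes with the "backend" role that do. The
        // extractor delegates must match those used when starting the type.
        return ClusterSharding.Get(system).StartProxy(
            "counter",
            "backend",
            message => message is Increment inc
                ? new Option<(string, object)>((inc.CounterId, inc))
                : Option<(string, object)>.None,
            message => message is Increment inc
                ? Math.Abs(MurmurHash.StringHash(inc.CounterId) % 100).ToString()
                : null);
    }
}
```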
If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (SetReceiveTimeout(Nullable<TimeSpan>)). If a message is already enqueued to the entity when it stops itself, the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send Passivate to its parent ShardRegion. The specified wrapped message in Passivate will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the ShardRegion between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity.
ClusterShardingExtensionProvider
INTERNAL API
ClusterShardingSettings
Settings for the ClusterSharding extension, typically created from the akka.cluster.sharding configuration section.
ClusterShardingStats
Reply to GetClusterShardingStats, contains statistics about all the sharding regions in the cluster.
CurrentRegions
Reply to GetCurrentRegions.
CurrentShardRegionState
Reply to GetShardRegionState. If gathering the shard information times out, the set of shards will be empty.
EntityLocation
Response to a GetEntityLocation query.
EnumerableExtensions
GetClusterShardingStats
Send this message to the ShardRegion actor to request ClusterShardingStats, which contains statistics about the currently running sharded entities in the entire cluster. If the timeout is reached without answers from all shard regions, the reply will contain an empty map of regions.
Intended for testing purposes, to see when cluster sharding is "ready", or to monitor the state of the shard regions.
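A sketch of querying the stats with Ask; the timeouts are arbitrary, and ClusterShardingStats.Regions maps region addresses to their ShardRegionStats:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Cluster.Sharding;

public static class StatsQuery
{
    public static async Task PrintClusterStats(IActorRef region)
    {
        // The TimeSpan given to GetClusterShardingStats bounds how long the
        // coordinator waits for answers from all regions in the cluster.
        var stats = await region.Ask<ClusterShardingStats>(
            new GetClusterShardingStats(TimeSpan.FromSeconds(5)),
            TimeSpan.FromSeconds(10));

        foreach (var pair in stats.Regions)
            Console.WriteLine($"{pair.Key}: {pair.Value.Stats.Count} shards");
    }
}
```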
GetCurrentRegions
Send this message to the ShardRegion actor to request CurrentRegions, which contains the addresses of all registered regions. Intended for testing purposes, to see when cluster sharding is "ready", or to monitor the state of the shard regions.
GetEntityLocation
Send this message to a ShardRegion actor to determine the location and liveness of a specific entity actor in the region.
Returns an EntityLocation message in response.
GetShardRegionState
Send this message to a ShardRegion actor instance to request a CurrentShardRegionState which describes the current state of the region. The state contains information about what shards are running in this region and what entities are running on each of those shards.
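A sketch, using the GetShardRegionState singleton instance and the ShardState entries in the reply; the five-second timeout is arbitrary:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Cluster.Sharding;

public static class RegionStateQuery
{
    public static async Task PrintRegionState(IActorRef region)
    {
        var state = await region.Ask<CurrentShardRegionState>(
            GetShardRegionState.Instance, TimeSpan.FromSeconds(5));

        // Each ShardState lists a shard id and the entity ids running in it.
        foreach (var shard in state.Shards)
            Console.WriteLine($"shard {shard.ShardId}: {shard.EntityIds.Count} entities");
    }
}
```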
GetShardRegionStats
Send this message to the ShardRegion actor to request ShardRegionStats, which contains statistics about the currently running sharded entities in the entire region. Intended for testing purposes, to see when cluster sharding is "ready", or to monitor the state of the shard regions.
For the statistics for the entire cluster, see GetClusterShardingStats.
GracefulShutdown
Send this message to the ShardRegion actor to hand off all shards that are hosted by the ShardRegion; the ShardRegion actor will then be stopped. You can Watch(IActorRef) it to know when it is completed.
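A sketch of a watcher actor (hypothetical, for illustration) that triggers the handoff and observes the region's termination:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

public sealed class RegionStopper : ReceiveActor
{
    public RegionStopper(IActorRef region)
    {
        // Watch first, then ask the region to hand off all hosted shards.
        Context.Watch(region);
        region.Tell(GracefulShutdown.Instance);

        Receive<Terminated>(t =>
        {
            // All shards have been handed off and the region has stopped.
            Context.Stop(Self);
        });
    }
}
```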
HashCodeMessageExtractor
Convenience implementation of IMessageExtractor that constructs the ShardId based on the StringHash(String) of the EntityId. The number of unique shards is limited by the given MaxNumberOfShards.
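A concrete extractor might look like the sketch below, reusing the hypothetical Increment message from the earlier sketch; only EntityId must be overridden, since the base class derives the ShardId from it:

```csharp
using Akka.Cluster.Sharding;

public sealed class CounterMessageExtractor : HashCodeMessageExtractor
{
    // 100 here is the (arbitrary) maximum number of shards.
    public CounterMessageExtractor() : base(100) { }

    public override string EntityId(object message)
        => message is Increment inc ? inc.CounterId : null;
}
```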
LeastShardAllocationStrategy
Use LeastShardAllocationStrategy(Int32, Double) instead. The new rebalance algorithm was included in Akka.Net 1.4.11. It can reach optimal balance in less rebalance rounds (typically 1 or 2 rounds). The amount of shards to rebalance in each round can still be limited to make it progress slower.
This implementation of IShardAllocationStrategy allocates new shards to the ShardRegion with least number of previously allocated shards.
When a node is added to the cluster the shards on the existing nodes will be rebalanced to the new node.
It picks shards for rebalancing from the ShardRegion with the highest number of previously allocated shards. They will then be allocated to the ShardRegion with the lowest number of previously allocated shards, i.e. new members in the cluster. There is a configurable threshold of how large the difference must be to begin the rebalancing: the difference between the number of shards in the region with the most shards and the region with the fewest shards must be greater than the rebalanceThreshold for the rebalance to occur.
A rebalanceThreshold of 1 gives the best distribution and is therefore typically the best choice. A higher threshold means that more shards can be rebalanced at the same time instead of one-by-one. That has the advantage that the rebalance process can be quicker, but has the drawback that the number of shards (and therefore load) on different nodes may differ significantly. Given the recommendation of using 10x as many shards as nodes, a rebalanceThreshold of 10 can result in one node hosting ~2 times the number of shards of other nodes. Example: 1000 shards on 100 nodes means 10 shards per node; one node may have 19 shards and others 10 without a rebalance occurring.
The number of ongoing rebalancing processes can be limited by maxSimultaneousRebalance.
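Where the newer algorithm is wanted, the ShardAllocationStrategy.LeastShardAllocationStrategy(Int32, Double) factory referenced above can be passed when starting the entity type; a sketch, with arbitrary limits and PoisonPill assumed as the hand-off stop message:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

public static class StrategySetup
{
    public static IActorRef StartWithStrategy(
        ActorSystem system,
        Props entityProps,
        ExtractEntityId extractEntityId,
        ExtractShardId extractShardId)
    {
        // Move at most 20 shards, or at most 10% of all shards, per round.
        var strategy = ShardAllocationStrategy.LeastShardAllocationStrategy(20, 0.1);

        return ClusterSharding.Get(system).Start(
            "counter",
            entityProps,
            ClusterShardingSettings.Create(system),
            extractEntityId,
            extractShardId,
            strategy,
            PoisonPill.Instance); // hand-off stop message
    }
}
```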
Passivate
If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (SetReceiveTimeout(Nullable<TimeSpan>)). If a message is already enqueued to the entity when it stops itself, the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send this Passivate message to its parent ShardRegion. The specified wrapped StopMessage will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the ShardRegion between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity.
PoisonPill.Instance is a perfectly fine StopMessage.
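A sketch of an entity that passivates itself after an idle period, assuming PoisonPill as the stop message and an arbitrary two-minute timeout:

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;

public sealed class IdleAwareEntity : ReceiveActor
{
    public IdleAwareEntity()
    {
        // Ask for a ReceiveTimeout after 2 minutes of inactivity.
        Context.SetReceiveTimeout(TimeSpan.FromMinutes(2));

        Receive<ReceiveTimeout>(_ =>
        {
            // Ask the parent shard region to passivate this entity; incoming
            // messages are buffered until the stop message round-trips.
            Context.Parent.Tell(new Passivate(PoisonPill.Instance));
        });

        ReceiveAny(msg => { /* handle domain messages here */ });
    }
}
```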
ShardAllocationStrategy
ShardedDaemonProcess
This extension runs a pre-set number of actors in a cluster.
The typical use case is when you have a task that can be divided among a number of workers, each doing a sharded part of the work, for example consuming the read side events from Akka Persistence through tagged events, where each tag decides which consumer should consume the event.
Each named set needs to be started on all the nodes of the cluster at startup.
The processes are spread out across the cluster; when the cluster topology changes, the processes may be stopped and started anew on a new node to rebalance them.
Not for user extension.
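A sketch of initializing a daemon process set of eight workers, assuming a hypothetical TagProcessor actor:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// Hypothetical worker; each instance would consume one tag's event stream.
public sealed class TagProcessor : ReceiveActor
{
    public TagProcessor(int index)
    {
        // e.g. subscribe to events tagged $"tag-{index}" here
    }
}

public static class DaemonStartup
{
    public static void Init(ActorSystem system)
    {
        // Must run on every node at startup; sharding keeps 8 instances
        // alive somewhere in the cluster, rebalancing on topology changes.
        ShardedDaemonProcess.Get(system).Init(
            "tag-processors",
            8,
            index => Props.Create(() => new TagProcessor(index)));
    }
}
```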
ShardedDaemonProcessExtensionProvider
INTERNAL API
ShardedDaemonProcessSettings
ShardingEnvelope
Default envelope type that may be used with Cluster Sharding.
The alternative way of routing messages through sharding is to not use envelopes, and have the message types themselves carry identifiers.
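A sketch of the envelope style, with a hypothetical Ping payload; with an IMessageExtractor-based setup the envelope is unwrapped before the payload reaches the entity:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

public static class EnvelopeSend
{
    // Hypothetical payload; it does not need to carry the entity id itself.
    public sealed class Ping { }

    public static void SendViaEnvelope(IActorRef region, string entityId)
    {
        // The envelope carries the entity id for the sharding infrastructure.
        region.Tell(new ShardingEnvelope(entityId, new Ping()));
    }
}
```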
ShardRegion
INTERNAL API
This actor creates child Shard actors on demand for the shards it is told to be responsible for. The shard actors in turn create entity actors on demand. It delegates messages targeted to other shards to the responsible ShardRegion actor on other nodes.
ShardRegion.StartEntity
When remembering entities and a shard is started, each entity id that needs to be running will trigger this message being sent through sharding. For this to work, the message must be handled by the shard id extractor (messageExtractor).
ShardRegionStats
Entity allocation statistics for a specific shard region.
ShardState
Describes the state of a single shard: its shard id and the ids of the entities running in it (see CurrentShardRegionState).
TuningParameters
Tuning parameters for cluster sharding, configured in the akka.cluster.sharding section (see ClusterShardingSettings).
Interfaces
IActorSystemDependentAllocationStrategy
Shard allocation strategy where start is called by the shard coordinator before any calls to rebalance or allocate shard. This is much like IStartableAllocationStrategy but will get access to the actor system when started, for example to interact with extensions.
IClusterShardingSerializable
Marker trait for remote messages and persistent events/snapshots with special serializer.
IMessageExtractor
Interface of functions to extract entity id, shard id, and the message to send to the entity from an incoming message.
IShardAllocationStrategy
Interface of the pluggable shard allocation and rebalancing logic used by the Akka.Cluster.Sharding.PersistentShardCoordinator.
IShardRegionCommand
Marker interface for commands that can be sent to a ShardRegion.
IShardRegionQuery
Marker interface for read-only queries that can be sent to a ShardRegion.
IStartableAllocationStrategy
Shard allocation strategy where start is called by the shard coordinator before any calls to rebalance or allocate shard. This can be used if there is any expensive initialization to be done that you do not want to do in the constructor, as the constructor will run on every node rather than just the node that hosts the ShardCoordinator.
Enums
RememberEntitiesStore
StateStoreMode
Delegates
ExtractEntityId
Interface of the partial function used by the ShardRegion to extract the entity id and the message to send to the entity from an incoming message. The implementation is application specific. If the partial function does not match, the message will be unhandled, i.e. posted as Unhandled messages on the event stream. Note that the extracted message does not have to be the same as the incoming message, to support wrapping in a message envelope that is unwrapped before sending to the entity actor.
ExtractShardId
Interface of the function used by the ShardRegion to extract the shard id from an incoming message. Only messages that passed the ExtractEntityId will be used as input to this function.
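A sketch of both delegates for a hypothetical envelope-style message, assuming the Option-returning delegate shape of recent Akka.NET versions; returning Option<(string, object)>.None (or null for the shard id) marks a message as not handled:

```csharp
using System;
using Akka.Cluster.Sharding;
using Akka.Util;

public static class Extractors
{
    // Hypothetical envelope carrying the entity id alongside the payload.
    public sealed class Envelope
    {
        public Envelope(string entityId, object payload)
        {
            EntityId = entityId;
            Payload = payload;
        }
        public string EntityId { get; }
        public object Payload { get; }
    }

    public static readonly ExtractEntityId ExtractEntity = message =>
        message is Envelope env
            ? new Option<(string, object)>((env.EntityId, env.Payload)) // unwrap the envelope
            : Option<(string, object)>.None; // non-matching messages go unhandled

    public static readonly ExtractShardId ExtractShard = message =>
        message is Envelope env
            ? Math.Abs(MurmurHash.StringHash(env.EntityId) % 50).ToString()
            : null;
}
```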