Akka.Cluster.Sharding Namespace
The Akka.Cluster.Sharding namespace contains classes that provide actor sharding functionality within the cluster.
Classes
Public class ClusterSharding

This extension provides sharding functionality of actors in a cluster. The typical use case is when you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine. You need to distribute them across several nodes in the cluster and you want to be able to interact with them using their logical identifier, but without having to care about their physical location in the cluster, which might also change over time. It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology. Here we call these actors "entities". These actors typically have persistent (durable) state, but this feature is not limited to actors with persistent state.

In this context sharding means that actors with an identifier, so called entities, can be automatically distributed across multiple nodes in the cluster. Each entity actor runs only at one place, and messages can be sent to the entity without requiring the sender to know the location of the destination actor. This is achieved by sending the messages via a ShardRegion actor provided by this extension, which knows how to route the message with the entity id to the final destination.

This extension is used by first registering the supported entity types with the Start method, typically at system startup on each node in the cluster. The ShardRegion actor for a named entity type can then be retrieved with ShardRegion(String). Messages to the entities are always sent via the local ShardRegion. Some settings can be configured as described in the `akka.contrib.cluster.sharding` section of the `reference.conf`.
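For example, registration and message sending might look like the following minimal sketch. The actor system name, `CounterActor`, `Increment`, and `CounterMessageExtractor` are hypothetical application-specific types (the extractor is sketched further below), not part of this namespace:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

var system = ActorSystem.Create("sharding-demo");

// Register the "Counter" entity type once per node, typically at startup.
IActorRef counterRegion = ClusterSharding.Get(system).Start(
    "Counter",                              // typeName
    Props.Create<CounterActor>(),           // entityProps
    ClusterShardingSettings.Create(system), // settings
    new CounterMessageExtractor());         // entity id / shard id extraction

// Elsewhere, the region for a registered type is looked up by name. The
// message itself carries the entity id; the extractor pulls it out so the
// region can route without the sender knowing the entity's location.
IActorRef region = ClusterSharding.Get(system).ShardRegion("Counter");
region.Tell(new Increment("counter-42"));
```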

The ShardRegion actor is started on each node in the cluster, or on the group of nodes tagged with a specific role. The ShardRegion is created with two application specific functions to extract the entity identifier and the shard identifier from incoming messages. A shard is a group of entities that will be managed together. For the first message in a specific shard the ShardRegion requests the location of the shard from a central coordinator, the PersistentShardCoordinator. The PersistentShardCoordinator decides which ShardRegion owns the shard. The ShardRegion receives the decided home of the shard and if that is the ShardRegion instance itself it will create a local child actor representing the entity and direct all messages for that entity to it. If the shard home is another ShardRegion instance messages will be forwarded to that ShardRegion instance instead. While resolving the location of a shard incoming messages for that shard are buffered and later delivered when the shard home is known. Subsequent messages to the resolved shard can be delivered to the target destination immediately without involving the PersistentShardCoordinator.
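The two extraction functions can be supplied via the IMessageExtractor interface, as in this sketch (the `Increment` message type and the modulus of 100 are hypothetical):

```csharp
using System;
using Akka.Cluster.Sharding;

// Hypothetical application message; it carries the id of the target entity.
public sealed class Increment
{
    public Increment(string counterId) => CounterId = counterId;
    public string CounterId { get; }
}

public sealed class CounterMessageExtractor : IMessageExtractor
{
    // The entity id identifies one entity actor instance.
    public string EntityId(object message) =>
        (message as Increment)?.CounterId;

    // The shard id groups entities that are managed and relocated together.
    // Hashing the entity id into a fixed number of shards is a common choice.
    public string ShardId(object message) =>
        message is Increment inc
            ? (Math.Abs(inc.CounterId.GetHashCode()) % 100).ToString()
            : null;

    // The message actually delivered to the entity; no envelope is used here,
    // so the incoming message is forwarded unchanged.
    public object EntityMessage(object message) => message;
}
```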

To make sure that at most one instance of a specific entity actor is running somewhere in the cluster it is important that all nodes have the same view of where the shards are located. Therefore the shard allocation decisions are taken by the central PersistentShardCoordinator, which is running as a cluster singleton, i.e. one instance on the oldest member among all cluster nodes or a group of nodes tagged with a specific role. The oldest member can be determined by IsOlderThan(Member).

The logic that decides where a shard is to be located is defined in a pluggable shard allocation strategy. The default implementation LeastShardAllocationStrategy allocates new shards to the ShardRegion with the least number of previously allocated shards. This strategy can be replaced by an application specific implementation.

To be able to use newly added members in the cluster the coordinator facilitates rebalancing of shards, i.e. migrating entities from one node to another. In the rebalance process the coordinator first notifies all ShardRegion actors that a handoff for a shard has started. That means they will start buffering incoming messages for that shard, in the same way as if the shard location is unknown. During the rebalance process the coordinator will not answer any requests for the location of shards that are being rebalanced, i.e. local buffering will continue until the handoff is completed. The ShardRegion responsible for the rebalanced shard will stop all entities in that shard by sending `PoisonPill` to them. When all entities have been terminated the ShardRegion owning the entities will acknowledge the handoff as completed to the coordinator. Thereafter the coordinator will reply to requests for the location of the shard, thereby allocating a new home for the shard, and buffered messages in the ShardRegion actors are then delivered to the new location. This means that the state of the entities is not transferred or migrated. If the state of the entities is of importance it should be persistent (durable), e.g. with `Akka.Persistence`, so that it can be recovered at the new location.

The logic that decides which shards to rebalance is defined in a pluggable shard allocation strategy. The default implementation LeastShardAllocationStrategy picks shards for handoff from the ShardRegion with the largest number of previously allocated shards. They will then be allocated to the ShardRegion with the least number of previously allocated shards, i.e. new members in the cluster. There is a configurable threshold of how large the difference must be to begin the rebalancing. This strategy can be replaced by an application specific implementation.
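A sketch of tuning this, assuming the Start overload that accepts an explicit allocation strategy and a hand-off stop message, and reusing the hypothetical types from the earlier sketches (the threshold values are illustrative only):

```csharp
// Rebalance when the difference between the busiest and the least busy region
// reaches 3 shards, and run at most 2 rebalance processes at a time.
var allocationStrategy = new LeastShardAllocationStrategy(3, 2);

IActorRef region = ClusterSharding.Get(system).Start(
    "Counter",
    Props.Create<CounterActor>(),
    ClusterShardingSettings.Create(system),
    new CounterMessageExtractor(),
    allocationStrategy,
    PoisonPill.Instance); // stop message sent to entities during hand-off
```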

The state of shard locations in the PersistentShardCoordinator is persistent (durable) with `Akka.Persistence` to survive failures. Since it is running in a cluster `Akka.Persistence` must be configured with a distributed journal. When a crashed or unreachable coordinator node has been removed (via down) from the cluster a new PersistentShardCoordinator singleton actor will take over and the state is recovered. During such a failure period shards with known location are still available, while messages for new (unknown) shards are buffered until the new PersistentShardCoordinator becomes available.

As long as a sender uses the same ShardRegion actor to deliver messages to an entity actor the order of the messages is preserved. As long as the buffer limit is not reached messages are delivered on a best effort basis, with at-most-once delivery semantics, in the same way as ordinary message sending. Reliable end-to-end messaging, with at-least-once semantics, can be added by using AtLeastOnceDeliveryActor in `Akka.Persistence`.

Some additional latency is introduced for messages targeted to new or previously unused shards due to the round-trip to the coordinator. Rebalancing of shards may also add latency. This should be considered when designing the application specific shard resolution, e.g. to avoid too fine-grained shards.

The ShardRegion actor can also be started in proxy only mode, i.e. it will not host any entities itself, but knows how to delegate messages to the right location. A ShardRegion starts in proxy only mode if the roles of the node do not include the node role specified in the `akka.contrib.cluster.sharding.role` config property or if the specified `EntityProps` is `null`.
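A proxy-only region might be started like this sketch, assuming "backend" is the role that hosts the entities and reusing the hypothetical types from above:

```csharp
// This node does not host "Counter" entities; it only forwards messages to
// the regions on nodes tagged with the "backend" role.
IActorRef proxy = ClusterSharding.Get(system).StartProxy(
    "Counter",
    "backend",
    new CounterMessageExtractor());

proxy.Tell(new Increment("counter-42"));
```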

If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (SetReceiveTimeout(Nullable&lt;TimeSpan&gt;)). If a message is already enqueued to the entity when it stops itself the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send Passivate to its parent ShardRegion. The specified wrapped message in Passivate will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the ShardRegion between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity.
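A sketch of an entity actor that passivates itself after two idle minutes; `CounterActor`, `Increment` and the timeout value are hypothetical, while Passivate and the receive-timeout mechanism are part of the library:

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;

public class CounterActor : ReceiveActor
{
    private int _count;

    public CounterActor()
    {
        // Ask the runtime for a ReceiveTimeout message after 2 idle minutes.
        Context.SetReceiveTimeout(TimeSpan.FromMinutes(2));

        Receive<Increment>(_ => _count++);

        // Passivate instead of stopping outright, so messages that arrive in
        // the meantime are buffered by the parent ShardRegion and redelivered
        // to the next incarnation of this entity.
        Receive<ReceiveTimeout>(_ =>
            Context.Parent.Tell(new Passivate(PoisonPill.Instance)));
    }
}
```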

Public class ClusterShardingExtensionProvider
TBD
Public class ClusterShardingSettings
TBD
Public class ClusterShardingStats
Reply to GetClusterShardingStats, contains statistics about all the sharding regions in the cluster.
Public class CurrentRegions
Reply to GetCurrentRegions.
Public class CurrentShardRegionState
Reply to GetShardRegionState. If gathering the shard information times out the set of shards will be empty.
Public class EnumerableExtensions
Public class GetClusterShardingStats
Send this message to the ShardRegion actor to request ClusterShardingStats, which contains statistics about the currently running sharded entities in the entire cluster. If the `timeout` is reached without answers from all shard regions the reply will contain an empty map of regions. Intended for testing purposes, to see when cluster sharding is "ready" or to monitor the state of the shard regions.
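A usage sketch with Ask, assuming the Regions and Stats members as found in recent Akka.NET releases; the ten-second timeout is illustrative. CurrentRegions and CurrentShardRegionState can be queried the same way with their respective request messages:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Cluster.Sharding;

async Task PrintStats(IActorRef region)
{
    var stats = await region.Ask<ClusterShardingStats>(
        new GetClusterShardingStats(TimeSpan.FromSeconds(10)));

    // Regions maps each region's Address to its ShardRegionStats.
    foreach (var kv in stats.Regions)
        Console.WriteLine($"region {kv.Key}: {kv.Value.Stats.Count} shards");
}
```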
Public class GetCurrentRegions
Send this message to the ShardRegion actor to request CurrentRegions, which contains the addresses of all registered regions. Intended for testing purposes, to see when cluster sharding is "ready".
Public class GetShardRegionState
Send this message to a ShardRegion actor instance to request a CurrentShardRegionState which describes the current state of the region. The state contains information about what shards are running in this region and what entities are running on each of those shards.
Public class GetShardRegionStats
Send this message to the ShardRegion actor to request ShardRegionStats, which contains statistics about the currently running sharded entities in the entire region. Intended for testing purposes, to see when cluster sharding is "ready" or to monitor the state of the shard regions. For statistics covering the entire cluster, see GetClusterShardingStats.
Public class GracefulShutdown
Send this message to the ShardRegion actor to hand off all shards that are hosted by the ShardRegion; the ShardRegion actor will then be stopped. You can Watch(IActorRef) it to know when it is completed.
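A sketch from inside a supervising actor, assuming the GracefulShutdown.Instance singleton:

```csharp
// Request hand-off of all locally hosted shards, then wait for termination.
IActorRef region = ClusterSharding.Get(Context.System).ShardRegion("Counter");
Context.Watch(region);
region.Tell(GracefulShutdown.Instance);
// A later Terminated message for `region` signals that the hand-off completed
// and the ShardRegion actor has stopped.
```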
Public class HashCodeMessageExtractor
Convenience implementation of IMessageExtractor that constructs the ShardId based on the GetHashCode of the EntityId. The number of unique shards is limited by the given MaxNumberOfShards.
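A sketch of a concrete extractor; `Increment` is the hypothetical message from above and 100 is an illustrative shard cap:

```csharp
using Akka.Cluster.Sharding;

public sealed class CounterHashExtractor : HashCodeMessageExtractor
{
    // At most 100 distinct shard ids will ever be produced.
    public CounterHashExtractor() : base(100) { }

    // Only the entity id must be supplied; the shard id is derived
    // from its hash code by the base class.
    public override string EntityId(object message) =>
        (message as Increment)?.CounterId;
}
```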
Public class LeastShardAllocationStrategy
The default implementation of IShardAllocationStrategy allocates new shards to the ShardRegion with the least number of previously allocated shards. It picks shards for rebalancing handoff from the ShardRegion with the largest number of previously allocated shards. They will then be allocated to the ShardRegion with the least number of previously allocated shards, i.e. new members in the cluster. There is a configurable threshold of how large the difference must be to begin the rebalancing. The number of ongoing rebalancing processes can be limited.
Public class Passivate
If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (SetReceiveTimeout(Nullable&lt;TimeSpan&gt;)). If a message is already enqueued to the entity when it stops itself the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send this Passivate message to its parent ShardRegion. The specified wrapped StopMessage will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the `ShardRegion` between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity. PoisonPill is a perfectly fine StopMessage.
Public class PersistentShard
This actor creates child entity actors on demand that it is told to be responsible for. It is used when `rememberEntities` is enabled.
Public class PersistentShardCoordinator
Singleton coordinator that decides where shards should be allocated.
Public class PersistentShardCoordinator.AllocateShardResult
Result of PersistentShardCoordinator.AllocateShard is piped to self with this message.
Public class PersistentShardCoordinator.BeginHandOff
PersistentShardCoordinator initiates the rebalancing process by sending this message to all registered ShardRegion actors (including proxy only). They are supposed to discard their known location of the shard, i.e. start buffering incoming messages for the shard. They reply with PersistentShardCoordinator.BeginHandOffAck. When all have replied the PersistentShardCoordinator continues by sending PersistentShardCoordinator.HandOff to the ShardRegion responsible for the shard.
Public class PersistentShardCoordinator.BeginHandOffAck
Acknowledgement of PersistentShardCoordinator.BeginHandOff, sent by a ShardRegion back to the PersistentShardCoordinator.
Public class PersistentShardCoordinator.GetShardHome
ShardRegion requests the location of a shard by sending this message to the PersistentShardCoordinator.
Public class PersistentShardCoordinator.GracefulShutdownRequest
ShardRegion requests full handoff to be able to shut down gracefully.
Public class PersistentShardCoordinator.HandOff
When all ShardRegion actors have acknowledged the PersistentShardCoordinator.BeginHandOff the PersistentShardCoordinator sends this message to the ShardRegion responsible for the shard. The ShardRegion is supposed to stop all entities in that shard and when all entities have terminated reply with PersistentShardCoordinator.ShardStopped to the PersistentShardCoordinator.
Public class PersistentShardCoordinator.HostShard
PersistentShardCoordinator informs a ShardRegion that it is hosting this shard.
Public class PersistentShardCoordinator.RebalanceResult
Result of `rebalance` is piped to self with this message.
Public class PersistentShardCoordinator.Register
Public class PersistentShardCoordinator.RegisterAck
Public class PersistentShardCoordinator.RegisterProxy
Public class PersistentShardCoordinator.ShardHome
Public class PersistentShardCoordinator.ShardHomeAllocated
TBD
Public class PersistentShardCoordinator.ShardHomeDeallocated
TBD
Public class PersistentShardCoordinator.ShardRegionProxyRegistered
TBD
Public class PersistentShardCoordinator.ShardRegionProxyTerminated
TBD
Public class PersistentShardCoordinator.ShardRegionRegistered
TBD
Public class PersistentShardCoordinator.ShardRegionTerminated
TBD
Public class PersistentShardCoordinator.ShardStarted
Public class PersistentShardCoordinator.ShardStopped
Reply to PersistentShardCoordinator.HandOff, sent by the ShardRegion when all entities in the shard have been terminated.
Protected class PersistentShardCoordinator.State
Persistent state of the event sourced PersistentShardCoordinator.
Public class PersistentShardCoordinator.StateInitialized
TBD
Public class Shard
TBD
Public class Shard.CurrentShardState
TBD
Protected class Shard.EntityStarted
Protected class Shard.EntityStopped
Public class Shard.GetCurrentShardState
TBD
Public class Shard.GetShardStats
TBD
Protected class Shard.RestartEntities
When initialising a shard with remember entities enabled, this message is used to restart batches of entity actors at a time.
Protected class Shard.RestartEntity
When remembering entities and an entity stops itself without issuing a Passivate(IActorRef, Object), we restart it after a back-off using this message.
Protected class Shard.RetryPersistence
When a Shard.StateChange fails to write to the journal, we will retry it after a back-off.
Protected class Shard.ShardState
Persistent state of the Shard.
Public class Shard.ShardStats
TBD
Protected class Shard.SnapshotTick
The Snapshot tick for the shards.
Protected class Shard.StateChange
TBD
Public class ShardInitialized
We must be sure that a shard is initialized before we start sending messages to it. A shard could be terminated during initialization.
Public class ShardRegion
This actor creates child entity actors on demand for the shards that it is told to be responsible for. It delegates messages targeted to other shards to the responsible ShardRegion actor on other nodes.
Public class ShardRegionStats
TBD
Public class ShardResolvers
TBD
Public class ShardState
TBD
Public class TunningParameters
TBD
Interfaces
Public interface IClusterShardingSerializable
Marker interface for remote messages and persistent events/snapshots with a special serializer.
Public interface IMessageExtractor
Interface of functions to extract entity id, shard id, and the message to send to the entity from an incoming message.
Public interface IShardAllocationStrategy
Interface of the pluggable shard allocation and rebalancing logic used by the PersistentShardCoordinator.
Public interface IShardRegionCommand
TBD
Public interface IShardRegionQuery
TBD
Public interface PersistentShardCoordinator.ICoordinatorCommand
Messages sent to the coordinator.
Public interface PersistentShardCoordinator.ICoordinatorMessage
Messages sent from the coordinator.
Public interface PersistentShardCoordinator.IDomainEvent
Domain events for the persistent state of the event sourced PersistentShardCoordinator.
Protected interface Shard.IShardCommand
TBD
Public interface Shard.IShardQuery
TBD
Delegates
Public delegate IdExtractor
Interface of the partial function used by the ShardRegion to extract the entity id and the message to send to the entity from an incoming message. The implementation is application specific. If the partial function does not match, the message will be `unhandled`, i.e. posted as `Unhandled` messages on the event stream. Note that the extracted message does not have to be the same as the incoming message, to support wrapping in a message envelope that is unwrapped before sending to the entity actor.
Public delegate ShardResolver
Interface of the function used by the ShardRegion to extract the shard id from an incoming message. Only messages that passed the IdExtractor will be used as input to this function.
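As a sketch, the two delegates together might look like this, mirroring the hypothetical IMessageExtractor example above (`Increment`, `CounterActor`, an existing ActorSystem `system`, and the modulus of 100 are assumptions):

```csharp
using System;
using Akka.Actor;
using Akka.Cluster.Sharding;

// Extracts the entity id and the message to deliver; returning null marks
// the incoming message as unhandled.
IdExtractor idExtractor = message =>
    message is Increment inc ? Tuple.Create(inc.CounterId, (object)inc) : null;

// Maps a message (already matched by the IdExtractor) to a shard id.
ShardResolver shardResolver = message =>
    message is Increment inc
        ? (Math.Abs(inc.CounterId.GetHashCode()) % 100).ToString()
        : null;

IActorRef region = ClusterSharding.Get(system).Start(
    "Counter",
    Props.Create<CounterActor>(),
    ClusterShardingSettings.Create(system),
    idExtractor,
    shardResolver);
```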