ctdb - Clustered TDB
CTDB is a clustered database component in clustered Samba that provides a
high-availability load-sharing CIFS server cluster.
The main functions of CTDB are:
•Provide a clustered version of the TDB database with automatic rebuild/recovery of the databases upon node failures.
•Monitor nodes in the cluster and
services running on each node.
•Manage a pool of public IP addresses that are used to provide services to clients. Alternatively, CTDB can be used with LVS.
Combined with a cluster filesystem, CTDB provides a full high-availability (HA)
environment for services such as clustered Samba, NFS and other services.
A CTDB cluster is a collection of nodes with 2 or more network interfaces. All
nodes provide network (usually file/NAS) services to clients. Data served by
file services is stored on shared storage (usually a cluster filesystem) that
is accessible by all nodes.
CTDB provides an "all active" cluster, where services are load
balanced across all nodes.
CTDB uses a recovery lock to avoid a split brain, where a cluster becomes
partitioned and each partition attempts to operate independently.
Issues that can result from a split brain include file data corruption,
because file locking metadata may not be tracked correctly.
CTDB uses a cluster leader and follower model of cluster management. All
nodes in a cluster elect one node to be the leader. The leader node
coordinates privileged operations such as database recovery and IP address
failover. CTDB refers to the leader node as the recovery master. The recovery
master node takes and holds the recovery lock to assert its privileged role in
the cluster.
By default, the recovery lock is implemented using a file (specified by
CTDB_RECOVERY_LOCK) residing in shared storage (usually) on a cluster
filesystem. To support a recovery lock the cluster filesystem must support
lock coherence. See ping_pong(1) for more details.
The recovery lock can also be implemented using an arbitrary cluster mutex
call-out by using an exclamation point ('!') as the first character of
CTDB_RECOVERY_LOCK. For example, a value of !/usr/bin/myhelper
would run the given helper with the specified arguments. See the
source code relating to cluster mutexes for clues about writing call-outs.
If a cluster becomes partitioned (for example, due to a communication failure)
and a different recovery master is elected by the nodes in each partition,
then only one of these recovery masters will be able to take the recovery
lock. The recovery master in the "losing" partition will not be able
to take the recovery lock and will be excluded from the cluster. The nodes in
the "losing" partition will elect each node in turn as their
recovery master so eventually all the nodes in that partition will be excluded.
CTDB does sanity checks to ensure that the recovery lock is held as expected.
CTDB can run without a recovery lock but this is not recommended as there will
be no protection from split brains.
Each node in a CTDB cluster has multiple IP addresses assigned to it:
•A single private IP address that is
used for communication between nodes.
•One or more public IP addresses that
are used to provide NAS or other services.
Each node is configured with a unique, permanently assigned private address.
This address is configured by the operating system. This address uniquely
identifies a physical node in the cluster and is the address that CTDB daemons
will use to communicate with the CTDB daemons on other nodes.
Private addresses are listed in the file specified by the CTDB_NODES
configuration variable (see ctdbd.conf(5), default /etc/ctdb/nodes).
This file contains the list of private addresses for all nodes in the cluster,
one per line. This file must be the same on all nodes in the cluster.
Private addresses should not be used by clients to connect to services provided
by the cluster.
It is strongly recommended that the private addresses are configured on a
private network that is separate from client networks. This is because the
CTDB protocol is both unauthenticated and unencrypted. If clients share the
private network then steps need to be taken to stop injection of packets to
relevant ports on the private addresses. It is also likely that CTDB protocol
traffic between nodes could leak sensitive information if it can be intercepted.
Example /etc/ctdb/nodes for a four node cluster:
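192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4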
Public addresses are used to provide services to clients. Public addresses are
not configured at the operating system level and are not permanently
associated with a particular node. Instead, they are managed by CTDB and are
assigned to interfaces on physical nodes at runtime.
The CTDB cluster will assign/reassign these public addresses across the
available healthy nodes in the cluster. When one node fails, its public
addresses will be taken over by one or more other nodes in the cluster. This
ensures that services provided by all public addresses are always available to
clients, as long as there are nodes available capable of hosting this address.
The public address configuration is stored in a file on each node specified by
the CTDB_PUBLIC_ADDRESSES configuration variable (see
ctdbd.conf(5), recommended /etc/ctdb/public_addresses). This file
contains a list of the public addresses that the node is capable of hosting,
one per line. Each entry also contains the netmask and the interface to which
the address should be assigned.
Example /etc/ctdb/public_addresses for a node that can host 4 public addresses,
on 2 different interfaces:
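10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2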
In many cases the public addresses file will be the same on all nodes. However,
it is possible to use different public address configurations on different nodes.
Example: 4 nodes partitioned into two subgroups:
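Node 0: /etc/ctdb/public_addresses
10.1.1.1/24 eth1
10.1.1.2/24 eth1

Node 1: /etc/ctdb/public_addresses
10.1.1.1/24 eth1
10.1.1.2/24 eth1

Node 2: /etc/ctdb/public_addresses
10.1.2.1/24 eth2
10.1.2.2/24 eth2

Node 3: /etc/ctdb/public_addresses
10.1.2.1/24 eth2
10.1.2.2/24 eth2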
In this example nodes 0 and 1 host two public addresses on the 10.1.1.x network
while nodes 2 and 3 host two public addresses for the 10.1.2.x network.
Public address 10.1.1.1 can be hosted by either of nodes 0 or 1 and will be
available to clients as long as at least one of these two nodes is available.
If both nodes 0 and 1 become unavailable then public address 10.1.1.1 also
becomes unavailable. 10.1.1.1 can not be failed over to nodes 2 or 3 since
these nodes do not have this public address configured.
The ctdb ip
command can be used to view the current assignment of public
addresses to physical nodes.
The current status of each node in the cluster can be viewed by the ctdb status command.
A node can be in one of the following states:
OK: This node is healthy and fully functional. It hosts public addresses to
provide services.
DISCONNECTED: This node is not reachable by other nodes via the private
network. It is not currently participating in the cluster. It does not host
public addresses to provide services. It might be shut down or rebooted.
DISABLED: This node has been administratively disabled. This node is partially
functional and participates in the cluster. However, it does not host public
addresses to provide services.
UNHEALTHY: A service provided by this node has failed a health check and
should be investigated. This node is partially functional and participates in
the cluster. However, it does not host public addresses to provide services.
Unhealthy nodes should be investigated and may require an administrative
action to rectify.
BANNED: CTDB is not behaving as designed on this node. For example, it may
have failed too many recovery attempts. Such nodes are banned from
participating in the cluster for a configurable time period before they
attempt to rejoin the cluster. A banned node does not host public addresses to
provide services. All banned nodes should be investigated and may require an
administrative action to rectify.
STOPPED: This node has been administratively excluded from the cluster. A
stopped node does not participate in the cluster and does not host public
addresses to provide services. This state can be used while performing
maintenance on a node.
PARTIALLYONLINE: A node that is partially online participates in a cluster
like a healthy (OK) node. Some interfaces to serve public addresses are down,
but at least one interface is up. See also ctdb ifaces.
Cluster nodes can have several different capabilities enabled. These are listed below.
RECMASTER: Indicates that a node can become the CTDB cluster recovery master.
The current recovery master is decided via an election held by all active
nodes with this capability.
Default is YES.
LMASTER: Indicates that a node can be the location master (LMASTER) for
database records. The LMASTER always knows which node has the latest copy of a
record in a volatile database.
Default is YES.
The RECMASTER and LMASTER capabilities can be disabled when CTDB is used to
create a cluster spanning across WAN links. In this case CTDB acts as a WAN accelerator.
LVS is a mode where CTDB presents one single IP address for the entire cluster.
This is an alternative to using public IP addresses and round-robin DNS to
load-balance clients across the cluster.
This is similar to using a layer-4 load-balancing switch but with some limitations.
One extra LVS public address is assigned on the public network to each LVS
group. Each LVS group is a set of nodes in the cluster that presents the same
LVS public address to the outside world. Normally there would only be
one LVS group spanning an entire cluster, but in situations where one CTDB
cluster spans multiple physical sites it might be useful to have one LVS group
for each site. There can be multiple LVS groups in a cluster but each node can
only be a member of one LVS group.
Client access to the cluster is load-balanced across the HEALTHY nodes in an LVS
group. If no HEALTHY nodes exist then all nodes in the group are used,
regardless of health status. CTDB will, however, never load-balance LVS
traffic to nodes that are BANNED, STOPPED, DISABLED or DISCONNECTED. The ctdb
lvs command is used to show which nodes are currently load-balanced across.
In each LVS group, one of the nodes is selected by CTDB to be the LVS master.
This node receives all traffic from clients coming in to the LVS public
address and multiplexes it across the internal network to one of the nodes
that LVS is using. When responding to the client, that node will send the data
back directly to the client, bypassing the LVS master node. The command ctdb
lvsmaster will show which node is the current LVS master.
The path used for a client I/O is:
1.Client sends request packet to LVSMASTER.
2.LVSMASTER passes the request on to one node
across the internal network.
3.Selected node processes the request.
4.Node responds back to client.
This means that all incoming traffic to the cluster will pass through one
physical node, which limits scalability. You cannot send more data to the LVS
address than one physical node can multiplex. This means that you should not
use LVS if your I/O pattern is write-intensive, since you will be limited by
the network bandwidth that node can handle. LVS works very well
for read-intensive workloads where only smallish READ requests are going
through the LVSMASTER bottleneck and the majority of the traffic volume (the
data in the read replies) goes straight from the processing node back to the
clients. For read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
Note: you can use LVS and public addresses at the same time.
If you use LVS, you must have a permanent address configured for the public
interface on each node. This address must be routable and the cluster nodes
must be configured so that all traffic back to client hosts is routed through
this interface. This is also required in order to allow samba/winbind on the
node to talk to the domain controller. This LVS IP address can not be used to
initiate outgoing traffic.
Make sure that the domain controller and the clients are reachable from a node
before you enable LVS. Also ensure that outgoing traffic to these hosts
is routed out through the configured public interface.
To activate LVS on a CTDB node you must specify the CTDB_LVS_PUBLIC_IFACE,
CTDB_LVS_PUBLIC_IP and CTDB_LVS_NODES configuration variables. CTDB_LVS_NODES
specifies a file containing the private address of all nodes in the current
node's LVS group.
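For example, the LVS configuration in /etc/sysconfig/ctdb might look like the
following; the interface name, public IP address and nodes file path shown are
illustrative only:

CTDB_LVS_PUBLIC_IFACE=eth1
CTDB_LVS_PUBLIC_IP=10.1.1.100
CTDB_LVS_NODES=/etc/ctdb/lvs_nodes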
Normally any node in an LVS group can act as the LVS master. Nodes that are
highly loaded due to other demands may be flagged with the
"slave-only" option in the CTDB_LVS_NODES
file to limit the
LVS functionality of those nodes.
LVS nodes file that excludes 192.168.1.4 from being the LVS master node:
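192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4 slave-only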
CTDB tracks TCP connections from clients to public IP addresses, on known ports.
When an IP address moves from one node to another, all existing TCP
connections to that IP address are reset. The node taking over this IP address
will also send gratuitous ARPs (for IPv4) or neighbour advertisements (for
IPv6). This allows clients to reconnect quickly, rather than waiting for TCP
timeouts, which can be very long.
It is important that established TCP connections do not survive a release and
take of a public IP address on the same node. Such connections can get out of
sync with sequence and ACK numbers, potentially causing a disruptive ACK storm.
NAT gateway (NATGW) is an optional feature that is used to configure fallback
routing for nodes. This allows cluster nodes to connect to external services
(e.g. DNS, AD, NIS and LDAP) when they do not host any public addresses (e.g.
when they are unhealthy).
This also applies to node startup because CTDB marks nodes as UNHEALTHY until
they have passed a "monitor" event. In this context, NAT gateway
helps to avoid a "chicken and egg" situation where a node needs to
access an external service to become healthy.
Another way of solving this type of problem is to assign an extra static IP
address to a public interface on every node. This is simpler but it uses an
extra IP address per node, while NAT gateway generally uses only one extra IP
address per NATGW group.
One extra NATGW public address is assigned on the public network to each NATGW
group. Each NATGW group is a set of nodes in the cluster that shares the same
NATGW address to talk to the outside world. Normally there would only be one
NATGW group spanning an entire cluster, but in situations where one CTDB
cluster spans multiple physical sites it might be useful to have one NATGW
group for each site.
There can be multiple NATGW groups in a cluster but each node can only be a
member of one NATGW group.
In each NATGW group, one of the nodes is selected by CTDB to be the NATGW master
and the other nodes are considered to be NATGW slaves. NATGW slaves establish a
fallback default route to the NATGW master via the private network. When a
NATGW slave hosts no public IP addresses then it will use this route for
outbound connections. The NATGW master hosts the NATGW public IP address and
routes outgoing connections from slave nodes via this IP address. It also
establishes a fallback default route.
NATGW is usually configured similar to the following example configuration:
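CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
CTDB_NATGW_PUBLIC_IFACE=eth0
CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1

The addresses and interface shown above are examples only; use values
appropriate to your network.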
Normally any node in a NATGW group can act as the NATGW master. Some
configurations may have special nodes that lack connectivity to a public
network. In such cases, those nodes can be flagged with the
"slave-only" option in the CTDB_NATGW_NODES
file to limit the
NATGW functionality of those nodes.
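For example, a CTDB_NATGW_NODES file for the four node cluster above, in which
192.168.1.4 should never become the NATGW master, might contain:

192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4 slave-only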
See the NAT GATEWAY section in ctdbd.conf(5) for more details of NATGW
configuration.
When the NATGW functionality is used, one of the nodes is selected to act as a
NAT gateway for all the other nodes in the group when they need to communicate
with the external services. The NATGW master is selected to be a node that is
most likely to have usable networks.
The NATGW master hosts the NATGW public IP address CTDB_NATGW_PUBLIC_IP on the
configured public interface CTDB_NATGW_PUBLIC_IFACE and acts as a router,
masquerading outgoing connections from slave nodes via this IP address. If
CTDB_NATGW_DEFAULT_GATEWAY is set then it also establishes a fallback default
route to this configured gateway with a metric of 10. A metric 10 route is
used so it can co-exist with other default routes that may be available.
A NATGW slave establishes its fallback default route to the NATGW master via the
private network CTDB_NATGW_PRIVATE_NETWORK
with a metric of 10. This
route is used for outbound connections when no other default route is
available because the node hosts no public addresses. A metric 10 route is
used so that it can co-exist with other default routes that may be available
when the node is hosting public addresses.
CTDB_NATGW_STATIC_ROUTES can be used to have NATGW create more specific
routes instead of just default routes.
This is implemented in the 11.natgw eventscript. Please see the eventscript file
and the NAT GATEWAY section in ctdbd.conf(5) for more details.
Policy routing is an optional CTDB feature to support complex network
topologies. Public addresses may be spread across several different networks
(or VLANs) and it may not be possible to route packets from these public
addresses via the system's default route. Therefore, CTDB has support for
policy routing via the 13.per_ip_routing eventscript. This allows routing to
be specified for packets sourced from each public address. The routes are
added and removed as CTDB moves public addresses between nodes.
There are 4 configuration variables related to policy routing:
CTDB_PER_IP_ROUTING_CONF, CTDB_PER_IP_ROUTING_RULE_PREF,
CTDB_PER_IP_ROUTING_TABLE_ID_LOW and CTDB_PER_IP_ROUTING_TABLE_ID_HIGH. See
the POLICY ROUTING section in ctdbd.conf(5) for more details.
The format of each line of CTDB_PER_IP_ROUTING_CONF is:
<public_address> <network> [ <gateway> ]
Leading whitespace is ignored and arbitrary whitespace may be used as a
separator. Lines that have a "public address" item that doesn't
match an actual public address are ignored. This means that comment lines can
be added using a leading character such as '#', since this will never match an
actual public address.
A line without a gateway indicates a link local route.
For example, consider the configuration line:
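192.168.1.99 192.168.1.0/24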
If the corresponding public_addresses line is:
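192.168.1.99/24 eth2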
CTDB_PER_IP_ROUTING_RULE_PREF is 100, and CTDB adds the address to eth2, then
the following routing information is added:
ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via eth2.
The ip rule
command will show (something like - depending on other public
addresses and other routes on the system):
0:      from all lookup local
100:    from 192.168.1.99 lookup ctdb.192.168.1.99
32766:  from all lookup main
32767:  from all lookup default

ip route show table ctdb.192.168.1.99 will show:

192.168.1.0/24 dev eth2 scope link
The usual use for a line containing a gateway is to add a default route
corresponding to a particular source address. Consider this line of configuration:
192.168.1.99 0.0.0.0/0 192.168.1.1
In the situation described above this will cause an extra routing command to be executed:
ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
With both configuration lines, ip route show table ctdb.192.168.1.99 will show:
192.168.1.0/24 dev eth2 scope link
default via 192.168.1.1 dev eth2
Here is a more complete example configuration:

192.168.1.98 192.168.1.0/24
192.168.1.98 192.168.200.0/24 192.168.1.254
192.168.1.98 0.0.0.0/0 192.168.1.1
192.168.1.99 192.168.1.0/24
192.168.1.99 192.168.200.0/24 192.168.1.254
192.168.1.99 0.0.0.0/0 192.168.1.1
This routes local packets as expected, the default routes are as previously
discussed, but packets to 192.168.200.0/24 are routed via the alternate
gateway 192.168.1.254.
When certain state changes occur in CTDB, it can be configured to perform
arbitrary actions via a notification script. For example, sending SNMP traps
or emails when a node becomes unhealthy or similar.
This is activated by setting the CTDB_NOTIFY_SCRIPT
variable. The specified script must be executable.
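For example, in the CTDB configuration:

CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh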
Use of the provided /etc/ctdb/notify.sh script is recommended. It executes
files in /etc/ctdb/notify.d/.
CTDB currently generates notifications after CTDB changes to these states:
Valid values for DEBUGLEVEL are:
It is possible to have a CTDB cluster that spans across a WAN link. For example
where you have a CTDB cluster in your datacentre but you also want to have one
additional CTDB node located at a remote branch site. This is similar to how a
WAN accelerator works but with the difference that while a WAN-accelerator
often acts as a Proxy or a MitM, in the ctdb remote cluster node configuration
the Samba instance at the remote site IS the genuine server, not a proxy and
not a MitM, and thus provides 100% correct CIFS semantics to clients.
Think of the cluster as one single multihomed samba server where one of the NICs (the
remote node) is very far away.
NOTE: This does require that the cluster filesystem you use can cope with
WAN-link latencies. Not all cluster filesystems can handle WAN-link latencies!
Whether this will provide very good WAN-accelerator performance or whether it
will perform very poorly depends entirely on how optimized your cluster filesystem
is in handling high latency for data and metadata operations.
To activate a node as being a remote cluster node you need to set the following
two parameters in /etc/sysconfig/ctdb for the remote node:
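CTDB_CAPABILITY_RECMASTER=no
CTDB_CAPABILITY_LMASTER=no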
Verify with the command "ctdb getcapabilities" that the node no longer has the
recmaster or the lmaster capabilities.
This documentation was written by Ronnie Sahlberg, Amitay Isaacs, Martin Schwenke
Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program; if not, see http://www.gnu.org/licenses