I have recently been exploring the Elgg scalability question by looking at how easy it would be to get the latest version of Elgg (1.6) running on a MySQL cluster.

In this article I will document the process, but first I should point out:

  • This is highly experimental and not endorsed in any way.
  • It is built against Elgg 1.6.1
  • This is not canonical and doesn’t reflect anything to do with the roadmap
  • This has not been extensively tested so caveat emptor.

Setting up the cluster

The first step is to set up the cluster on your equipment.

A MySQL cluster consists of a management node and several data nodes connected together by a network. Because I was running rather low on hardware, I cheated here and created each node as a VirtualBox image on my laptop – but the principle is the same.

Each node is an Ubuntu install (although you can use pretty much any OS) with two (virtual) network cards, one connected to the wider network (to install packages) and another on an internal network. If you do this for real you should consider removing the internet-facing card once you’ve set everything up, since a cluster isn’t secure enough to be run on the wider internet.

In my test configuration I had three nodes with name/internal IP as follows:

  • HHCluster1/192.168.2.1 – Management node & web server
  • HHCluster2/192.168.2.2 – First data node
  • HHCluster3/192.168.2.3 – Second data node

HHCluster1 – The management node

Install MySQL, Apache etc. This should be a simple matter of apt-getting the relevant packages. Clustering (ndb) support is built into the version of MySQL bundled with Ubuntu, but this may not be the case universally, so check!
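
For reference, on the Ubuntu release I was using this amounted to something like the following (exact package names vary between releases, so treat this as illustrative):

    apt-get install mysql-server apache2 php5 libapache2-mod-php5 php5-mysql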

You need to create a file in /etc/mysql/ called ndb_mgmd.cnf. This should contain the following:


[NDBD DEFAULT]
NoOfReplicas=2 # Number of copies of the data kept across the cluster
DataMemory=80M # How much memory to allocate for data storage (change for larger clusters)
IndexMemory=18M # How much memory to allocate for index storage (change for larger clusters)
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]

[NDB_MGMD]
HostName=192.168.2.1 # IP address of this system

# Now we describe each node on the system

[NDBD]
# First data node
HostName=192.168.2.2
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M

[NDBD]
# Second data node
HostName=192.168.2.3
DataDir=/var/lib/mysql-cluster
BackupDataDir=/var/lib/mysql-cluster/backup
DataMemory=512M

# One [MYSQLD] per data storage node
[MYSQLD]
[MYSQLD]

Data nodes (HHCluster2 & 3)
You must now configure your data nodes:

  1. Create the data directories, as root type:

    mkdir -p /var/lib/mysql-cluster/backup
    chown -R mysql:mysql /var/lib/mysql-cluster

  2. Edit your /etc/mysql/my.cnf and add the following to the [mysqld] section:

    ndbcluster
    # Replace the following with the IP address of your management server
    ndb-connectstring=192.168.2.1

  3. Again in /etc/mysql/my.cnf uncomment and edit the [MYSQL_CLUSTER] section so it contains the location of your management server:

    [MYSQL_CLUSTER]
    ndb-connectstring=192.168.2.1

  4. You need to create your database on each node (this is because clustering operates on a table level rather than a database level):

    CREATE DATABASE elggcluster;

Starting the cluster

  1. Start the management node:

    /etc/init.d/mysql-ndb-mgm start

  2. Start your data nodes:

    /etc/init.d/mysql restart
    /etc/init.d/mysql-ndb restart

Verifying the cluster
You should now have the cluster up and running. You can verify this by logging into your management node, starting ndb_mgm, and typing show at the prompt.
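
A healthy two-node cluster produces output along these lines (node IDs and version strings will differ on your system):

    # ndb_mgm
    ndb_mgm> show
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=2    @192.168.2.2  (Nodegroup: 0, Master)
    id=3    @192.168.2.3  (Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @192.168.2.1

    [mysqld(API)]   2 node(s)
    id=4    @192.168.2.2
    id=5    @192.168.2.3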

A word on access…

The cluster is now set up and will replicate tables (created with the ndbcluster engine – more on that later), but that is only useful up to a point. Right now we don’t have a single endpoint to direct queries at, so query routing has to be done at the application level.

We could take advantage of Elgg’s built-in split reads and writes, but this would only allow us to use a maximum of two nodes. A better solution would be to use a load balancer such as Ultramonkey to direct each query to the appropriate server, allowing us to scale much further.

I didn’t really have time to get into this, so I am using the somewhat simpler mysql-proxy.

  1. On HHCluster1 install and run mysql-proxy:

    apt-get install mysql-proxy
    mysql-proxy --proxy-backend-addresses=192.168.2.2:3306 --proxy-backend-addresses=192.168.2.3:3306

  2. On your data nodes edit your /etc/mysql/my.cnf file. Find bind-address and change its IP to the node’s IP address. Also ensure that you have commented out any occurrence of skip-networking.
  3. Again on your data nodes, log in to mysql and grant access to your cluster database to a user connecting from HHCluster1 – for example:

    GRANT ALL ON elggcluster.* TO `root`@`HHCluster1.local` IDENTIFIED BY '[some password]'

Installing Elgg

Unfortunately as it stands, you need to make some code changes to the vanilla version of Elgg in order for it to work in a clustered environment. These changes are necessary because of the restrictions placed on us by the ndbcluster engine.

Two things in particular cause us problems – ndbcluster doesn’t support FULLTEXT indexes, and it also doesn’t support indexes over TEXT or BLOB fields.

FULLTEXT indexes are used for searching and are largely unused in a vanilla Elgg install, so I removed them. Equally, one can live without most of the indexes over blob and text fields, the exception being the one on the metastrings table.

Metastrings is accessed a lot, so the index is critical. Therefore I added an extra varchar field, string_index, and modified the code to populate it with the first 50 characters of the indexed text – this is equivalent to the existing index:

CREATE TABLE `prefix_metastrings` (
`id` int(11) NOT NULL auto_increment,
`string` TEXT NOT NULL,
`string_index` varchar(50) NOT NULL,
PRIMARY KEY (`id`),
KEY `string_index` (`string_index`)
) ENGINE=ndbcluster DEFAULT CHARSET=utf8;

And the modified query:

$row = get_data_row("SELECT * from {$CONFIG->dbprefix}metastrings where string=$cs'$string' and string_index='$string_index' limit 1");

MySQL’s optimiser checks the index first, so this doesn’t lose a significant amount of efficiency (at least according to the EXPLAIN command).
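
For completeness, the write path then just needs to populate the new column with a truncated copy of the string before inserting. A minimal sketch (variable names and escaping are illustrative, not the exact patched code):

// Illustrative only: keep the first 50 characters in string_index so the
// varchar index matches the lookup query above.
$string_index = substr($string, 0, 50);
insert_data("INSERT INTO {$CONFIG->dbprefix}metastrings (string, string_index)
             VALUES ('$string', '$string_index')");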

» Modified schema

The next problem is that the system log currently uses INSERT DELAYED to insert the log data. This is also not supported under the clustered engine.

There are a number of approaches we could take, including using Elgg’s delayed write functionality or writing a plugin which replaces the system log and writes to a different location.

For the purposes of this test I decided to just comment out the code in system_log().

What won’t work
Currently there are a couple of core things that won’t work with these changes. Here is a summary, which is by no means complete:

  • The system log (as previously described). This isn’t too much of a show stopper as the river code introduced in Elgg 1.5 no longer uses this.
  • The log rotate plugin, as this attempts to copy the table into the ARCHIVE engine type and we can’t guarantee which node it will be executed on in this scenario.
  • Any third party plugins which attempt to access the metastrings table directly (of which there should be none, as direct table access is a big no-no!)

Anyway, here is a patch against the released version of 1.6.1 containing all the code changes I made. Once you have applied this patch to your Elgg install you should be able to proceed with the normal Elgg installation.

Let me know any feedback you may have!

» Elgg Clustering patch for Elgg 1.6.1

Top image “Birds-eye view of the 10,240-processor SGI Altix supercomputer housed at the NASA Advanced Supercomputing facility.”

Another one of Elgg’s less documented but very powerful features is the ability to expose functionality from the core and user modules in a standard way via a REST-like API.

This gives you the opportunity to develop interoperable web services and provide them to the users of your site, all in a standardised way.

The endpoint

To make an API call you must direct your query at a special URL. This query will be either a GET or a POST (depending on the command you are executing); the specific endpoint you use depends on the format you want the result returned in.

The endpoint:

http://yoursite.com/pg/api/[protocol]/[return format]/

Where:

  • [protocol] is the protocol being used; at the moment only “rest” is supported.
  • [return format] is the format you want your information returned in, either “php”, “json” or “xml”.

This endpoint should then be passed the method and any parameters as GET variables, so for example:

http://yoursite.com/pg/api/rest/xml/?method=test.test&myparam=foo&anotherparam=bar

Would pass “foo” and “bar” as the given named parameters to the function “test.test” and return the result in XML format.

Notice here also that the API uses the “PG” page handler extension; this means it would be a relatively simple matter to add a new API protocol, or to replace the entire API subsystem in a module – should you be so inclined.

Return result

The result of the API call will be an entity encoded in your chosen format.

This entity will have a “status” parameter – zero for success, non-zero denotes an error. Result data will be in the “result” parameter. You may also receive some messages and debug information.
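
For example, a successful call via the json endpoint comes back looking roughly like this (the exact structure, and any extra messages or debug fields, depend on the call and your configuration):

    {
        "status": 0,
        "result": "..."
    }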

Exporting a function

Any Elgg function – core or module – can be exposed via the API. All you have to do is declare it using expose_function() from within your code, passing the method name, handler and any parameters (note that these parameters must be declared in the same order as they appear in your function).
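
As a rough illustration, here is how the test.test method used earlier might be declared from a plugin. The handler name is hypothetical, and the exact expose_function() signature and parameter array format can vary between Elgg versions, so check against your install:

<?php
// Hypothetical handler – receives the parameters in the declared order.
function my_test_test($myparam, $anotherparam)
{
	return "myparam=$myparam, anotherparam=$anotherparam";
}

// Declare it to the API as "test.test".
expose_function(
	'test.test',            // method name used in ?method=
	'my_test_test',         // handler function
	array(                  // parameters, in the declared order
		'myparam'      => array('type' => 'string'),
		'anotherparam' => array('type' => 'string')
	),
	'A simple test method'  // description shown by system.api.list
);
?>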

Listing functions

You can see a list of all registered functions using the built-in API command “system.api.list”; this is also a useful test to see whether your client is configured correctly.

E.g.

http://yoursite.com/pg/api/rest/xml/?method=system.api.list

Authorising and authenticating

Most commands will require some form of authorisation in order to function. There are two main types of authorisation – protocol level, which determines whether a given client is permitted to connect, and user level, whereby a user requires a special token in lieu of a username and password.

Protocol level authentication
Protocol level authentication is a way to ensure that commands only come from approved clients to which you have previously given keys. This is in keeping with many web-based API systems and permits you to disconnect clients who abuse your system, or to track usage for accounting purposes.

The client must send an HMAC signature together with a set of special HTTP headers when making a call. This ensures that the API call is being made from the stated client and that the data has not been tampered with.

Eagle-eyed readers with long memories will see a lot of similarity with the ElggVoices API I wrote about previously.

The HMAC must be constructed over the following data:

  • The Secret Key provided by the target Elgg install (as provided easily by the APIAdmin plugin).
  • The current unix time (including microseconds) as a floating point decimal, produced by microtime(true).
  • Your API key identifying you to the Elgg api server (companion to your secret key).
  • URLEncoded string representation of any GET variable parameters, eg “method=test.test&foo=bar”
  • If you are sending post data, the hash of this data.

Some extra information must be added to the HTTP header in order for this data to be correctly processed:

  • X-Elgg-apikey – The API key (not the secret key!)
  • X-Elgg-time – Microtime used in the HMAC calculation
  • X-Elgg-hmac – The HMAC as hex characters.
  • X-Elgg-hmac-algo – The algorithm used in the HMAC calculation – eg, sha1, md5 etc

If you are sending POST data you must also send:

  • X-Elgg-posthash – The hash of the POST data.
  • X-Elgg-posthash-algo – The algorithm used to produce the POST data hash – eg, md5.
  • Content-type – The content type of the data you are sending (if in doubt use application/octet-stream).
  • Content-Length – The length in bytes of your POST data.

Much of this will be handled for you if you use the built-in Elgg API client.
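
If you are rolling your own client, the following sketch shows roughly how the signature and headers fit together. The concatenation order follows the list above, but the exact byte-for-byte format is defined by Elgg’s own HMAC code, so verify against the bundled client before relying on it:

<?php
// Client-side sketch only – not the canonical Elgg implementation.
$api_key    = 'your-api-key';       // identifies this client to the server
$secret_key = 'your-secret-key';    // shared secret, never sent over the wire
$time       = (string) microtime(true);
$get_vars   = 'method=test.test&myparam=foo&anotherparam=bar';

// Build the HMAC over the time, API key and GET variables
// (include the hash of your POST data here too if you are sending any).
$ctx = hash_init('sha1', HASH_HMAC, $secret_key);
hash_update($ctx, $time);
hash_update($ctx, $api_key);
hash_update($ctx, $get_vars);
$hmac = hash_final($ctx);

// Headers to send alongside the request.
$headers = array(
	"X-Elgg-apikey: $api_key",
	"X-Elgg-time: $time",
	"X-Elgg-hmac: $hmac",
	"X-Elgg-hmac-algo: sha1",
);
?>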

User level tokens

User level tokens are used to identify a specific user on the target system, in much the same way as if they were to log in with their user name and password, but without the need to send this for every API call.

Tokens are time limited, and so it will be necessary for your client to periodically refresh the token it uses to identify the user.

Tokens are generated by using the API command “auth.gettoken” and passing the username and password as parameters, eg:

http://yoursite.com/pg/api/rest/xml/?method=auth.gettoken&username=foo&password=bar
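
Once you have a token, it is passed along with subsequent API calls in place of the username and password. In the version I was working with I believe it travels as a GET parameter called auth_token, but this detail has shifted between versions so check your install:

    http://yoursite.com/pg/api/rest/xml/?method=some.method&auth_token=[token]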

Anonymous methods
Anonymous methods (such as “system.api.list”) can be executed without any form of authentication, accepting connections from any client regardless of whether a user token is provided. This is useful in certain situations, but it goes without saying that you should not expose sensitive functionality this way.

To do so, set $anonymous=true in your call to expose_function().
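
For example, something along these lines (the position of the $anonymous argument depends on your Elgg version, so treat this as a sketch and check the signature in your install):

// Hypothetical: expose mymodule_ping with no authentication required.
// The final "true" is assumed to be the $anonymous flag.
expose_function('mymodule.ping', 'mymodule_ping', null, 'Liveness check', 'GET', true);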

Image “In UR Reality” by XKCD

Sometimes things need to be done without user interaction – for example, database optimisation or log rotation.

For this, Elgg has a cron endpoint.

Cron is a unix tool which executes commands at a specific time of day (other operating systems have similar tools). It keys off a file called a crontab – an example file, crontab.example, is included with Elgg.

The crontab calls a simplified yet powerful cron endpoint – http://yoursite/pg/cron/PERIOD, where PERIOD is one of the following (an illustrative crontab entry follows the list):

  • reboot – Execute on system reboot
  • minute – Execute every minute
  • fiveminute – Execute every five minutes
  • fifteenmin – Execute every fifteen minutes
  • halfhour – Execute every half hour
  • hourly – Execute once every hour
  • daily – Execute every day
  • weekly – Execute weekly
  • monthly – Execute once a month
  • yearly – Execute every year
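
For illustration, a crontab entry hitting the five-minute endpoint might look like the line below – the URL and HTTP client are placeholders, and the bundled crontab.example covers the full set of periods:

    # Illustrative crontab line – adjust the URL and fetch tool to your install
    */5 * * * * wget -q -O /dev/null http://yoursite.com/pg/cron/fiveminute/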

When one of these endpoints is hit by your crontab, a plugin hook is triggered. To make use of this, register a plugin hook as follows:

register_plugin_hook('cron', PERIOD, 'my_cron_handler');

Where PERIOD is one of the key words listed above. Here is some sample code using Cron – in this case it is taken from the system log rotation module I added to SVN today.

<?php
/**
 * Elgg log rotator.
 *
 * @package ElggLogRotate
 * @license http://www.gnu.org/licenses/old-licenses/gpl-2.0.html GNU Public License version 2
 * @author Curverider Ltd
 * @copyright Curverider Ltd 2008
 * @link http://elgg.com/
 */

/**
 * Initialise the plugin.
 */
function logrotate_init()
{
	$period = get_plugin_setting('period', 'logrotate');
	switch ($period)
	{
		case 'weekly':
		case 'monthly':
		case 'yearly':
			break;
		default:
			$period = 'monthly';
	}

	// Register cron hook
	register_plugin_hook('cron', $period, 'logrotate_cron');
}

/**
 * Trigger the log rotation.
 */
function logrotate_cron($hook, $entity_type, $returnvalue, $params)
{
	$resulttext = elgg_echo("logrotate:logrotated");
	if (!archive_log()) {
		$resulttext = elgg_echo("logrotate:lognotrotated");
	}

	return $returnvalue . $resulttext;
}

// Initialise plugin
register_elgg_event_handler('init', 'system', 'logrotate_init');
?>