Our websites and applications are built on a framework that uses a stack of open-source software technologies.

At the heart of our framework architecture is a bespoke server-side model-view-controller framework written in PHP that dynamically generates client-side web content and business applications utilising HTML5, CSS3, and JavaScript.

Single-page websites and applications

Our websites and applications are implemented as single-page applications that provide a user experience similar to that of desktop applications.

The single-page application renders all content on the server. The first or main page is loaded in full; subsequent pages dynamically load only page-specific content via asynchronous requests, leaving the common elements of the main page intact and without downloading redundant content. The main page never reloads, and control never transfers to another page.
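
A rough sketch of how this server-side dispatch might look, assuming the conventional X-Requested-With header marks asynchronous requests; the page parameter and helper functions are placeholders for illustration, not the framework's actual API:

  <?php
  // Serve the full page on the first request, and only the page-specific
  // fragment for asynchronous follow-up requests, so common elements of the
  // main page are never downloaded again.

  function renderFragment(string $page): string
  {
      // In the framework this content would be generated dynamically;
      // here it is a simple placeholder.
      return '<section id="content">Content for ' . htmlspecialchars($page) . '</section>';
  }

  function renderFullPage(string $page): string
  {
      // Common elements (navigation, styles, scripts) plus the initial fragment.
      return '<!DOCTYPE html><html><body><nav>...</nav>'
           . renderFragment($page)
           . '</body></html>';
  }

  function isAsyncRequest(): bool
  {
      // Many client-side libraries set this header on asynchronous requests.
      return ($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') === 'XMLHttpRequest';
  }

  $page = $_GET['page'] ?? 'home';
  header('Content-Type: text/html; charset=utf-8');
  echo isAsyncRequest() ? renderFragment($page) : renderFullPage($page);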

Operational components

The framework comprises a number of operational components:

  • Metadata Dictionary
    Part of the administrator control panel is a database discovery batch process that reverse-engineers the database schema and populates a metadata dictionary. It gathers information about tables, attributes, data types, and foreign key relationships and their cardinalities. It is run once after each database schema change, to synchronise the metadata dictionary with the physical database schema; it does not generate code. A minimal discovery sketch follows this list.
  • Roles and Responsibilities Manager
    Part of the administrator control panel is an interactive application that facilitates administration of role-based privileges to control access to data and functions. Roles and responsibilities persist beyond database re-discovery, but may need to be checked after the discovery batch process has been run, to synchronise role definitions with the metadata dictionary.
  • Application Server
    The application server is database-driven, taking its rules from the metadata dictionary and roles and responsibilities definitions to control access to data and functions. It dynamically generates, on demand at runtime, the user interface complete with data-aware components and forms (view), and the back-end application and database objects (model). Live objects persist for the duration of a transaction and are then discarded again.
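
A minimal sketch of what the discovery step could look like against a MySQL-compatible information_schema; the connection details and the shape of the in-memory dictionary are assumptions for illustration, not the framework's actual implementation, which persists the result into its own dictionary tables:

  <?php
  // Read tables, columns, data types and foreign keys for one schema and
  // build a metadata dictionary.

  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret',
                 [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);
  $schema     = 'appdb';
  $dictionary = [];

  // Tables, attributes, and data types.
  $columns = $pdo->prepare(
      'SELECT table_name AS tbl, column_name AS col, data_type AS dtype, is_nullable AS nullable
         FROM information_schema.columns
        WHERE table_schema = ?'
  );
  $columns->execute([$schema]);
  foreach ($columns->fetchAll() as $c) {
      $dictionary[$c['tbl']]['columns'][$c['col']] = [
          'type'     => $c['dtype'],
          'nullable' => $c['nullable'] === 'YES',
      ];
  }

  // Foreign key relationships; cardinalities can be derived from uniqueness.
  $fks = $pdo->prepare(
      'SELECT table_name AS tbl, column_name AS col,
              referenced_table_name AS ref_tbl, referenced_column_name AS ref_col
         FROM information_schema.key_column_usage
        WHERE table_schema = ? AND referenced_table_name IS NOT NULL'
  );
  $fks->execute([$schema]);
  foreach ($fks->fetchAll() as $fk) {
      $dictionary[$fk['tbl']]['foreign_keys'][] = [
          'column'            => $fk['col'],
          'referenced_table'  => $fk['ref_tbl'],
          'referenced_column' => $fk['ref_col'],
      ];
  }

  print_r($dictionary);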

Access Control

Our applications protect the confidentiality and integrity of valuable and sensitive data and communications by incorporating a sophisticated access control model.

Login security

Login security is implemented using the decoupled aspects of authentication and authorisation:

  • Authentication is concerned with establishing the identity of the user.
  • Authorisation is based on organisational position and team membership, and controls role-based privileges.

Once authenticated, users can choose from a list of roles and applications they subscribe to, and are then logged in and granted access privileges commensurate with their designated role in the organisation.
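
To make the decoupling concrete, a simplified login flow might look like the following; the app_user, member, and role tables and the helper functions are illustrative assumptions rather than the framework's actual schema or API:

  <?php
  // 1. Authentication establishes the identity of the user.
  // 2. Authorisation derives the roles the user may choose from, based on
  //    organisational position and team membership.

  session_start();

  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret',
                 [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);

  function authenticate(PDO $pdo, string $login, string $password): ?array
  {
      $stmt = $pdo->prepare('SELECT id, password_hash FROM app_user WHERE login = ?');
      $stmt->execute([$login]);
      $user = $stmt->fetch();
      return ($user && password_verify($password, $user['password_hash'])) ? $user : null;
  }

  function availableRoles(PDO $pdo, int $personId): array
  {
      // The roles on offer depend on the user's memberships.
      $stmt = $pdo->prepare(
          'SELECT r.id, r.name
             FROM member m JOIN role r ON r.id = m.role_id
            WHERE m.person_id = ?'
      );
      $stmt->execute([$personId]);
      return $stmt->fetchAll();
  }

  $user = authenticate($pdo, $_POST['login'] ?? '', $_POST['password'] ?? '');
  if ($user === null) {
      http_response_code(401);
      exit('Authentication failed');
  }

  // The user picks one of these roles and is granted the matching privileges.
  $_SESSION['user_id'] = $user['id'];
  $_SESSION['roles']   = availableRoles($pdo, (int) $user['id']);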

Role-based privileges

Role-based privileges are defined in a multi-dimensional matrix based on:

  • Metadata (entities, attributes, and functions)
    Tables, processes, and reports are controlled by application-level security. Column-level functions ('view', 'edit') are controlled by role-specific field-level permissions. Row-level functions ('read', 'insert', 'update', 'delete') are controlled by role-specific record-level permissions.
  • Organisational domain hierarchy
    Row-level data belonging to the user's organisational domain is controlled by the user's position in the organisational hierarchy.
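
A simplified privilege check spanning these dimensions could look roughly like this; the permission arrays and helper functions are assumptions made for the example:

  <?php
  // Role-specific permissions keyed by table (metadata dimension), and the
  // organisational domain the user is in charge of (hierarchy dimension).
  $fieldPermissions  = ['invoice' => ['amount' => ['view'], 'status' => ['view', 'edit']]];
  $recordPermissions = ['invoice' => ['read', 'update']];
  $userDomain        = [10, 11, 12];   // organisation ids below the user's position

  function canEditColumn(array $perms, string $table, string $column): bool
  {
      return in_array('edit', $perms[$table][$column] ?? [], true);
  }

  function canPerform(array $perms, string $table, string $action): bool
  {
      return in_array($action, $perms[$table] ?? [], true);
  }

  function inDomain(array $domain, int $rowOrganisationId): bool
  {
      return in_array($rowOrganisationId, $domain, true);
  }

  // A row may only be updated if the role allows the action and the column,
  // and the row belongs to the user's organisational domain.
  $row = ['id' => 42, 'organisation_id' => 11, 'status' => 'open'];

  $allowed = canPerform($recordPermissions, 'invoice', 'update')
          && canEditColumn($fieldPermissions, 'invoice', 'status')
          && inDomain($userDomain, $row['organisation_id']);

  var_dump($allowed); // bool(true)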

Database abstraction

The persistence layer of our framework architecture is divided into two parts:

  1. Database abstraction API,
  2. Object-relational mapping that bridges the relational model and the object model / domain model.

Database abstraction defines interfaces, which are then implemented by concrete database engine-specific driver instances. The abstraction layer defines the common denominator of operations that every supported database platform must be able to perform.
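
In outline, the abstraction might be expressed like this; the interface and class names are placeholders rather than the framework's actual API:

  <?php
  // The interface describes the common denominator of operations; each
  // engine-specific driver implements it.

  interface DatabaseDriver
  {
      public function connect(string $dsn, string $user, string $password): void;
      public function query(string $sql, array $params = []): array;
      public function beginTransaction(): void;
      public function commit(): void;
      public function rollBack(): void;
  }

  final class MySqlDriver implements DatabaseDriver
  {
      private PDO $pdo;

      public function connect(string $dsn, string $user, string $password): void
      {
          $this->pdo = new PDO($dsn, $user, $password,
              [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);
      }

      public function query(string $sql, array $params = []): array
      {
          $stmt = $this->pdo->prepare($sql);
          $stmt->execute($params);
          return $stmt->fetchAll();
      }

      public function beginTransaction(): void { $this->pdo->beginTransaction(); }
      public function commit(): void           { $this->pdo->commit(); }
      public function rollBack(): void         { $this->pdo->rollBack(); }
  }

  // Application code depends only on the interface; swapping the database
  // engine means swapping the driver instance.
  $db = new MySqlDriver();
  $db->connect('mysql:host=localhost;dbname=appdb', 'user', 'secret');
  $rows = $db->query('SELECT id, name FROM organisation WHERE parent_id = ?', [1]);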

Object-relational mapping

Our object-relational mapping layer uses the database for persistence and models object-relational aspects at runtime, rather than mapping statically and generating code. We think that this approach has some advantages:

  • Referential integrity
    is built into the model. Foreign key relationships are automatically known to the model and do not require hand-coded rules or hints.
  • Data ownership and privacy
    are explicitly defined. The model is aware of them, provides role-based privileges, and enforces constraints.
  • Zero configuration
    Object-relational mapping is fully automated. It does not require any hand-coded configuration, but can nevertheless be sub-classed per application and table to allow fine-tuning and adaptation to different circumstances and requirements.
  • Data persistence
    is provided by the relational database management system. The ORM does not persist objects beyond the end of a transaction. Objects can be built on the fly for a very small performance penalty and there is no gain in persisting them.
  • Synchronisation
    Database schema changes only require a database re-discovery to update the metadata dictionary, but no re-generation and deployment of code for a possibly large number of applications and services.
  • Code quality
    Less code generally means fewer bugs. With this high level of code streamlining, sharing, and reuse, a bug is more pervasive and therefore more visible; once fixed, the overall quality of the code is higher than if every application object and user interface component had its own hand-coded logic.
  • Concise code base
    The framework code-base is concise. There is no redundant code, neither generated nor hand-coded.
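
To illustrate the idea, a generic record object driven by the metadata dictionary might be sketched like this; the Record class and the dictionary shape are assumptions for the example, not the framework's actual API:

  <?php
  // A single generic class maps any table at runtime; there are no generated
  // per-table classes. Table names come from the metadata dictionary, never
  // from user input.

  final class Record
  {
      public function __construct(
          private PDO $pdo,
          private array $meta,       // dictionary entry for one table
          private string $table,
          public array $values = []
      ) {}

      public static function find(PDO $pdo, array $dictionary, string $table, int $id): self
      {
          $stmt = $pdo->prepare("SELECT * FROM {$table} WHERE id = ?");
          $stmt->execute([$id]);
          return new self($pdo, $dictionary[$table], $table, $stmt->fetch() ?: []);
      }

      // Foreign keys recorded in the dictionary make related records reachable
      // without hand-coded mapping rules or hints.
      public function related(array $dictionary, string $column): self
      {
          foreach ($this->meta['foreign_keys'] ?? [] as $fk) {
              if ($fk['column'] === $column) {
                  return self::find($this->pdo, $dictionary,
                      $fk['referenced_table'], (int) $this->values[$column]);
              }
          }
          throw new InvalidArgumentException("No foreign key on {$column}");
      }
  }

  // Usage, given a $pdo connection and the $dictionary built by discovery:
  // $member = Record::find($pdo, $dictionary, 'member', 7);
  // $person = $member->related($dictionary, 'person_id');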

Relational modelling principles

The data model is normalised to fourth normal form. Normalisation comprises a set of data modelling techniques to simplify transaction processing logic and insert/update/delete operations, by eliminating data redundancies, embedded dependencies, and anomalies. Data redundancies are eliminated when no attribute that is not a primary key is repeated in the database, except to record historical data.

The data model adheres to industry standard normalisation rules:

  • All key attributes are defined, there are no repeating groups of attributes, and all attributes are dependent on a primary key
  • There are no partial dependencies, that is, no attribute is dependent on only a portion of a primary key
  • There are no transitive dependencies, that is, every determinant attribute is a candidate key, no non-prime attribute is dependent on another non-prime attribute, and no non-key attribute is the determinant of a key attribute
  • There are no multiple sets of multi-valued dependencies, that is, all attributes are dependent on a single-attribute primary key, but independent of each other

There are no identifying relationships. Every entity has its own intrinsic identity, represented by a single-attribute primary key. Foreign keys are never part of a primary key. This eliminates the mess associated with having to propagate compound keys to dependent entities, and the conflicts arising when associative entities inherit compound keys from two parents which both use the same foreign key attribute because both in turn depend on the same grandparent. The value of a primary key never changes. This eliminates the need to cascade primary key changes to foreign keys in all dependent tables, and the associated possibly high number of record locks required for this to succeed.
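
In schema terms, the keying convention could look like the following sketch, which borrows the organisation, person, and member tables described in the Data model section below; the column definitions are illustrative, and BIGINT columns stand in for the integer representation of universally unique identifiers described later:

  <?php
  // Every entity has its own single-attribute surrogate primary key, and
  // foreign keys are never part of a primary key.

  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret');

  $pdo->exec('CREATE TABLE organisation (
      id        BIGINT UNSIGNED NOT NULL,   -- intrinsic identity, never changes
      parent_id BIGINT UNSIGNED NULL,       -- self-reference, not part of the key
      name      VARCHAR(100)    NOT NULL,
      PRIMARY KEY (id),
      FOREIGN KEY (parent_id) REFERENCES organisation (id)
  )');

  $pdo->exec('CREATE TABLE person (
      id   BIGINT UNSIGNED NOT NULL,
      name VARCHAR(100)    NOT NULL,
      PRIMARY KEY (id)
  )');

  $pdo->exec('CREATE TABLE member (
      id              BIGINT UNSIGNED NOT NULL,   -- its own surrogate key
      organisation_id BIGINT UNSIGNED NOT NULL,   -- ordinary foreign key
      person_id       BIGINT UNSIGNED NOT NULL,   -- ordinary foreign key
      PRIMARY KEY (id),
      UNIQUE (organisation_id, person_id),
      FOREIGN KEY (organisation_id) REFERENCES organisation (id),
      FOREIGN KEY (person_id)       REFERENCES person (id)
  )');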

Relational database management

The power of the relational database management system and of the industry-standard query language is utilised for searching, filtering, sorting, grouping, validating, and manipulating data using transactional locking, all with a high level of accuracy, integrity, performance, scalability, and reliability.

Explicit database schema

Entities and their attributes, relationships, and dependencies are explicitly defined in a relational database schema. Integrity constraints imposed on the database define

  • entities by primary key,
  • referential integrity by foreign keys,
  • datatypes by domain,
  • and business rules.
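
A hypothetical invoice table shows how such constraints can be declared directly in the schema; the table itself is an illustration, not part of the actual schema, and CHECK constraints assume a database engine that enforces them:

  <?php
  // The primary key defines the entity, the foreign key enforces referential
  // integrity, column types and CHECK constraints define domains, and a
  // further CHECK constraint captures a simple business rule.

  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret');

  $pdo->exec("CREATE TABLE invoice (
      id              BIGINT UNSIGNED NOT NULL,
      organisation_id BIGINT UNSIGNED NOT NULL,
      status          VARCHAR(10)     NOT NULL,
      issued_on       DATE            NOT NULL,
      due_on          DATE            NOT NULL,
      amount          DECIMAL(12,2)   NOT NULL,
      PRIMARY KEY (id),                                           -- entity identity
      FOREIGN KEY (organisation_id) REFERENCES organisation (id), -- referential integrity
      CHECK (status IN ('draft', 'issued', 'paid')),              -- attribute domain
      CHECK (amount >= 0),                                        -- attribute domain
      CHECK (due_on >= issued_on)                                 -- business rule
  )");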

ACID model transactional properties

The ACID (Atomicity, Consistency, Isolation, Durability) model is a set of database design principles that emphasize aspects of reliability that are important for business data and mission-critical applications. Transactional properties guarantee that database transactions are processed reliably.

  • Atomicity
    Atomicity requires that each transaction is "all or nothing": if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible ("atomic"), and an aborted transaction does not happen.
  • Consistency
    The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including but not limited to constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors do not violate any defined rules.
  • Isolation
    The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially, i.e. one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method, the effects of an incomplete transaction may not even be visible to another transaction.
  • Durability
    Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.
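
A small example of atomicity in practice, using a made-up account table; either both statements take effect or neither does:

  <?php
  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret');
  $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

  try {
      $pdo->beginTransaction();

      $debit = $pdo->prepare('UPDATE account SET balance = balance - ? WHERE id = ?');
      $debit->execute([100, 1]);

      $credit = $pdo->prepare('UPDATE account SET balance = balance + ? WHERE id = ?');
      $credit->execute([100, 2]);

      $pdo->commit();    // durable once committed
  } catch (Throwable $e) {
      $pdo->rollBack();  // the database state is left unchanged
      throw $e;
  }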

Distributed server architecture and database replication

Our distributed server architecture is designed to support mobile work-groups and on-the-road teams that may not always be online. Mobile application servers, each with its own replicated database, provide these teams with a collaborative platform on a local network that allows them to work in isolation for periods of time, disconnected from the internet, and hence from the central server and database, yet completely self-sufficient. When a mobile server goes online again after having worked in isolation, it needs to synchronise with the central server and database, merge data, and refresh its local state. Conflicts that may have arisen when records with the same identifier were modified in different environments need to be resolved, either manually or automatically.
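
One simple automatic strategy, shown here only as an illustration, keeps the more recently modified version and flags the other for manual review; the updated_at column and the strategy itself are assumptions, not necessarily how our servers resolve conflicts:

  <?php
  // Both environments modified the record with the same identifier while the
  // mobile server was offline; decide which version wins the merge.

  function resolveConflict(array $local, array $central): array
  {
      $keepLocal = strtotime($local['updated_at']) >= strtotime($central['updated_at']);
      return [
          'winner'        => $keepLocal ? $local : $central,
          'loser'         => $keepLocal ? $central : $local,
          'manual_review' => $local !== $central,   // flag differing versions
      ];
  }

  $local   = ['id' => 7, 'status' => 'closed', 'updated_at' => '2024-05-02 10:15:00'];
  $central = ['id' => 7, 'status' => 'open',   'updated_at' => '2024-05-01 18:40:00'];

  $result = resolveConflict($local, $central);
  // The local version wins here; the central version is kept aside for review.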

Universally unique identifiers

Universal uniqueness is the prerequisite and enabling technology for replicating databases in a distributed environment. It allows two servers to replicate each other's changes to data, to add new records without risking collisions or identity clashes, and to modify or delete existing records.

Every primary key value is a system-generated integer representation of a universally unique identifier that is assigned by the database manager when a record is first inserted into a table.

Uniqueness is guaranteed universally at three levels:

  • locally within a table,
  • locally across the whole database,
  • globally across all databases in the realm.
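
For illustration, such an identifier could be generated along the following lines, here as a random (version 4) UUID with its 128-bit integer form computed via the GMP extension; the database manager's actual generation scheme may differ:

  <?php
  // Generate a universally unique identifier and its integer representation
  // for use as a primary key value.

  function uuidPrimaryKey(): array
  {
      $bytes = random_bytes(16);

      // Set the version (4) and variant bits of the UUID.
      $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40);
      $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80);

      $hex  = bin2hex($bytes);
      $uuid = substr($hex, 0, 8) . '-' . substr($hex, 8, 4) . '-' .
              substr($hex, 12, 4) . '-' . substr($hex, 16, 4) . '-' . substr($hex, 20);

      return [
          'uuid'    => $uuid,
          'integer' => gmp_strval(gmp_init($hex, 16), 10),   // 128-bit decimal string
      ];
  }

  $key = uuidPrimaryKey();
  // Because the integer is globally unique, records created on different
  // servers in the realm can later be merged without identity clashes.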

Integrated database

Our integrated database is a shared resource used by all applications and all organisations. It hosts data for all organisations and their members in a virtualised, securely partitioned container that protects confidential information from unauthorised access. Data is shared and synchronised in real-time across applications. Changes made by one application are instantly available to all other applications. Access to data is restricted by a user's role and position in the organisational domain. Access is granted on a 'need to know' basis to sufficiently authorised users.

The database schema integrates and caters for all applications' needs, and as such is more general and more complex than a stand-alone application database schema would be. Changes to the model are more complex because the various downstream dependencies need to be considered. The benefit of this approach, however, is that sharing data and transaction processing logic between applications does not require an extra layer of integration services.

Data model

Enterprise models are commonly based on industry-specific data models and culture-dependent ontologies. In contrast, our framework uses a single data model to describe all of its components in an abstract, generic way that is industry- and culture-agnostic. This approach to enterprise modelling reduces complexity and increases code reuse and maintainability.

At the heart of our data model are the organisation, person, and member tables, which are shared by all applications. Organisation is a single-inheritance self-referential tree structure that is flexible with respect to hierarchical chain of command and responsibility. Different organisational models are possible to support different hierarchical structures: from permanent, deeply nested traditional management hierarchies like 'head office, business units, departments, branch offices and outlets', to transient, flat, less hierarchical and more agile structures like 'teams and projects'.

The tree structure establishes hierarchical domains of responsibility: each position's influence extends downwards over the positions underneath it, but not sideways or upwards; a given position is in charge of subordinates, but not of peers or parents. A node in the organisation tree can have many children but only one parent. The tree structure also adapts well to changes in organisational structure or reporting lines, and can be rearranged quite easily through pruning and grafting: editing a given organisational node and choosing a different parent moves the entire branch hanging off that node to a different location in the tree. For historical reporting and data-warehousing applications, organisational change is often not quite as simple, as it presents the challenge of consolidating intermediate aggregated data from before and after the restructuring.
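
A simplified sketch of the self-referential structure and of the grafting operation; the helper functions are illustrative, and the organisation table follows the keying sketch given earlier:

  <?php
  // Every node has exactly one parent (NULL for the root); influence is
  // downwards only, so a position's branch is the set of its subordinates.

  $pdo = new PDO('mysql:host=localhost;dbname=appdb', 'user', 'secret',
                 [PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC]);

  function subordinates(PDO $pdo, int $organisationId): array
  {
      $stmt = $pdo->prepare('SELECT id, name FROM organisation WHERE parent_id = ?');
      $stmt->execute([$organisationId]);

      $branch = [];
      foreach ($stmt->fetchAll() as $child) {
          // Recurse into the child's own branch.
          $branch[] = $child + ['children' => subordinates($pdo, (int) $child['id'])];
      }
      return $branch;
  }

  // Grafting: choosing a different parent moves the entire branch with it.
  function reparent(PDO $pdo, int $organisationId, int $newParentId): void
  {
      $stmt = $pdo->prepare('UPDATE organisation SET parent_id = ? WHERE id = ?');
      $stmt->execute([$newParentId, $organisationId]);
  }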

Data Model Schema