1. Server Components

2. Connector Components


Server Components

Load Balancing Service

The software component nginx [1] is used as the load balancer. It receives all requests to the virtual server URL, which is usually the DNS name or hostname of the server. Depending on the requested URL, nginx routes each request to an instance of the Core Application or an instance of the Websocket Service. This allows the implementation of a High Availability (HA) cluster: individual server instances may be unavailable due to updates or maintenance while requests are still served by the remaining instances.
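
The routing can be illustrated with a minimal nginx configuration sketch. The upstream addresses, the server name, and the assumption that websocket traffic is distinguished by a dedicated path are placeholders for illustration and do not reflect the actual deployment values; TLS termination is omitted for brevity.

  # Illustrative nginx routing sketch (placeholder addresses and paths)
  upstream core_app      { server 127.0.0.1:3000; }   # Core Application instance
  upstream websocket_app { server 127.0.0.1:3001; }   # Websocket Service instance

  server {
    listen 80;
    server_name laboperator.example.com;   # virtual server URL (DNS or hostname)

    # Websocket traffic is upgraded and routed to the Websocket Service
    location /cable {
      proxy_pass http://websocket_app;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }

    # All other requests are routed to the Core Application
    location / {
      proxy_pass http://core_app;
      proxy_set_header Host $host;
    }
  }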


The Load Balancing Service can be deployed on a variety of operating systems and platforms.


Security Considerations

Nginx is an open source project published under the 2-clause BSD-like license. It receives regular patches and updates from a highly active community.


The Load Balancing Service, and nginx in particular, is subject to the normative patch schedule.

Configuration

As the server components are based on a modular architecture, they can be extracted individually and deployed on different server machines. This allows a high degree of flexibility with regard to both availability and scalability.

  • Configuration option 1: Everything on one server

The Load Balancing Service is installed together with the other services on a single server and acts as a pure gateway server to the underlying application.

  • Configuration option 2: Multiple servers, one acts as gateway

The Load Balancing Service runs on a separate server instance and forwards requests, based on a Round Robin scheme, to a list of available application servers connected via a private, local network (see the sketch below). To the outside, this appears as a single instance and URL endpoint. If an application server becomes unavailable, subsequent requests are rerouted transparently. Websocket connections need to be re-established, whereas regular web sessions continue and keep their valid session ticket.
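
For this configuration option, the upstream definition from the sketch above would list several application servers on the private network instead of a single local instance. Round Robin is nginx's default balancing strategy, and a server that stops responding is temporarily taken out of rotation; the addresses below are again placeholders.

  # Illustrative upstream for option 2: Round Robin over several application servers
  upstream core_app {
    server 10.0.0.11:3000;   # application server 1 (private network)
    server 10.0.0.12:3000;   # application server 2
    server 10.0.0.13:3000;   # application server 3
  }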


Core Application

The core application is a Ruby on Rails application that uses the thin web server as its runtime environment. Several worker threads serve different purposes, handling web requests and events while storing data in and retrieving data from the database. Every web page presented to the user is created by this web server.
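
For illustration, the Rack entry point of such a Rails application, as it would be served by thin, is sketched below; the port and environment are example values.

  # config.ru — standard Rack entry point of a Rails application
  require_relative "config/environment"
  run Rails.application

  # served by the thin web server, e.g.:
  #   bundle exec thin start -R config.ru -e production -p 3000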


Websocket Services

The Websocket service provides a real-time channel to the web browser, which is used to deliver data points and updates to the clients. Additionally, when events and messages are exchanged between the Connector Boxes, the Websocket Service converts them and bridges them to the local messaging service for further processing. The Websocket Service is also based on the thin web server technology.
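
The concrete implementation is not shown here; the following minimal Ruby sketch, assuming the faye-websocket and bunny gems and a hypothetical exchange name, illustrates the bridging principle of forwarding incoming websocket messages to the local messaging service.

  # websocket_bridge.ru — minimal bridging sketch, run under thin
  require "faye/websocket"
  require "bunny"

  Faye::WebSocket.load_adapter("thin")

  amqp     = Bunny.new("amqp://localhost").tap(&:start)
  exchange = amqp.create_channel.topic("connector.events")   # hypothetical exchange

  App = lambda do |env|
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env)

      # bridge every incoming websocket message onto the AMQP event bus
      ws.on :message do |event|
        exchange.publish(event.data, routing_key: "connector.message")
      end

      ws.rack_response
    else
      [426, { "Content-Type" => "text/plain" }, ["websocket endpoint"]]
    end
  end

  run App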


Background Services

In contrast to the web application, the background services are invoked at certain time intervals to process long-running tasks or queries, such as generating export files. They are based on Ruby on Rails.
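
As an illustration, such a task could be implemented as a Rake task running in the Rails environment and triggered at intervals, e.g. by cron; the task and model names below are hypothetical.

  # lib/tasks/exports.rake — hypothetical long-running export task
  namespace :exports do
    desc "Generate pending export files"
    task generate: :environment do
      Export.where(status: "pending").find_each do |export|   # hypothetical model
        export.generate_file!                                 # long-running work
      end
    end
  end

  # triggered at intervals, e.g. by cron:
  #   */15 * * * * cd /srv/laboperator && bundle exec rake exports:generate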


Messaging Service

The Messaging Service is an AMQP-based event queue that transports information between individual service processes and acts as the “messaging backbone” of the server architecture. RabbitMQ, an AMQP-compatible message queue written in Erlang, is used for this purpose.
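
For illustration, the bunny gem provides a Ruby client for such a queue; the queue name and payload below are placeholders.

  require "bunny"

  connection = Bunny.new("amqp://localhost")   # local instance or cluster endpoint
  connection.start

  channel = connection.create_channel
  queue   = channel.queue("laboperator.events", durable: true)   # placeholder name

  # consume events published by other service processes
  queue.subscribe do |_delivery_info, _properties, payload|
    puts "received: #{payload}"
  end

  # publish an event onto the messaging backbone
  channel.default_exchange.publish("device.updated", routing_key: queue.name)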


Configuration:


As RabbitMQ itself is highly scalable, it can run on an independent cluster. 

  • Configuration option 1: Everything on one server

The Messaging Service is installed together with the other services on a single server instance, and acts as a central event queue.

  • Configuration option 2: Cluster mode, on application servers

The Messaging Service is installed on every application server, and all servers are interlinked and synchronized. Each application connects to its local instance of the event bus.

  • Configuration option 3: Cluster mode, on separate servers

The Messaging Service is installed on a separate cluster environment, and all services connect to one central endpoint.


Cache Service

The Cache Service is a Redis cache. It stores information in memory and is considered volatile storage.
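
A minimal Ruby sketch using the redis gem illustrates this kind of volatile storage; the connection URL, key, and expiry are example values.

  require "redis"

  cache = Redis.new(url: "redis://localhost:6379/0")   # local or central endpoint

  # values live in memory only and can be given a time-to-live
  cache.set("session:abc123", "user-42", ex: 600)      # placeholder key and value
  cache.get("session:abc123")                          # => "user-42"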

Configuration:

  • Configuration option 1: Everything on one server

The Cache Service is installed together with the other services on a single server instance, and acts as a central store.

  • Configuration option 2: Cluster mode, on application servers

The Cache Service is installed on every application server, and all servers are interlinked and synchronized. Each application connects to its local instance of the cache service.


Database

The database engine is PostgreSQL, a relational database. It stores information persistently on disk.
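
For illustration, a Rails-style connection to this database could be established as sketched below; host, database name, and credentials are placeholders.

  require "active_record"

  ActiveRecord::Base.establish_connection(
    adapter:  "postgresql",
    host:     "db.internal",               # single server or cluster endpoint
    database: "laboperator_production",    # placeholder database name
    username: "laboperator",
    password: ENV["DB_PASSWORD"]
  )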

Configuration:

  • Configuration option 1: Everything on one server

The Database is installed together with the other services on a single server instance, and acts as a central store.

  • Configuration option 2: Cluster mode, on application servers

 The Database is installed on every application server, and all servers are interlinked and synchronized. Each application connects to its local instance of the database.

  • Configuration option 3: Cluster mode, on separate servers

The Database is installed on a separate cluster environment, and all services connect to one central endpoint.


Process Monitoring

The running processes on all server instances are monitored by systemd, a service daemon which automatically restarts processes in case of failure. All services are started automatically on system reboot.
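
A unit definition of roughly the following shape provides this behaviour; the service name, paths, and start command are placeholders.

  # /etc/systemd/system/laboperator-core.service — illustrative unit file
  [Unit]
  Description=Laboperator Core Application
  After=network.target

  [Service]
  WorkingDirectory=/srv/laboperator
  ExecStart=/usr/local/bin/bundle exec thin start -e production -p 3000
  # restart the process automatically in case of failure
  Restart=on-failure

  [Install]
  # start the service automatically on system boot
  WantedBy=multi-user.target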


Update Process

The operating system packages can be updated automatically from the package repositories. If the customer provides a managed package repository for Ubuntu, only selected updates can be applied upon customer request. The Laboperator server components can be updated via the remote maintenance channel, which requires web and SSH access to cloud-based repositories such as GitHub. Alternatively, updates can be applied as file-based in-place updates, which do not require downloads from external web resources.



Connector Components

The connector consists of a core process, an enrollment service, and one driver process for each device connected via the connector instance. This is usually the setup running on one physical Connector Box, but the same components can also run in a wrapper, for example on a Windows PC.

A port (not depicted in Figure 5) is a wrapper around a driver instance and is managed as part of the Connector Core. The difference between the concepts of a port and a driver is that the port represents the configured or detected source for a device, e.g. a USB port of the connector or a folder path. The driver contains the logic for communicating with such a port and is therefore assigned to a port, e.g. a SerialDriver that knows the baud rate of the connected device or a FolderWatcherDriver that knows which file types to read out.
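
The relationship can be illustrated with the following hypothetical Ruby sketch; the class names and attributes are illustrative only and do not reflect the actual connector code.

  # Hypothetical sketch of the port/driver relationship
  class Port
    attr_reader :source, :driver

    def initialize(source:, driver:)
      @source = source    # configured or detected source, e.g. a USB port or folder path
      @driver = driver    # driver assigned to this port
    end

    def read
      driver.read_from(source)
    end
  end

  class SerialDriver
    def initialize(baudrate:)
      @baudrate = baudrate          # communication parameter for the connected device
    end

    def read_from(device_path)
      # open the serial device with the configured baud rate ...
    end
  end

  class FolderWatcherDriver
    def initialize(file_types:)
      @file_types = file_types      # file types to read out
    end

    def read_from(folder_path)
      # pick up new files of the configured types ...
    end
  end

  Port.new(source: "/dev/ttyUSB0",  driver: SerialDriver.new(baudrate: 9600))
  Port.new(source: "/data/results", driver: FolderWatcherDriver.new(file_types: [".csv"]))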


First-time Connector Enrollment

When a new Connector Box is not shipped pre-provisioned with enrollment settings, an initial manual enrollment is required. For this, a USB stick with the enrollment settings file, which can be downloaded from the corresponding Laboperator server via a web browser, needs to be inserted after powering on the Connector Box. No further interaction is necessary, as all required settings are applied automatically.


Connector Enrollment

The enrollment information contains the server endpoint URL of the Laboperator server. At periodic intervals, the URL is queried and any new configuration parameters are applied. This happens via a regular SSL-encrypted HTTP request. If configuration changes are detected, the corresponding components are restarted.
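
The principle can be sketched in Ruby as follows; the endpoint path, polling interval, and the helper that applies the configuration are illustrative assumptions.

  require "net/http"
  require "json"

  # endpoint URL taken from the enrollment information (placeholder)
  ENDPOINT = URI("https://laboperator.example.com/connector/configuration")

  def apply_configuration(config)
    # compare with the currently stored configuration and
    # restart the affected components (implementation-specific)
  end

  current = nil

  loop do
    response = Net::HTTP.get_response(ENDPOINT)   # SSL-encrypted HTTP request
    config   = JSON.parse(response.body)

    if config != current
      current = config
      apply_configuration(config)
    end

    sleep 300   # illustrative polling interval
  end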


Connector Core

The Connector Core service establishes the web socket connection to the Laboperator server.

It manages the available USB ports; device drivers are applied automatically after a server-side pairing and device type definition process performed by the end users.


Process Monitoring

All components are monitored by systemd, which ensures an automatic restart in case of failure.


Connector Updater

The operating system packages can be updated either by writing a new binary image to the SD card or by triggering an automatic update routine, which requires access to a package repository in the customer’s network or on the internet.

The connector software (including enrollment and core) is updated automatically from the Laboperator server and does not require manual intervention. On the server, the desired target version of each Connector Box can be defined, so that version updates can be held back if this is required by the customer.