Running within the internal network, the connector provides secure access to internal systems that are not otherwise available from the public internet – for example database servers, internal APIs, HL7 endpoints, or network filesystems – without opening external firewall ports.
The connector works by opening an outgoing WebSocket connection to Concentric’s public cloud environment. This connection is secure, authenticated, long-lived, and bidirectional. The connector registers RPC handlers on connect, and will receive and respond to RPC requests over this connection. If the connection drops the connector will automatically reconnect after a short delay.
The WebSocket connection is authorised as follows:
- The connector initiates a WebSocket connection to Concentric’s GCP server, secured with TLS 1.2. The server’s TLS certificate is validated, thus authenticating its identity.
- The connector immediately sends an auth token over this connection. The server computes the SHA-256 hash of this token and compares it against its internal database, thus authenticating the connector. On authentication failure the connection is closed immediately by the server.
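The server-side check described above can be sketched as follows. The token value and variable names are illustrative; the point is that the server stores only the SHA-256 digest of the token, never the token itself, and compares digests in constant time.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// storedHash stands in for the server-side record: the SHA-256 digest of
// the connector's auth token. The plaintext token is never stored.
var storedHash = sha256.Sum256([]byte("example-token"))

// authenticate hashes the presented token and compares the digest to the
// stored one in constant time. On mismatch the server closes the
// connection immediately.
func authenticate(token string) bool {
	sum := sha256.Sum256([]byte(token))
	return subtle.ConstantTimeCompare(sum[:], storedHash[:]) == 1
}

func main() {
	fmt.Println(authenticate("example-token"))
	fmt.Println(authenticate("wrong-token"))
}
```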
An auth token will be securely provided during setup and must be kept secret. The server supports auth token rotation, so any local rotation requirements can be accommodated.
The connector should be run as a Windows service or Linux daemon.
The connector is extremely lightweight (~ 30MB RAM, minimal CPU load, and some disk space to store log output). It can run alongside other processes on an existing host, or a dedicated VM can be used.
If using a dedicated VM the following minimum specification is recommended:
- Windows: 1 CPU core, 2GB RAM, 32GB storage.
- Linux: 1 CPU core, 256MB RAM, 10GB storage.
Concentric runs separate demo and production environments. The demo environment is designed for testing purposes and must not be given access to real patient data. Separate auth tokens are used for each environment for security.
A separate connector is required for each environment, but they can co-exist on the same host with separate config files.
Scaling and redundancy
Each connector can handle multiple concurrent RPC calls which are multiplexed over a single connection. As a result it is not necessary or recommended to run multiple connectors for the same Concentric environment on the same host to achieve concurrency. A single connector will easily support the anticipated request volume for a large trust.
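The multiplexing described above follows a standard Go pattern: each request read from the single connection is handled in its own goroutine, so a slow call never blocks the others. The types and channel plumbing below are a simplified sketch, not the connector’s actual wire format.

```go
package main

import (
	"fmt"
	"sync"
)

// rpcRequest and rpcResponse are simplified stand-ins for frames
// multiplexed over the single WebSocket connection; ID ties a
// response back to its request.
type rpcRequest struct {
	ID     int
	Method string
}

type rpcResponse struct {
	ID     int
	Result string
}

// serveConnection reads requests from a single stream and handles each
// in its own goroutine, so concurrent calls don't block one another.
func serveConnection(in <-chan rpcRequest, out chan<- rpcResponse) {
	var wg sync.WaitGroup
	for req := range in {
		wg.Add(1)
		go func(r rpcRequest) {
			defer wg.Done()
			out <- rpcResponse{ID: r.ID, Result: "handled " + r.Method}
		}(req)
	}
	wg.Wait()
	close(out)
}

func main() {
	in := make(chan rpcRequest)
	out := make(chan rpcResponse)
	go serveConnection(in, out)
	go func() {
		for i := 1; i <= 3; i++ {
			in <- rpcRequest{ID: i, Method: "demographics.lookup"}
		}
		close(in)
	}()
	n := 0
	for range out {
		n++
	}
	fmt.Println("responses:", n)
}
```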
Redundancy can be achieved by running connectors on multiple hosts. Concentric will load balance requests to available connectors. In production we recommend running 2 connectors on physically separate servers.
Maintenance and monitoring
Concentric’s cloud environment monitors the status of each connector process and will alert the team if no active connector is available to handle requests.
When running multiple redundant connectors, server and connector upgrades should be staggered in order to avoid any downtime.
Bandwidth requirements for the connector are modest. Here is a back-of-the-envelope calculation:
- Periodic WebSocket ping-pong messages total less than 100MB per year and are ignored.
- Assume 10 demographic lookups per consent episode: 1KB (typical response) × 10 per episode = 10KB per episode.
- Assume 2 consent PDFs pushed to document integration: 100KB (typical PDF) × 2 (encoding overhead) × 2 per episode = 400KB per episode.
- Assume a total of 100,000 consent episodes per year: 410KB per episode × 100,000 episodes = 41GB per year.
To calculate a conservative peak bandwidth requirement, we assume that all traffic occurs on weekdays, compressed into a single ‘peak’ hour per day:
250 weekdays × 1 peak hour × 60 min × 60 sec = 900,000s ≈ 10^6 peak-seconds per year
41GB ÷ 10^6s = 41 KB/s
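The arithmetic above can be checked with a few lines of Go. The figures mirror the assumptions in the text (including the rounding of 900,000 peak-seconds to 10^6); the function name is illustrative.

```go
package main

import "fmt"

// bandwidth reproduces the back-of-the-envelope calculation from the
// text. All sizes are in KB unless noted.
func bandwidth() (perEpisodeKB, yearlyGB, peakKBps float64) {
	const (
		lookupKB        = 1.0    // typical demographic response
		lookupsPerEp    = 10.0   // lookups per consent episode
		pdfKB           = 100.0  // typical consent PDF
		encodingFactor  = 2.0    // encoding overhead
		pdfsPerEp       = 2.0    // PDFs per consent episode
		episodesPerYear = 100000.0
		peakSeconds     = 1e6 // 250 weekdays × 1 peak hour ≈ 10^6 s
	)
	perEpisodeKB = lookupKB*lookupsPerEp + pdfKB*encodingFactor*pdfsPerEp
	yearlyGB = perEpisodeKB * episodesPerYear / 1e6
	peakKBps = perEpisodeKB * episodesPerYear / peakSeconds
	return
}

func main() {
	perEp, yearly, peak := bandwidth()
	fmt.Printf("per episode: %.0f KB\n", perEp) // 410 KB
	fmt.Printf("per year: %.0f GB\n", yearly)   // 41 GB
	fmt.Printf("peak: %.0f KB/s\n", peak)       // 41 KB/s
}
```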
Implementation & source code
The connector is written in Go, which provides: simple compilation to Windows & Linux targets, zero-dependency static binaries, minimal resource consumption, type safety, and readable, auditable code.
The connector source code is typically forked for each Concentric deployment in order to implement the specific RPC methods required. The source code can be shared and binaries can be built locally if desired.