- Process logging and audit logs
- Data storage and tampering
- Secure application development practices
- Hosting and network diagram
- Server security
- Data deletion
- Resilience to failures
- Internal identity and access management
- Incident response process
Process logging and audit logs
Structured log data is written by application code and infrastructure services. Log data includes all API calls and includes the logged in user ID where applicable. Personal data should never be logged.
Logs are shipped to Cloud Logging, where they are stored for 30 days.
Audit logging of user behaviour is a separate application concern.
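The structured logging approach above can be sketched as follows. This is an illustrative sketch only; the function name and log fields are assumptions, not Concentric's actual schema. It shows one JSON log line per API call that carries the logged-in user ID but no personal data:

```python
import json
import logging
import sys

# Illustrative sketch: emit one structured (JSON) log line per API call,
# including the logged-in user ID but never personal data such as names.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
logger = logging.getLogger("api")

def log_api_call(method: str, path: str, user_id: str, status: int) -> str:
    """Build and emit a structured log entry for one API call."""
    entry = json.dumps({
        "event": "api_call",
        "method": method,
        "path": path,
        "user_id": user_id,  # identifier only -- no personal data is logged
        "status": status,
    })
    logger.info(entry)
    return entry

log_api_call("GET", "/episodes/123", "user-42", 200)
```

Emitting one JSON object per line keeps the entries machine-parseable once shipped to a log aggregator such as Cloud Logging.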
During deployment of Concentric, either an integrated Single Sign-On (SSO) approach is taken or Concentric accounts may be used.
Web-based approaches to single sign-on such as OpenID Connect are preferred and quicker to integrate, however other SSO systems can be supported.
Where Concentric accounts are used for authentication, users log in using an email address and password. The approach to password authentication was designed using industry best practice and NCSC guidance. In particular:
- A joiners/movers/leavers process should be defined as part of deploying Concentric.
- Admins can create, edit, and disable users using the admin interface. All editing activity is audit logged.
- Users can edit their password within Concentric, and reset a forgotten password via a link sent to their email address.
- Passwords must be at least 8 characters in length. There is no maximum length requirement.
- Regular password resets are not required.
- Protection against brute-force password guessing is provided by limiting authentication attempts to 10 in any 5-minute period; exceeding this rate limit triggers logging that allows further investigation.
- Passwords are stored in the database using a salted hash, and never stored or logged in plain text.
- Two-factor authentication (2FA) via OTP codes is enforced for admin-level access.
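The salted-hash password storage described above can be sketched with the Python standard library. This is a minimal sketch, not Concentric's implementation; PBKDF2 and the iteration count are assumptions chosen for illustration:

```python
import hashlib
import hmac
import os

# Illustrative sketch of salted password hashing and constant-time
# verification. A per-user random salt defeats precomputed rainbow tables;
# the iteration count (an assumption here) slows brute-force guessing.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```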
Patients are provided with access to their consent episode so that they may review the information discussed in their consent consultation, access linked additional information (e.g. patient information videos), keep a record of their legal consent, and optionally give consent remotely.
Patients enter or confirm their email address and/or mobile number as part of their consent consultation. If they choose not to share contact details, an option to print the episode details is presented.
Patients will receive an email and/or SMS containing an unguessable unique URL (generated using a secure random generator with in excess of 10^18 possible combinations). When patients access this link, they are required to enter their date of birth as a second authentication factor before their consent record may be viewed.
In order to ensure that the URL and consent details are kept secure: the URL is locked if a date of birth is entered incorrectly too many times, only TLS connections are accepted, browsers and intermediaries are requested not to cache, and outbound links do not reveal the full referrer URL.
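The patient-access flow above can be sketched as follows. This is a hedged illustration, not the production design: the token length and the lockout threshold are assumptions (the source states only "too many times"), chosen so the token space comfortably exceeds 10^18 combinations:

```python
import secrets

# Illustrative sketch of the patient link flow. The secrets module is a
# cryptographically secure generator; a 16-character alphanumeric token has
# 62**16 (~4.7e28) combinations, well in excess of 10**18.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
MAX_DOB_ATTEMPTS = 5  # assumption: the exact lockout threshold is not stated

def new_access_token(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

class ConsentLink:
    """A patient link that locks after repeated wrong dates of birth."""

    def __init__(self, dob: str):
        self.token = new_access_token()
        self._dob = dob
        self.failed_attempts = 0
        self.locked = False

    def check_dob(self, dob: str) -> bool:
        if self.locked:
            return False
        if secrets.compare_digest(dob, self._dob):
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_DOB_ATTEMPTS:
            self.locked = True  # URL is locked; requires manual investigation
        return False

assert 62 ** 16 > 10 ** 18
```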
Emails and SMS messages are sent using external services (Postmark for email, Twilio for SMS), and contain no special category data.
Data storage and tampering
Append-only data structures are used for data storage, allowing full audit tracking of changes. Protection against data tampering is provided by computing, at each mutation, a cryptographic hash that encodes the current and past state of a consent episode as a hash chain. This hash is shown on screen after consent is given, and included in the exported consent form PDF.
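The hash chain idea can be sketched as follows. This is a minimal illustration of the technique, assuming SHA-256 and a JSON encoding of each mutation; the actual encoding is not specified in the source:

```python
import hashlib
import json

# Illustrative hash chain over an append-only episode history: each
# mutation's hash commits to both the new state and every prior state,
# so tampering with any historical entry changes all subsequent hashes.
def chain_hash(previous_hash: str, mutation: dict) -> str:
    payload = previous_hash + json.dumps(mutation, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

GENESIS = "0" * 64
h1 = chain_hash(GENESIS, {"event": "episode_created"})
h2 = chain_hash(h1, {"event": "consent_given"})

# Recomputing the chain from the stored mutations reproduces h2;
# altering the first mutation produces a different final hash.
assert chain_hash(chain_hash(GENESIS, {"event": "episode_created"}),
                  {"event": "consent_given"}) == h2
assert chain_hash(chain_hash(GENESIS, {"event": "tampered"}),
                  {"event": "consent_given"}) != h2
```

Displaying the final hash to the patient gives them a compact value that any later tampering with the stored episode would invalidate.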
Secure application development practices
Software development takes place in feature branches (using Git), and pull requests are required before code may be merged (and considered for deployment). Our pull request process requires that code changes to each component are reviewed by a defined code owner, that the code compiles, and that all tests pass; changes are also subject to clinical safety approval by the Clinical Safety Officer (CSO) where appropriate.
Appropriate testing is embedded in the software development process rather than performed by a separate software testing function.
Deployments are linked to a Git SHA allowing full audit tracking of code changes, may only be initiated by defined individuals, and the process for deploying new code is integrated with the clinical safety process, overseen by the CSO.
The deployment system and data storage approach allow bad deployments to be rolled back to a known working version.
Concentric is hosted on Linux VMs which receive automatic patch updates.
Application code runs within containers which depend upon a small number of official base images. As part of our regular release process, containers are continually rebuilt using updated base images.
Automatic pull requests are created and reviewed for all application code dependency updates. Security updates are sent to designated individuals. Our policy is to deploy security-related updates within 2 weeks, or sooner if deemed necessary by our CTO.
Security vulnerabilities may be responsibly disclosed to firstname.lastname@example.org. Although there is no formal bug bounty programme, a bounty may be paid for high severity issues at our discretion.
Hosting and network diagram
Concentric’s cloud environment is hosted on Google Cloud Platform (GCP).
GCP’s Cloud load balancer provides HTTPS termination for end-user requests, and load balances these requests to backend instances. Traffic to backend instances is encrypted at the network level (see https://cloud.google.com/load-balancing/docs/ssl-certificates#backend-encryption for details). Any HTTP (port 80) requests received are redirected to port 443.
VM instances run multiple application processes and backend services which implement Concentric's consent platform; an internal reverse proxy handles routing. Backend servers are implemented according to zero-trust principles, which in practice means that all sensitive operations are authenticated using the logged-in user's token, are logged, and write audit logs where appropriate.
Various components require read and write database access. GCP's Cloud SQL service is used, which provides high availability via automatic failover, regular backups, and encryption of data at rest. Separate database credentials are used for each service, and communication with the database server is secured and encrypted using a Cloud SQL Proxy process (see https://cloud.google.com/sql/docs/postgres/sql-proxy for details).
GCP Cloud Storage is used for various purposes, including for long term retention of consent forms. Buckets are configured with retention policies which ensure that data cannot be deleted maliciously or accidentally.
Server security
Best practices, as per NHS Digital guidance, are followed with regard to access control management, log management, and patch management.
Managed Google Cloud Platform (GCP) services are used, including Cloud SQL, Cloud Storage, Cloud Logging, and Cloud Load Balancing; for these services, server security and encryption of data at rest are handled by GCP.
Data deletion
Concentric makes use of two technologies provided by GCP to store customer data at rest, Cloud SQL and Cloud Storage, which are ultimately responsible for the secure erasure of data. Google has published a whitepaper describing their approach.
At a high level, the approach is that all data is stored encrypted at rest; on deletion, the encryption keys are deleted first, rendering the data unreadable (cryptographic erasure), with the physical data later deleted and over time expired from backup systems. Additionally, drives are securely sanitised at end of life.
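Cryptographic erasure can be illustrated with a toy example. To keep this sketch standard-library-only, a SHA-256 counter keystream stands in for a real cipher (GCP uses AES; do not use this construction in production). The point is that once the key is destroyed, the stored ciphertext alone is unrecoverable:

```python
import hashlib
import secrets

# Toy stream "cipher" for illustration only: XOR the data against a
# SHA-256-derived keystream. Deleting the key is equivalent to deleting
# the data, because the ciphertext is unreadable without it.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, b"consent form contents")

assert keystream_xor(key, ciphertext) == b"consent form contents"
key = None  # "delete" the key: the ciphertext alone now reveals nothing
```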
Resilience to failures
With regards to fault tolerance, redundancy and availability guarantees:
- Service Level Objective (SLO): 99.95% (less than 4.38 hours per year of unavailability)
- Automatic failover configured to handle all server failures, which is designed to cause less than 5 minutes of unavailability
- System is designed to not need any scheduled maintenance
- Zero downtime deployments of new application code
- Designed to be resilient to a single datacenter failure within a region
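The SLO arithmetic above can be checked directly: 99.95% availability permits 0.05% of a year of downtime, which over a 365-day year is 4.38 hours:

```python
# Sanity check of the stated SLO: 99.95% availability allows 0.05% of a
# 365-day year (8,760 hours) of unavailability, i.e. 4.38 hours per year.
SLO = 0.9995
hours_per_year = 365 * 24
allowed_downtime_hours = (1 - SLO) * hours_per_year
assert round(allowed_downtime_hours, 2) == 4.38
```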
Data recovery processes are in place, in the unlikely event of total system failure:
- Database backups can be used in the case of total system failure. This scenario is not anticipated, and restoration would be a manual operation taken as a last resort.
- A configuration management system is used to configure all cloud services and hosts, allowing rapid total replacement of cloud infrastructure in the case of total failure.
Database backups are taken daily and stored for 30 days.
Internal identity and access management
All access to infrastructure is authenticated via two factor authentication and limited using GCP IAM policies, under the principle of least privilege. There is a defined joiners and leavers process.
Incident response process
Periodic monitoring of the system results in automatic notification of a human in the case of more than 5 minutes of system unavailability.
Tenants are provided with a company operational and technical contact for use in an emergency, with emergency support available 24/7.
Root cause analysis investigations are undertaken in response to failure.