On-Premise Setup

If you need to meet strict privacy or legal requirements, or you want a custom installation within your own infrastructure or any public cloud (AWS, Google Cloud, Azure, etc.), Heartex works on-premises. It is a self-contained version of the Platform that requires no Internet connection; no data leaves your infrastructure. To make installation as simple as possible, we distribute it as a Docker image.


The Heartex backend stack consists of multiple components, each deployed as an isolated Docker container. The main components are:



- The main backend server. All processes are managed by the supervisor, which takes care of:
  - the proxy web server for serving incoming requests and static files (JS/CSS scripts and media)
- PostgreSQL database as the principal data storage
- Redis server for keeping background jobs and temporary data

Machine learning backend

The machine learning (ML) backend is an orchestration of multiple ML servers, each dedicated to serving a single model. The web backend sends HTTP requests to the ML backend to get model predictions and retrieve updated model states.

The main components of each ML server are:

- Model server: serves the model for inference and updates. All processes are managed by the supervisor.
- Redis server for keeping background training jobs and training data

System requirements

Platform requirements

Your system must have the following installed:

Warning: The Docker host must have the vm.max_map_count kernel setting set to at least 262144. You can check the value by running sysctl vm.max_map_count. If it is too low, set it by running sudo sysctl -q -w vm.max_map_count=262144.
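Note that a sysctl -w change lasts only until reboot. On a standard Linux host you can additionally persist the setting, for example via /etc/sysctl.conf (a sketch; requires root privileges):

```shell
# Apply the setting for the running kernel (from the warning above)
sudo sysctl -q -w vm.max_map_count=262144

# Persist it across reboots by appending to /etc/sysctl.conf
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```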

Machine learning backends requirements

The machine learning backend comprises model training procedures, which are typically CPU- and memory-intensive, depending heavily on the chosen model. Each connected ML backend keeps several models in memory, depending on how many annotation projects are connected to it. Recommended resource requirements for running one ML backend serving one model are:


Platform deployment

Step 1: Pull the latest image

Your organization must be authorized to use Heartex images. Please contact us to receive an auth token.

1.1. Set up docker login:

docker login --username heartexlabs

You will be asked for a password; enter the token here. If you see Login Succeeded, the file ~/.docker/config.json has been created with your auth settings.

1.2. Pull the latest Heartex image:

docker pull heartexlabs/heartex:latest

Note: In some cases, you may need to run the command with sudo.

Step 2: Get the license file

You must obtain a license.txt file before running Docker. Please contact us if you don't have it yet. Create a working directory, e.g., heartex:

mkdir -p heartex
cd heartex

Be sure to store your license file at heartex/license.txt.
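A quick sanity check that the layout matches the steps above can save a failed container start; this is just a sketch using the paths from this section:

```shell
# Verify the expected layout: the license must be at heartex/license.txt
mkdir -p heartex
if [ -f heartex/license.txt ]; then
  echo "license found"
else
  echo "license missing: copy license.txt into heartex/ before starting"
fi
```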

Step 3: Quick start using docker-compose

Note: This step applies only if you plan to run the Platform in development mode. If you want to connect the Platform to external PostgreSQL and Redis servers, skip to the next step.

If you plan to run the Platform for development purposes, you can start it with local PostgreSQL and Redis servers.

3.1. Make sure the docker-compose command is installed on your system.

3.2. Create the configuration file heartex/config.yml with the following content:

version: '3'
services:
  db:
    image: postgres
    hostname: db
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./logs:/tmp
    ports:
      - 5432:5432
  heartex:
    image: heartexlabs/heartex:latest
    container_name: heartex
    volumes:
      - ./license.txt:/heartex/web/htx/settings/license_docker.txt
    environment:
      - HEARTEX_HOSTNAME=http://localhost:8080
      - POSTGRE_NAME=postgres
      - POSTGRE_USER=postgres
      - POSTGRE_PORT=5432
      - POSTGRE_HOST=db
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_DB=0
    command: ["./deploy/wait-for-postgres.sh", "db", "supervisord"]
    ports:
      - 8080:8080
    depends_on:
      - redis
      - db
    links:
      - redis
  redis:
    image: redis:alpine
    hostname: redis
    volumes:
      - "./redis-data:/data"
    ports:
      - 6379:6379

3.3. Start all servers using docker-compose:

docker-compose -f config.yml up

Note: Make sure no other services are running on ports 5432, 6379, and 8080. If they conflict, modify the ports: sections for db, heartex, or redis in config.yml.
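To spot conflicts before starting, you can check whether those ports are already taken. A small sketch for Linux using ss (the port list matches the defaults in config.yml):

```shell
# Report whether each default port is already bound on this host
for p in 5432 6379 8080; do
  if ss -ltn 2>/dev/null | grep -q ":$p "; then
    echo "port $p is in use"
  else
    echo "port $p is free"
  fi
done
```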

3.4. Open http://localhost:8080 in a browser.

Data persistence

When the Heartex server is running via docker-compose, all essential data is stored inside the containers.
The following local directories are mounted as container volumes to ensure data persistence:

- ./postgres-data for the PostgreSQL database
- ./redis-data for Redis data
- ./logs for logs

Keeping these folders intact ensures that your data is not lost even if you completely stop and remove all running containers and images.
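Since persistence relies on those host directories, a periodic archive of them doubles as a backup. A minimal sketch, assuming the default directory names from config.yml and that the containers are stopped:

```shell
# Archive the persisted directories from config.yml; mkdir -p only makes the
# sketch runnable if docker-compose has not created them yet
mkdir -p postgres-data redis-data logs
tar czf heartex-data-backup.tar.gz postgres-data redis-data logs
```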

Step 4: Start using Docker

If you are going to scale the Platform to a production deployment, you will probably need to link external databases and services. Below are hands-on steps for the most important settings:

4.1. Create the file heartex/env.list with the following environment variables:

# The main server URL (should be full path like protocol://host:port)

# Auxiliary hostname URL: some platform functionality requires URI generation with a specified hostname;
# if HEARTEX_HOSTNAME is not accessible from the server side, use this variable to specify the server host

# PostgreSQL database name

# PostgreSQL database user

# PostgreSQL database password

# PostgreSQL database host

# PostgreSQL database port

# PostgreSQL SSL mode (https://www.postgresql.org/docs/9.1/libpq-ssl.html)

# Redis server host

# Redis server port

# Redis database

# Redis password

# Redis default timeout

4.2. When all variables are set, run Docker, exposing port 8080:

docker run -d \
-p 8080:8080 \
--env-file env.list \
-v `pwd`/license.txt:/heartex/web/htx/settings/license_docker.txt \
-v `pwd`/logs:/tmp \
--name heartex \
heartexlabs/heartex:latest
Note: If you expose port 80, you need to start Docker with sudo.

Health check

You can check whether the Platform is available by sending a request to the /health URL:

$ curl http://localhost:8080/health
{"status": "UP"}
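If you script deployments, you may want to parse that response rather than eyeball it. A minimal sketch with a hypothetical is_up helper (not part of Heartex) that matches the JSON body shown above:

```shell
# Hypothetical helper: succeeds when a /health response body reports status UP
is_up() {
  echo "$1" | grep -q '"status": "UP"'
}

is_up '{"status": "UP"}' && echo "Platform is UP"
# With a live server you would feed it the real response:
#   is_up "$(curl -s http://localhost:8080/health)" && echo "Platform is UP"
```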

You can also access the metrics page; it is empty and returns a 200 status code if everything is OK:

$ curl http://localhost:8080/metrics

Updating server

Getting docker version

To check the version of the Heartex Platform, run docker ps on the host.

Run the following command as root or by using sudo:

$ docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                    NAMES
b1dd57a685fb        heartexlabs/heartex:latest   "./deploy/start.sh"      36 minutes ago      Up 36 minutes       0.0.0.0:8080->8000/tcp   heartex

The Docker image version is visible in the IMAGE column; e.g., heartexlabs/heartex:latest is the image with the latest tag.

Creating a backup

Create a backup of the current container so that you can recover if the update procedure does not complete successfully, or if you decide to roll back your Heartex server.

The docker stop command stops the currently running heartex container:

docker stop heartex

The following command renames the current heartex container to avoid name conflicts during the update procedure:

docker rename heartex heartex-backup

Pulling a new image

docker pull heartexlabs/heartex:latest

Updating current container

docker run -d \
-p $EXPOSE_PORT:8080 \
-v `pwd`/license.txt:/heartex/web/htx/settings/license_docker.txt \
-v `pwd`/logs:/tmp \
--name heartex \
heartexlabs/heartex:latest
Restoring the previous version

If, for whatever reason, you decide to keep using the old version, just stop and remove the new heartex container:

docker stop heartex && docker rm heartex

Now rename heartex-backup back to heartex and start it:

docker rename heartex-backup heartex
docker start heartex

Machine learning backends deployment

The machine learning (ML) backend is distributed as a Docker container. The web backend communicates with the ML backend via an HTTP REST API.
You can install multiple ML backends on the same server or on remote servers, then connect them to the web backend by creating an ML backend on the web backend admin page.

The following ML backends are available:

To install any of these models, run the Docker container with the specified name, Redis host, and queue name:

docker run -d -p 9090:9090 -e REDIS_HOST= -e RQ_QUEUE_NAME=tfhub-text-classifier --name ml-backend heartexlabs/tfhub-text-classifier:latest

The ML backend server starts listening at http://localhost:9090. Use this URL to connect the ML backend.