Step 3: Running the DAECOS Predictor
The DAECOS Predictor ships as a single container that bundles the UI, API, and DB servers. For resiliency, we recommend keeping the DB data folders outside the container by pointing the DB_DIR environment variable at a host directory.
Configuration
Environment Variables
The application needs to be configured using the following environment variables:
| Variable | Description | Required |
|---|---|---|
| EPIC_CLIENT_ID | EPIC client ID | Yes |
| EPIC_BULK_GROUP_ID | Bulk group ID for the background job | Yes |
| EPIC_TOKEN_URL | EPIC token endpoint | Yes |
| EPIC_FHIR_URL | FHIR endpoint | Yes |
| EPIC_CRON_EXPRESSION | Cron expression that sets how often the background job runs | Yes |
| DB_DIR | Host folder to mount for DB files | Yes |
| WATCH_DIR | Host folder to mount as the watch folder | No |
| LOG_DIR | Host folder to mount for logs | Yes |
| KEYS_FILE | Path to the JWKS keys.json file on the host | Yes |
| PORT | Host port for the local app | Yes |
Using Environment Files
Create a .env file on the host OS. Here’s a sample file to get you started:
# EPIC settings
EPIC_CLIENT_ID='epic-client-id'
EPIC_BULK_GROUP_ID='epic-bulk-group-id'
EPIC_TOKEN_URL='https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token'
EPIC_FHIR_URL='https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR'
# Background cron job time
EPIC_CRON_EXPRESSION="0 */6 * * *" # Runs at minute 0 of every 6th hour (i.e., every 6 hours)
# Docker host settings
DB_DIR=./daecos-db-dir
WATCH_DIR=./daecos-watch-dir
LOG_DIR=./daecos-log-dir
KEYS_FILE=./keys.json
PORT=3211
# Your organization name in lowercase
ORG_NAME=orgname
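The folder paths in the sample are relative to the directory that holds the .env file. If you keep the sample values (the names are just examples, not requirements), you can create the bind-mount sources up front:
# Create the host folders referenced in .env (sample values shown)
mkdir -p ./daecos-db-dir ./daecos-watch-dir ./daecos-log-dir
# keys.json should already exist at the KEYS_FILE path before the first start
ls -l ./keys.json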
DB_DIR Permissions
The DAECOS database is powered by Postgres. We recommend placing the DB_DIR folder on the host machine. Note: the host folder must be owned by the same user, and carry the same permissions, as the mapped data folder inside the container. To set this up, do the following:
Permissions on Linux
# Add a new user named daecos on HOST OS
sudo adduser daecos
# Change password for new user daecos on HOST OS
sudo passwd daecos
# Change owner on DB_DIR
sudo chown daecos:daecos [daecos-db-dir | host-path-to-daecos-db-dir]
# Change permissions on DB_DIR
sudo chmod 700 [daecos-db-dir | host-path-to-daecos-db-dir]
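To confirm that the ownership and mode set above took effect (daecos-db-dir here stands in for whichever host path you chose):
# Expect drwx------ with owner daecos and group daecos
ls -ld daecos-db-dir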
Make sure you run your docker compose commands (below) from the same location as the .env file.
Docker Compose Configuration
For easier management, you can use Docker Compose. Create the following docker-compose.yml file:
version: '3.8'
services:
  daecos-app:
    image: 713377909722.dkr.ecr.us-east-1.amazonaws.com/daecos-predictor/${ORG_NAME}:latest
    env_file: .env # create the .env file on the host
    container_name: ${CONTAINER_NAME:-daecos-predictor}
    volumes:
      - ${DB_DIR}:/var/lib/postgresql/data # Bind mount for the PostgreSQL data directory
      - ${WATCH_DIR}:/tmp/watch-dir # Bind mount for the watch folder (incoming files)
      - ${LOG_DIR}:/daecos-log-dir
      - ${KEYS_FILE}:/app/epic-backend-app/keys.json # keys.json file for epic-backend-app
    ports:
      - "${PORT}:3210" # Map host port to container port 3210
    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 3210 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "2"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: ${CPU_LIMIT:-2}
          memory: ${MEM_LIMIT:-2G}
        reservations:
          memory: ${MEM_RESERVATION:-1G}
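If you want to confirm that your .env values are substituted the way you expect, docker-compose can print the fully resolved file; this is standard Compose behaviour, not something DAECOS-specific:
# Print the resolved compose file with .env values substituted
docker-compose config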
Then run the following:
# Start the application
docker-compose up -d
Verification
After starting the application, verify everything is working:
Check container status
docker-compose ps
View logs
docker-compose logs -f daecos-app
Test the health endpoint (use the host PORT from your .env; 3211 in the sample)
curl http://localhost:3211/health
Expected response:
{ "status": "healthy", "timestamp": "2025-08-10T12:00:00Z", "version": "1.0.0" }
Common Commands
# Stop the application
docker-compose down
# View real-time logs
docker-compose logs -f
# Execute commands in the running container (use the compose service name, daecos-app)
docker-compose exec daecos-app bash
# Restart a specific service
docker-compose restart daecos-app
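A related task not covered above is upgrading to a newer image; the usual Compose workflow (sketched here, assuming your ECR login is still valid) is to pull and recreate:
# Pull the latest image from ECR and recreate the container
docker-compose pull
docker-compose up -d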