Run multiple instances of heavydb on one server

Comments

1 comment

  • Official comment
    Neill Lewis

    Hi George,

    In general, yes, it's possible to run multiple instances of HEAVY.AI (regardless of edition) on the same machine. At a technical level, what's needed to make this work is avoiding port conflicts. This can be done in one of two ways.

    1. Use a Docker-based deployment (orchestrated through either docker run or docker-compose) to map the internal ports used by heavydb within each container to different values at the host level. This is the easiest approach and the one I would strongly recommend.
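    As an illustration, a second instance might map the default ports to non-conflicting host ports roughly like this. This is a sketch, not a definitive configuration: the image name, volume path, and host port values (offset by 10000) are assumptions to adapt for your edition and version.

    ```yaml
    # docker-compose.yml sketch for a second HeavyDB instance (hypothetical values)
    services:
      heavydb2:
        image: heavyai/heavyai-ee-cuda:latest   # assumed image name; use your edition's image
        volumes:
          - /var/lib/heavyai2:/var/lib/heavyai  # each instance needs its own storage directory
        ports:
          - "16274:6274"   # binary port
          - "16278:6278"   # http port
          - "16276:6276"   # binary-over-http port
          - "16273:6273"   # Immerse http access (Enterprise Edition only)
    ```

    The first instance can keep the default 1:1 port mappings; only the additional instances need remapping.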

    2. Alternatively, if you'd like to run both instances directly on the host operating system ("bare metal"), you would need to adjust the ports used by each environment. Specifically, you'll need to change the binary port (6274), HTTP port (6278), binary-over-HTTP port (6276), and Calcite port (6279) on the 2nd through Nth deployments of HeavyDB. If those deployments run the Enterprise Edition, you'll also have to adjust the port for Immerse HTTP access (6273). If there are multiple HeavyIQ deployments, let us know so that we can provide additional guidance on this topic (and again, Docker is highly recommended when deploying HeavyIQ as well).
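    For a bare-metal second instance, those ports correspond to entries in that instance's heavy.conf. A sketch with hypothetical values offset by 10000; the exact parameter names (especially the binary-over-HTTP one) may vary by release, so check the configuration reference for your version:

    ```
    # heavy.conf for the 2nd instance (hypothetical port values)
    port = 16274              # binary port (default 6274)
    http-port = 16278         # HTTP port (default 6278)
    http-binary-port = 16276  # binary-over-HTTP port (default 6276); name assumed
    calcite-port = 16279      # Calcite port (default 6279)
    data = "/var/lib/heavyai2/storage"  # separate data directory per instance

    [web]
    port = 16273              # Immerse HTTP access (Enterprise Edition, default 6273)
    ```

    Note that each instance also needs its own data directory; two instances must never share one storage path.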

    In terms of constraining resources for each deployment, the options are ultimately limited beyond managing GPU access, which is why running a single instance of HEAVY.AI is recommended for all production deployments. GPU resources can, however, be constrained per instance with the num-gpus and start-gpu parameters.
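    For example, on a hypothetical 4-GPU server, each instance could be pinned to its own pair of GPUs through the num-gpus and start-gpu parameters in its respective heavy.conf:

    ```
    # heavy.conf for instance 1: use 2 GPUs starting at GPU 0 (GPUs 0-1)
    num-gpus = 2
    start-gpu = 0

    # heavy.conf for instance 2: use 2 GPUs starting at GPU 2 (GPUs 2-3)
    num-gpus = 2
    start-gpu = 2
    ```

    This only partitions GPU access; CPU and host memory are still shared between the instances.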

    I hope this information is helpful, let us know if you have questions.

    Thanks,
    Neill

