Run multiple instances of heavydb on one server
Is it possible to run an instance of HEAVY.AI Enterprise Edition alongside an instance of the HEAVY.AI open-source edition, or just multiple instances of HeavyDB?
How would you configure this setup?
For a traditional RDBMS, these are common arguments for running multiple instances:
- Resource Isolation: Each instance can be configured to use a specific amount of CPU, memory, and disk resources. This prevents one database from consuming all the resources and affecting the performance of other databases.
- Security and Access Control: Different instances can be configured with distinct security settings and access controls, providing better isolation and reducing the risk of unauthorized access.
- Testing and Development: Running multiple instances allows for separate environments for development, testing, and production. This ensures that changes can be tested in isolation before being deployed to the live environment.
- High Availability and Redundancy: Multiple instances can be set up in a master-slave or master-master replication configuration to provide high availability and redundancy. If one instance fails, another can take over, minimizing downtime.
- Version Management: Different instances can run different versions of MariaDB, allowing for compatibility testing and gradual upgrades without affecting all databases simultaneously.
- Performance Tuning: Instances can be individually tuned for specific workloads, optimizing performance based on the needs of each database application.
- Data Segregation: Keeping different types of data in separate instances can help manage and organize data more efficiently, making backup and recovery processes easier.
- Custom Configuration: Each instance can have its own configuration settings, such as buffer sizes, cache settings, and query optimizations, tailored to the specific requirements of the databases running on that instance.
Official comment
Hi George,
In general, yes, it's possible to run multiple instances of HEAVY.AI (regardless of edition) on the same machine. At a technical level, what's needed to make this work is to avoid port conflicts. This can be done in one of two ways.
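To make the port-conflict issue concrete, here's a small sketch (generic Python standard-library code, not HEAVY.AI tooling) that checks whether HeavyDB's default ports are already taken before you bring up a second instance:

```python
import socket

# HeavyDB's default ports (binary, HTTP, binary-over-HTTP, Calcite)
DEFAULT_PORTS = {
    "binary": 6274,
    "http": 6278,
    "binary-over-http": 6276,
    "calcite": 6279,
}

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

conflicts = [name for name, port in DEFAULT_PORTS.items() if port_in_use(port)]
if conflicts:
    print(f"Ports already in use: {conflicts} -- remap them for the second instance")
```

If any of the defaults are taken, remap them for the second instance using one of the two approaches below.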
1. Use a Docker-based deployment (orchestrated through either docker run or docker-compose) to map the internal ports used by heavydb within the container to different values at the host level. This is the easiest approach and the one I would strongly recommend.
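As a sketch of this approach (the image name and host-side port numbers below are illustrative, check the HEAVY.AI documentation for the exact image for your edition), the second instance simply remaps the container's default ports to unused host ports:

```shell
# First instance: host ports match the container defaults
docker run -d --name heavydb-1 \
  -p 6274:6274 -p 6276:6276 -p 6278:6278 \
  -v /var/lib/heavyai1:/var/lib/heavyai \
  heavyai/heavydb-image   # hypothetical image name

# Second instance: shift host ports by 1000; container ports stay at defaults
docker run -d --name heavydb-2 \
  -p 7274:6274 -p 7276:6276 -p 7278:6278 \
  -v /var/lib/heavyai2:/var/lib/heavyai \
  heavyai/heavydb-image   # hypothetical image name
```

Note that each container also gets its own host-side data directory, so the two instances never share storage.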
2. Alternatively, if you'd like to run both instances at the host operating system level ("bare metal"), then you would need to adjust the ports used by each environment. Specifically, you'll need to change the binary port (6274), HTTP port (6278), binary-over-HTTP port (6276), and Calcite port (6279) on the second through Nth deployments of HeavyDB. If the second through Nth environments run the Enterprise Edition, you'll also have to adjust the port for Immerse HTTP access (6273). If there are multiple HeavyIQ deployments, let us know so that we can provide additional guidance on this topic; in any case, Docker is highly recommended when deploying HeavyIQ as well.
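For the bare-metal route, the port overrides for a second instance would typically live in that instance's heavy.conf. A sketch of what this could look like (the parameter names below are my best recollection of the HeavyDB config format and the port values are arbitrary free ports, so verify both against the documentation for your version):

```
# heavy.conf for the second bare-metal instance
port = 7274                 # binary port (default 6274)
http-port = 7278            # HTTP port (default 6278)
http-binary-port = 7276     # binary-over-HTTP port (default 6276)
calcite-port = 7279         # Calcite port (default 6279)
data = "/var/lib/heavyai2"  # each instance needs its own data directory
```

Start each instance pointing at its own config file so the settings don't collide.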
In terms of constraining resources for each deployment, the options are ultimately limited beyond managing GPU access, which is why running a single instance of HEAVY.AI is recommended for all production deployments. GPU resources can, however, be constrained per instance with the num-gpus and start-gpu parameters.
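For example, on an 8-GPU machine you could partition the GPUs between two instances with those two parameters (the split below is just illustrative):

```
# heavy.conf, instance 1: use GPUs 0-3
num-gpus = 4
start-gpu = 0

# heavy.conf, instance 2: use GPUs 4-7
num-gpus = 4
start-gpu = 4
```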
I hope this information is helpful, let us know if you have questions.
Thanks,
Neill