This typically occurs when the system cannot keep the entire working set of columns in GPU memory.
HEAVY.AI provides two options when your system does not have enough GPU memory available to meet the requirements for executing a query.
The first option is to turn off the watchdog (`--enable_watch_dog=0`). This allows the query to run in stages on the GPU: HEAVY.AI orchestrates the transfer of data through its layers of memory abstraction and onto the GPU for execution. See Advanced Configuration Flags for HEAVY.AI Server.
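If you manage the server through its configuration file rather than command-line flags, the watchdog switch might look like the following. This is a hypothetical `heavy.conf` excerpt; the option spelling follows this article, so verify it against the configuration reference for your HEAVY.AI version.

```
# heavy.conf (hypothetical excerpt)
# Disable the watchdog so large queries can run in stages on the GPU.
enable_watch_dog = 0
```

Restart the server after changing the configuration file for the setting to take effect.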
The second option is to set `--allow-cpu-retry`. If a query does not fit in GPU memory, it falls back to the CPU and executes there. See Configuration Flags for HEAVY.AI Server.
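CPU retry can be enabled the same way. A hypothetical `heavy.conf` excerpt (again, confirm the exact option name against the configuration reference for your version):

```
# heavy.conf (hypothetical excerpt)
# Retry on CPU when a query does not fit in GPU memory.
allow-cpu-retry = true
```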
HEAVY.AI is an in-memory database. If your common use case exhausts the capabilities of the VRAM on the available GPUs, try re-estimating the scale of the implementation required to meet your needs.
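When re-estimating scale, a rough rule of thumb is that a query's GPU working set is about the row count times the total width in bytes of the columns the query touches. A minimal sketch of that arithmetic (the row count and column widths below are illustrative assumptions, not figures from this article):

```python
# Rough GPU working-set estimate: rows * sum of touched column widths.
def working_set_bytes(row_count, column_widths):
    """Approximate bytes needed to hold the columns a query touches."""
    return row_count * sum(column_widths)

# Example: 2 billion rows, with the query touching a BIGINT (8 B),
# an INT (4 B), and a dictionary-encoded TEXT column (4 B).
# 2e9 * 16 B = 32 GB, which exceeds a single 24 GB GPU, so such a
# query would need the watchdog disabled, CPU retry, or more GPUs.
gb = working_set_bytes(2_000_000_000, [8, 4, 4]) / 1e9
print(f"{gb:.0f} GB")  # prints "32 GB"
```

If the estimate routinely exceeds the VRAM of your GPUs, that is the signal to scale up or out as described below.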
HEAVY.AI can scale across multiple GPUs in a single machine: up to 20 physical GPUs (the most HEAVY.AI has found in one machine), and up to 64 logical GPUs using GPU virtualization tools such as Bitfusion Flex. HEAVY.AI can also scale across multiple machines in a distributed configuration, allowing for many servers, each with many cards. The operational data size limit is therefore very flexible.