Comments

8 comments

  • Candido Dessanti

    Hi @Xiaokang_Fu,

    On Windows 10 I succeeded in installing heavyai using WSL2 with the supplied Ubuntu 20.04 image and Docker Desktop, getting the server to run in both CPU and GPU mode, but without rendering (WSL supports only DirectX while we use Vulkan, so it's almost impossible to get it running right now; I tried compiling a branch with Dozen, but with no luck).
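
    In case it helps, the Docker route in WSL2 boils down to something like the sketch below; the image name and the storage path inside the container are from memory, so double-check them against Docker Hub and the current docs before using it:

        # CPU-only, open-source image (verify the image name/tag on Docker Hub)
        docker run -d --name heavydb \
          -p 6273-6280:6273-6280 \
          -v $HOME/heavyai-storage:/var/lib/heavyai \
          heavyai/core-os-cpu

        # GPU variant: add --gpus=all and switch to a CUDA-enabled image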

    The CUDA install on Windows 10 is quite complicated because CUDA in WSL2 is supported only on a developer ring. If you really need it, I can write a guide to help you run the server on Windows, but I don't think it's worth it, because it's a hack and quite involved.

    On Arch Linux, I haven't tried yet. Do you plan a CPU or a GPU installation? And in the case of GPU, which hardware are you going to use?

    I'd follow the Ubuntu install guide, install the equivalent packages on Arch Linux, and then proceed with a regular tarball install. As for the graphics driver, there are plenty of guides out there.
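
    As a rough outline (the binary names changed between releases, e.g. initdb/startomnisci became initheavy/startheavy, so check the bin directory of the tarball you actually download):

        # Arch prerequisites: a headless JRE for Calcite, plus the Nvidia driver for GPU mode
        sudo pacman -S --needed jre-openjdk-headless

        # unpack the tarball (grab the real URL/filename from the downloads page)
        sudo mkdir -p /opt/heavyai /var/lib/heavyai/storage
        sudo tar -xf heavyai-*-Linux-x86_64*.tar.gz -C /opt/heavyai --strip-components=1

        # initialize the storage directory and start the server
        cd /opt/heavyai
        sudo ./bin/initheavy /var/lib/heavyai/storage
        sudo ./startheavy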

    Regards, Candido

  • Xiaokang Fu

    Hi, I was trying to install heavyai on my laptop, which runs Manjaro. I ended up reinstalling Ubuntu and installed heavyai successfully. But my laptop's GPU only has 2 GB of RAM, and when I build the dashboard it tells me the memory is full. The desktop in my lab runs Windows, and I can't change the system; it has a 5 GB GPU (Quadro P2200). My dataset is 32 GB!! Maybe it's a bad idea to use a GPU database on my laptop or desktop?

  • Candido Dessanti

    Do you use the renderer within Immerse, or are you just using the database?

    One option would be to reserve some memory for rendering and let the memory manager decide which queries run on the GPU and which on the CPU; alternatively, you can force query execution onto the CPU and use the GPU for rendering only.

    There are also some configuration options to stream chunks of data from system to GPU memory for GPU execution, but I am not sure how the performance would be.
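
    Concretely, the two setups above map to a couple of heavy.conf switches; this is just a sketch and the values are placeholders, so check the configuration reference for your version:

        # option 1: keep queries on the GPU when they fit, but set memory aside for the renderer
        rendering = true
        render-mem-bytes = 1000000000   # ~1 GB reserved for rendering (example value)

        # option 2: force all query execution onto the CPU, use the GPU only for rendering
        # cpu-only = true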

    Which GPU do you have in your laptop?


  • Xiaokang Fu

    Nvidia GeForce MX350. I think I need to use the renderer within Immerse. So with the renderer, it must use the GPU, right? How much RAM does it need, and how do I configure it? Is the renderer within Immerse using the same technology as datashader?

  • Xiaokang Fu

    Also, can I install it on an M1 MacBook? I have a MacBook that may have more GPU RAM.

  • Candido Dessanti

    Hi,

    for real-time rendering within a dashboard a GPU is required, while the queries can run on either the CPU or the GPU; the best option is to run both queries and rendering on the GPU, but splitting them, CPU for queries and GPU for rendering, is also viable.

    The technology differs from datashader: we use something developed in-house that uses Vulkan as the render API and is driven through the Vega API.

    In this example, the first three charts are rendered with the GPU, while every query is run on the CPU: [dashboard screenshot]

    To get this, you have to download and install a package that supports rendering; for example:

    https://docs.heavy.ai/installation-and-configuration/installation/installing-on-ubuntu/centos-yum-gpu-ee#installing-with-apt

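
    Roughly, once the HeavyAI repository from the page above is configured, the install itself comes down to something like this (the package name is per the linked guide, so verify it there):

        sudo apt update
        sudo apt install heavyai   # EE build with rendering support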

    Then, after you have activated the product, change the server configuration by adding

    rendering=true
    cpu-only=true
    

    to the /var/lib/heavyai/heavy.conf file

    port = 6274
    http-port = 6278
    calcite-port = 6279
    data = "/var/lib/heavyai/storage"
    null-div-by-zero = true
    rendering=true
    cpu-only=true
    
    [web]
    port = 6273
    frontend = "/opt/mapd/heavyai-ee-6.1.0-20220706-e4d6e61b20-Linux-x86_64-render/frontend"
    servers-json="/opt/mapd/servers.json"
    

    then restart the server with the command

    sudo systemctl restart heavydb

    From now on your server will run the queries on the CPU while using the GPU to render pointmaps, scatterplots, line charts, and so on.
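
    If you want to verify where things are running, you can connect with heavysql and use a couple of its backslash commands (assuming the default admin user and password of a fresh install):

        heavysql -p HyperInteractive    # default password; change it in production
        heavysql> \status               # server version and uptime
        heavysql> \memory_summary       # CPU and GPU buffer pool usage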

    About the M1: we don't support our product on Mac anymore. We had a version of OmniSci running on x86 Macs in the past, but it didn't use the GPU, because our software doesn't use GPUs other than Nvidia's.

    Let me know how it goes. Best regards, Candido

  • Xiaokang Fu

    It worked! So how much data can it handle with this setting?

  • Candido Dessanti

    Hi @Xiaokang_Fu

    It depends on how much data is needed by the biggest query; by default the server uses 70% of the memory as a data cache, but some of that memory is also used for grouping or joining data. Anyway, when that memory is full and other data is needed, the least-used cached chunks are discarded to accommodate the new ones; it's a process called eviction, and you should see it in the logs when it happens. So in theory you can handle more data than 70% of the RAM installed in your system, but it's better if everything needed by a dashboard (or an application) fits in the cache.
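
    As a back-of-the-envelope example (the 32 GB of system RAM is just an assumed figure, and cpu-buffer-mem-bytes is the flag I'd use if you ever want to override the default cache size):

        # with 32 GB of system RAM, the default CPU data cache is roughly
        #   0.70 * 32 GB ≈ 22 GB
        # so a 32 GB dataset can't be cached entirely, and chunks will be
        # evicted and re-read as your dashboards touch different columns.

        # optional heavy.conf override, in bytes (e.g. cap the cache at 24 GB):
        # cpu-buffer-mem-bytes = 25769803776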

