Comments

12 comments

  • Candido Dessanti

    Hi @Brian_Russo,

    There are different ways to add a choropleth map to Immerse, depending on which edition of our suite you are running and on which hardware.

    If you are on a CPU-only edition, such as the Mac build or the Free Edition on hardware lacking an Nvidia GPU, you can follow this little tutorial I wrote to help a user add a new GeoJSON to the maps available for browser rendering; it requires adding and changing files inside the OmniSci installation:

    https://community.heavy.ai/t/maps-on-the-mac/2479/4?u=candido.dessanti

    Otherwise, if you are using server-side rendering, you have to import a geo table, following this:

    https://docs-new.omnisci.com/loading-and-exporting-data/supported-data-sources/import-geo

    and then create your Choropleth Map following our docs: https://docs-new.omnisci.com/immerse/immerse-chart-types/choropleth#server-rendered-choropleth-example.
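
    As a rough sketch of the omnisql side (the table and file names here are made up, and the exact COPY options can vary by version, so double-check the import-geo page above):

        -- Import a shapefile as a geo table; OmniSci creates the table
        -- and its geometry column from the file's metadata.
        COPY us_block_groups FROM '/data/us_block_groups.shp' WITH (geo='true');

        -- Sanity check that the geometries were loaded.
        SELECT COUNT(*) FROM us_block_groups;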

    I hope this information fits your needs. If not, feel free to ask for more.

    Best Regards, Candido.

  • Brian R

    Thanks, that perfectly answers my question; appreciate it. I guess I didn't fully understand the limitations of client-side vs. server-side choropleth rendering.

    I guess I can experiment, but any thoughts on best practices for file size for client-side rendering?

    We are trying to work with census block group data at a national scale. I guess we will have to chop up the geometry into smaller regional sections until we have funding for the server-side version (actually not that big of a deal now that I think about it).

    Thanks again.

  • Candido Dessanti

    Hi Brian,

    I'm glad that answered your question; I guess we should add something to our docs showing how to add geometries usable with client-side choropleth rendering, because it's a question that's asked quite frequently.

    Anyway, I can't point to an official best practice, but I suggest using fairly simple geometries; you will face two kinds of problems with client-side rendering (CSR) that limit performance.

    The first is the join between the geometries and the result of the query you are using, so you have to limit the number of geometries to join on; 200k sounds a little high, and I have never tried such big numbers. The second is the rendering itself; the more polygons you use, the more the geometries will have to be simplified, so you have to find the right balance between those two numbers.
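
    To make that first point concrete, with client-side rendering the server only returns key/measure pairs, and the browser joins them to the GeoJSON polygons itself (the names below are hypothetical):

        -- The database returns one measure per geometry key...
        SELECT geoid, SUM(population) AS measure
        FROM acs_data
        GROUP BY geoid;
        -- ...and the browser matches each geoid against a feature
        -- property in the GeoJSON and colors that polygon, so every
        -- extra key is another polygon to match and draw client-side.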

    When I tried with just a few thousand polygons, I had to simplify the geometries to stay under 300ms of total dashboard response time (with just 100M records and seven objects on the dashboard, running on a notebook). However, using the polys at their highest complexity, the dashboard was a little choppy, because the choropleth alone took over 1500ms to run and the client was using a lot of CPU too.

    With back-end rendering, you can use a higher number of complex polygons, get a faster response time, and higher rendering quality.

    You can use more than one layer, mixing a choropleth with a point map, or use more than one choropleth at once, displaying the states, the counties, and the census blocks depending on the zoom level of the map.

    As an example, this render of the census sections around Rome takes just 125ms on a very low-end Nvidia GPU (a GTX 1050 Ti), and we are using a pretty large number of polygons with an average complexity of 81 points per polygon:

    [Image: census sections around Rome]

    Rendering all 402,000 polygons of the Italian census sections (I guess they are equivalent to the census blocks in the USA) takes just 590ms:

    [Image: all Italian census sections rendered]

    The GPU I'm using has just 768 cores, while a typical gaming GPU has several thousand, so everything, from query execution to rendering, would improve roughly linearly.

    Another plus of back-end rendering is that there are more map chart types besides the choropleth (Line Map, Point Map, Geo Heat Map, and an advanced Scatter Plot), and the GPU used for back-end rendering will likely speed up everything else as well, because all the queries will run on GPU, so the whole dashboard will refresh faster and become more interactive.

    I hope you will be able to share your thoughts with us; in the meantime, have fun with the maps.

    Regards, Candido

  • Brian Russo

    Thanks for the detailed response. It sounds like we will have to migrate to the GPU version faster than we had planned.

  • Brian R

    Quick follow-up question: since GPUs are in high demand right now, we are a little limited, as this effort isn't well-funded. For cards like the K80, I know that's technically two cards stuck together from the perspective of CUDA; I'm assuming we could only use half of it with the free version?

    Also, any insights on which performance metrics are most relevant for OmniSci would be appreciated. I'm assuming memory performance is crucial, but I have no idea whether full-precision vs. half-precision floating point, etc., is more important.

    Thanks again!

    – bri
  • Candido Dessanti

    I guess a K80 is seen as two GPUs, so just one would be used; I'm not 100% sure about that, because I have only used them in the cloud, but AFAIK each chip has its own UUID, so a free edition is likely going to use just one of the two GPUs.

    Also, the Tesla Kepler series (K80, K40, and so on) could be de-supported by Nvidia, as they recently did with the GTX Kepler cards. So if you have one in hand, use it, but I don't think it's worth investing too much money in such cards.

    Well, the most important metrics are the number of shaders for single-precision computing and the memory bandwidth. If you are planning to use a lot of geospatial functions that rely on double-precision operations, you should consider hardware that's loaded with DP units; everything depends on the use and the kind of application you are going to build.

    I would experiment using spot instances on AWS to test which kind of hardware fits you best (you can compare V100 instances with T4 ones) and then go for something on-premises. For testing, you can also use gaming-class hardware.

    As an example of what you can do using multiple layers:

    The first image uses the census blocks, and it's used from zoom level 0 to 11 (it's covering quite a big area, comprising two regions of Italy):

    [Image: census-block layer covering two regions of Italy]

    When you zoom in enough, the graphical representation uses the shapes of the buildings in the area (the city is Venice) on the same data (the total number of building shapes is 11 million); that's, in my opinion, more appealing than the borders of census blocks:

    [Image: building shapes in Venice]

    This is running on a single RTX 2080 Ti card, and the rendering time is around 100ms for the most expensive layer. I think the performance should be similar using a T4 GPU (except for the underlying query that computes the measure coloring the choropleth map).

    So my suggestion, if you are going with the Free Edition, is to get a card with as much memory and as much single-precision throughput as possible, because if you are going to use a lot of polygons (in my case, the buildings), you are going to need memory (and polygons are not counted against the 32GB limit).
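
    If you want to keep an eye on how much of that memory the polygons actually consume, omnisql has a built-in summary command (assuming a reasonably recent version; the output format varies):

        -- Inside an omnisql session: print a per-device summary of
        -- CPU and GPU memory in use vs. allocated.
        \memory_summary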

    Candido.

  • Brian Russo

    Was unaware the older cards would be unsupported soon, so that's helpful. Thanks again for your great info.

  • Candido Dessanti

    Nothing is certain about the phase-out of older cards, but it's a recurrent rumor.

    There are also some potential performance problems using a Kepler card, because that architecture lacks some atomic operations.

  • Brian Russo

    That makes sense, since they're phasing out cards with older CUDA versions. I appreciate the info, though, as I hadn't really thought of that.

    I got OmniSci working on GPU and rendering seems fine, but I can't get the Choropleth map working the way I would expect it to. Specifically, two things are confusing me:

    • Despite the palette showing as a beautiful rainbow palette, the actual choropleth map does not interpolate; it seems to be rendering quantile (or similar discrete) classes rather than the smooth, continuous choropleth I expected.

    • For the color setting on my measure, it forces aggregation (so I end up choosing SUM), but my data is 1:1 between the joined table and the associated geo data, so I have no actual reason to use aggregation.

    Link here - sshot

    Dataset is the US Census American Community Survey (ACS) data, and I am rendering it at the census block group level.

    I think it may have something to do with my data types? I am using a dictionary-encoded text column for the block group and decimals for my data.
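
    i.e., something like this (simplified, with made-up column names):

        -- Dictionary-encoded text key plus decimal measures.
        CREATE TABLE acs_block_groups (
          block_group      TEXT ENCODING DICT(32),
          median_income    DECIMAL(14,2),
          total_population DECIMAL(12,0)
        );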

    Thanks

  • Brian Russo

    Well, I'm not sure of the cause, but deleting immerse_db_config.json seems to have fixed the smooth color palette issue. Leaving my original post for continuity's sake in case anyone has a similar problem.

    Still haven't figured out a way to disable aggregation, but there's no real problem with using SUM.

  • Candido Dessanti

    Hi @Brian_Russo,

    Thanks for sharing your workaround with the rest of the community; this is very appreciated.

    I have never seen a problem like that, and neither have our customers. Can I ask whether immerse_db_config.json is a server JSON file that you created yourself, or a file that was already there after installation?

    About the aggregation: almost all the charts use an aggregate by default (probably only the Point Map doesn't). If you don't want to use SUM, AVG, or the like, you can use SAMPLE, so no real aggregation is performed and a sample value from the table is returned.
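
    In other words, reusing made-up names, the difference between the generated queries is just the aggregate:

        -- With SUM, values are added up per key:
        SELECT block_group, SUM(median_income) AS color_measure
        FROM acs_block_groups
        GROUP BY block_group;

        -- With SAMPLE, one arbitrary value is returned per key, which is
        -- effectively no aggregation when the data is 1:1 per block group:
        SELECT block_group, SAMPLE(median_income) AS color_measure
        FROM acs_block_groups
        GROUP BY block_group;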

    Regards, Candido

  • Candido Dessanti

    Well, it looks like Nvidia dropped support for K80s and similar Kepler cards:

     [  246.411001] NVRM: The NVIDIA Tesla K80 GPU installed in this system is
     NVRM:  supported through the NVIDIA 470.xx Legacy drivers. Please
     NVRM:  visit http://www.nvidia.com/object/unix.html for more
     NVRM:  information.  The 495.29.05 NVIDIA driver will ignore
     NVRM:  this GPU.  Continuing probe...
    
