About TweetMap data source

Comments

5 comments

  • Candido Dessanti

    Hi @calvindog,

    I asked internally, and as I imagined, we didn't use anything other than the official Twitter API, filtering for the tweets that have some kind of geo-coordinates associated with them, whatever the precision.

    So we are displaying only about 600 million tweets covering nearly 7 months, while we have been collecting them for years, for obvious reasons.

    We are thrilled to see your dashboard using Twitter data; as you probably know, this was one of the first demos released to show the capabilities of the database coupled with back-end rendering, so it has a special place in our hearts.

    If we can do anything to accelerate your project, you just have to ask.

    Regards, Candido and the rest of the OmniSci/MapD team

  • Calvin Chak

    Hi Candido,

    Thanks for your prompt reply. I managed to get some Twitter datasets from https://archive.org/details/twitterstream.

    I started working on designing the data model and parsing the datasets. I am inspired by an interesting project called "We Feel Fine" from some years ago.

    For the backend, I think I will create a VM in GCP with a GPU and install OmniSci Free. Do you know the most convenient way to install it? How can I prototype at minimum cost (minimum CPU RAM/GPU RAM) for, e.g., 10 million records?

    For the frontend, I plan to code the data visualisation part using Three.js, but I guess the dashboards and charts in OmniSci Free are already good for some initial data exploration.

    Best Regards, Calvin Chak

  • Candido Dessanti

    Hi Calvin,

    I took a look at "We Feel Fine" (good for them ;) ), and I guess your project will use something more complex to analyze the text (just tweets?). If you are on GPU, you have plenty of power to process the messages and infer the writer's sentiment, rather than just searching for a literal string.
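
    Just to make "infer the writer's sentiment" a bit more concrete, here is a minimal sketch of the kind of GPU text scoring I have in mind; it assumes the Hugging Face transformers library and its default pretrained sentiment model, which is purely illustrative and not something the TweetMap demo uses.

        # Illustrative only: score tweet sentiment on a GPU with a pretrained model.
        # Assumes the `transformers` (and `torch`) packages; the model used is just
        # the library's default sentiment checkpoint, not anything from TweetMap.
        from transformers import pipeline

        # device=0 runs on the first CUDA GPU; use device=-1 to fall back to CPU.
        sentiment = pipeline("sentiment-analysis", device=0)

        tweets = [
            "I love how fast this map renders!",
            "Stuck in traffic again, worst commute ever.",
        ]

        for tweet, result in zip(tweets, sentiment(tweets)):
            # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
            print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")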

    That said, to do some prototyping with just 10 million records, you can get an idea by looking at this page in our docs; if you want to be more precise, you have to know how many columns you are using and how much memory each column needs.

    So, for example, if you are going to use 4 dictionary-encoded strings of 32 bits, 2 dictionary-encoded strings of 16 bits, 1 compressed POINT (64 bits), and 2 SMALLINTs, the math is fairly simple: add up all the sizes, then multiply by the number of records you want to process.

    In this example, the number of bytes per record is 4×4 (dictionary-encoded 32-bit) + 2×2 (dictionary-encoded 16-bit) + 1×8 (POINT) + 2×2 (SMALLINT) = 32 bytes; multiplied by 10 million records that is just 320 megabytes, for 100 million 3.2 gigabytes, and so on.
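
    If it helps, here is a tiny sketch of that arithmetic in Python; the column list is just the hypothetical schema from the example above, not the real TweetMap schema.

        # Back-of-the-envelope memory estimate for the example schema above.
        # Sizes are bytes per record; the column group names are hypothetical.
        columns = {
            "4 dictionary-encoded strings, 32-bit": 4 * 4,
            "2 dictionary-encoded strings, 16-bit": 2 * 2,
            "1 compressed POINT (64 bits)":         1 * 8,
            "2 SMALLINT columns":                   2 * 2,
        }

        bytes_per_record = sum(columns.values())  # 32 bytes
        for rows in (10_000_000, 100_000_000):
            total_gb = bytes_per_record * rows / 1e9
            print(f"{rows:>11,} rows -> {total_gb:.2f} GB")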

    A community user shared a sheet to calculate the memory needed to store the data https://community.heavy.ai/t/resources-required-to-run-queries/2536/7?u=candido.dessanti

    Add to that some buffers needed for joins and/or group-bys and the space needed for rendering (assuming you are using the back-end renderer for the maps); any NVIDIA card released in recent years is going to fit and is able to process such a low number of records.

    To develop the prototype, you can use something like our recommended AWS configuration: a machine with 4/8 CPUs and 64 GB of RAM, plus an NVIDIA Tesla T4, which has 16 GB of RAM and 2560 CUDA cores (it also has some Tensor cores if you need to run a previously trained model).

    The GPU itself costs just 0.36 USD/hour, and a c2-8 instance would cost under 0.4 USD per hour, packing more than enough memory and compute power for less than 1 USD per hour in total.

    It would be nice to see your Three.js implementation with our database, but remember that moving a lot of data from server to client is going to limit the performance of your application (and that's the reason why we also developed a back-end rendering engine).
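
    Just to illustrate that point about keeping the heavy lifting on the server: the sketch below, which assumes the pymapd Python connector and a hypothetical tweets table, pulls back only one aggregated row per grid cell for Three.js to draw instead of streaming millions of raw points to the browser.

        # Sketch: push the aggregation into the database and ship only binned
        # results to the client. Credentials and the `tweets` table are hypothetical.
        from pymapd import connect

        con = connect(user="admin", password="HyperInteractive",
                      host="localhost", dbname="omnisci")

        # Bin tweets into a coarse lon/lat grid server-side; the client receives
        # one row per non-empty cell instead of every individual point.
        query = """
        SELECT FLOOR(lon) AS lon_bin,
               FLOOR(lat) AS lat_bin,
               COUNT(*)   AS n
        FROM tweets
        GROUP BY lon_bin, lat_bin
        """

        bins = list(con.execute(query))  # each row is (lon_bin, lat_bin, n)
        print(f"{len(bins)} grid cells to hand to the Three.js layer")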

    Best Regards, Candido

  • Artem Bernatskyy

    @candido.dessanti, do you know how we can copy the tweetmap dashboard into HEAVY.AI Enterprise (free)? Thx!

    P.S. There is an option to import a dashboard there, but on the tweetmap dashboard there is no option to export it (https://www.heavy.ai/demos/tweetmap).

  • Candido Dessanti

    Hi @Artem_Bernatskyy,

    The tweetmap app is a custom-developed map that uses the Heavy charting, connector, and crossfilter APIs.

    I guess it was written before Immerse existed, by @todd, who is one of the founders of MapD and the current CTO of the company.

    You can read about the tweetmap app in this blog post, and you can try to install it yourself by following this (very old) tutorial.

    The prefix of the API names has changed from Mapd to Heavyai, but it should still be valid.

    Let me know if you need some help installing it or if we can help you in other ways.

    Best regards, Candido

