Although Explo can sit on top of almost any data model, here are a few tips to ensure better performance and ease of use when connecting to Explo.
Segment your data by user groups
It is important that your data is segmented by user groups. In most cases, a user group is one of your customers, but it can be any entity you want to split your data by.
Ensuring that the tables in your database contain a field such as customer_id or organization_name makes it much easier and faster to query and segment your data.
However, if some of your tables lack a user group field, you can always join them to tables that have one in the Explo interface.
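As a sketch of what this looks like at the query level, here is an in-memory SQLite example with hypothetical orders, customers, and order_items tables (the table and column names are illustrative, not part of any Explo API):

```python
import sqlite3

# In-memory database with hypothetical tables for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, customer_name TEXT);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, total REAL);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT);  -- no user group field
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 99.0), (11, 2, 25.0);
    INSERT INTO order_items VALUES (10, 'SKU-A'), (11, 'SKU-B');
""")

# With a customer_id column on the table, segmenting is a simple filter.
acme_orders = conn.execute(
    "SELECT order_id, total FROM orders WHERE customer_id = ?", (1,)
).fetchall()

# A table without a user group field can be joined to one that has it.
acme_items = conn.execute(
    """SELECT oi.sku
       FROM order_items oi
       JOIN orders o ON o.order_id = oi.order_id
       WHERE o.customer_id = ?""",
    (1,),
).fetchall()
```

The first query is the cheap, preferred path; the second shows the join fallback for tables that do not carry the user group field themselves.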
Alternatively, Explo also supports data models where each customer's data lives in a separate database. In that case, a user group field in each table is not necessary; instead, refer to our API docs to tell us where each customer's data is stored.
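Conceptually, the database-per-customer model replaces the user group column with a lookup from customer to database. A minimal sketch of that mapping (the customer names and connection strings here are entirely hypothetical, and in practice you would register these locations via the Explo API rather than in application code):

```python
# Hypothetical mapping from customer to the database holding that
# customer's data; segmentation happens by choosing the right database
# rather than by filtering on a user group column.
CUSTOMER_DATABASES = {
    "acme": "postgresql://db-host-1/acme_analytics",
    "globex": "postgresql://db-host-2/globex_analytics",
}

def database_for(customer: str) -> str:
    """Look up which database holds the given customer's data."""
    return CUSTOMER_DATABASES[customer]
```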
Transform or process your data to create golden tables
Golden tables are tables that contain the relevant data for a specific business process that can be easily queried to pull insights or metrics without needing to join with other tables. These tables should all have a user group field so Explo can easily segment the data in each table.
These tables should contain raw data rather than aggregated views, and every row in a given table should have the same granularity (for example, one row per order item).
As an example, if you are building a marketplace platform, you might have golden tables with fields such as:
customer_name, customer_id, email, address, payment_info
SKU, product_name, description, seller_id
In the example above, seller_id would be used to segment the data for each user group.
Because Explo can run aggregations and calculations for you, it is best to leave these tables unaggregated so you keep the flexibility to aggregate in Explo.
If you do not have tables that are ready to query, you can leverage a tool such as dbt or Airflow as a transform layer to create them. This will dramatically improve the performance of your dashboards in Explo.
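A transform job along these lines materializes a golden table by performing the joins ahead of time, so each row already carries the user group field. This sketch uses SQLite with hypothetical products and order_items tables; a tool like dbt would express the same SELECT as a model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (sku TEXT, product_name TEXT, seller_id INTEGER);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT, quantity INTEGER);
    INSERT INTO products VALUES ('SKU-A', 'Widget', 7), ('SKU-B', 'Gadget', 8);
    INSERT INTO order_items VALUES (10, 'SKU-A', 2), (11, 'SKU-B', 1);
""")

# Materialize a golden table: one row per order item (uniform granularity),
# already joined, carrying seller_id so the data can be segmented directly
# without any further joins at query time.
conn.execute("""
    CREATE TABLE golden_order_items AS
    SELECT oi.order_id, oi.sku, oi.quantity, p.product_name, p.seller_id
    FROM order_items oi
    JOIN products p ON p.sku = oi.sku
""")

rows = conn.execute(
    "SELECT order_id, sku, seller_id FROM golden_order_items ORDER BY order_id"
).fetchall()
```

Note that the golden table stays at raw, per-item granularity; no aggregation has been applied.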
Handle schema complexity upstream
Perform complex data transformations and calculations in the database or ETL layer rather than running them in Explo. This will also dramatically improve load times.
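As an illustration of pushing transformation upstream, derived fields such as margins or normalized status values can be computed once in the transform layer instead of at dashboard load time (again sketched with SQLite and hypothetical table names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, revenue REAL, cost REAL, status TEXT);
    INSERT INTO raw_orders VALUES (10, 99.0, 40.0, ' Shipped '), (11, 25.0, 30.0, 'pending');
""")

# Derived fields computed upstream: margin as a precomputed column and a
# cleaned, normalized status, so dashboards query ready-made values.
conn.execute("""
    CREATE TABLE orders_enriched AS
    SELECT order_id, revenue, cost,
           revenue - cost AS margin,
           LOWER(TRIM(status)) AS status
    FROM raw_orders
""")

rows = conn.execute(
    "SELECT order_id, margin, status FROM orders_enriched ORDER BY order_id"
).fetchall()
```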