Crunchy Bridge for Analytics: Your Data Lake in PostgreSQL

https://www.crunchydata.com/blog/crunchy-bridge-for-analytics-your-data-lake-in-postgresql

A lot of the world’s data lives in data lakes, huge collections of data files in object stores like Amazon S3. There are many tools for querying data lakes, but none are as versatile and have as wide an ecosystem as PostgreSQL. So, what if you could use PostgreSQL to easily query your data lake with state-of-the-art analytics performance?

Today we’re announcing Crunchy Bridge for Analytics, a new offering in Crunchy Bridge that lets you query and interact with your data lake using PostgreSQL commands via extensions, with a vectorized, parallel query engine.

With Bridge for Analytics you can easily set up tables that point directly to Parquet, CSV, or JSON files in object storage, without having to specify which columns are in the file(s), and run very fast analytical queries.

Moreover, Bridge for Analytics comes with powerful data import and export capabilities, enabling you to easily create regular or temporary tables from files in object storage, load additional data, or export tables and query results back into object storage. And of course, you have all the existing benefits of Crunchy Bridge, an enterprise-grade managed PostgreSQL service, including saved queries, built-in connection pooling, VPC, container apps, and much more.

Let’s dive in!

Querying files in your data lake

We wanted to make querying your data lake simple, fast, and well-integrated into PostgreSQL. We found foreign tables are ultimately the most suitable infrastructure for that. Bridge for Analytics gives you a simple interface for creating foreign tables from a wide variety of data files.

To get started with Bridge for Analytics, first set up an analytics instance in Bridge, add your credentials, and connect using psql or your favorite PostgreSQL client. When you’re connected to your instance, you can create a foreign table that points to files in your data lake using server crunchy_lake_analytics:

-- create a table from a Parquet file, column definitions can be empty
create foreign table hits ()
server crunchy_lake_analytics
options (path 's3://mybucket/hits.parquet');

Notice that specifying the column names and types is unnecessary! For Parquet files, we’ll infer the schema directly from the file metadata if you leave the column definitions empty. For CSV and JSON, we’ll make an informed guess based on the file structure, which might take a bit longer.
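
For example, the same pattern works for a CSV file; the bucket and file name below are placeholders, not from the examples above:

-- minimal sketch with a hypothetical bucket: for CSV the schema is
-- inferred from the file contents rather than from Parquet metadata
create foreign table events_csv ()
server crunchy_lake_analytics
options (path 's3://mybucket/events.csv');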

Once your foreign table is created, you can immediately start querying your data and take advantage of lightning-fast analytics:

\d hits
┌───────────────────────┬──────────────────────────┬───────────┬──────────┬─────────┐
│        Column         │           Type           │ Collation │ Nullable │ Default │
├───────────────────────┼──────────────────────────┼───────────┼──────────┼─────────┤
│ watchid               │ bigint                   │           │          │         │
│ javaenable            │ smallint                 │           │          │         │
│ title                 │ text                     │           │          │         │
│ goodevent             │ smallint                 │           │          │         │
...
-- count ~100M rows
select count(*) from hits;
┌──────────┐
│  count   │
├──────────┤
│ 99997497 │
└──────────┘
(1 row)
Time: 55.530 ms

You can also use a wildcard in the path (e.g. s3://mybucket/hits/*.parquet) to query a list of files.
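
For instance, a single table covering a whole directory of Parquet files might look like this (hypothetical bucket and prefix):

-- sketch: one foreign table over all files matched by the wildcard
create foreign table hits_all ()
server crunchy_lake_analytics
options (path 's3://mybucket/hits/*.parquet');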

Bridge for Analytics takes advantage of range requests to speed up queries on Parquet files. In the background, files will also be automatically cached on NVMe drives for improved performance. Once the download completes, the queries will get even faster.

Example:

-- Run a query on a ~100M row Parquet file in S3
select AdvEngineID, count(*) from hits where AdvEngineID <> 0
group by 1 order by 2 desc limit 5;
┌─────────────┬────────┐
│ advengineid │ count  │
├─────────────┼────────┤
│           2 │ 404602 │
│          27 │ 113167 │
│          13 │  45631 │
│          45 │  38960 │
│          44 │   9730 │
└─────────────┴────────┘
(5 rows)
Time: 317.460 ms
-- Add a file to the cache, or wait for background caching to be done.
select * from crunchy_file_cache.add('s3://mybucket/hits.parquet');
-- Run a query on a ~100M row Parquet file in cache
select AdvEngineID, count(*) from hits where AdvEngineID <> 0
group by 1 order by 2 desc limit 5;
┌─────────────┬────────┐
│ advengineid │ count  │
├─────────────┼────────┤
│           2 │ 404602 │
│          27 │ 113167 │
│          13 │  45631 │
│          45 │  38960 │
│          44 │   9730 │
└─────────────┴────────┘
(5 rows)
Time: 90.109 ms

Queries use a combination of the PostgreSQL executor and an analytical query engine. That way, all SQL queries are supported—including joins with regular PostgreSQL tables—but queries that cannot yet push down into the analytics engine may be slower. You’re most likely to see performance benefits when using Parquet due to its columnar format and compression.
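
As a quick illustration, here is a sketch of joining the hits foreign table with an ordinary heap table; the advertisers lookup table is hypothetical:

-- hypothetical dimension table stored as a regular heap table
create table advertisers (advengineid bigint primary key, name text);

-- the foreign table scan can use the analytics engine, while the join
-- itself runs in the regular PostgreSQL executor
select a.name, count(*)
from hits h
join advertisers a on a.advengineid = h.advengineid
where h.advengineid <> 0
group by a.name
order by 2 desc
limit 5;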

Data lake import & export

Apart from querying your data lake directly, Bridge for Analytics also lets you easily import data from your data lake into regular PostgreSQL tables, or export tables and query results back into your data lake.

For easy import/export, we modified the copy and create table commands (via extensions) to accept URLs and to support new formats. You can use copy .. to/from 's3://…' and specify the format (csv, parquet, json) and compression (none, gzip, zstd, snappy). By default, the format and compression are inferred from the file extension. You can also load files directly in your create table statements using the load_from option, or definition_from if you only want the column definitions.

For instance, here are some data import examples:

-- Create a temporary table from compressed JSON
create temp table log_input ()
with (load_from = 's3://mybucket/logs/20240101.json', compression = 'zstd');
-- Alternatively, only infer columns and load data separately using the copy command
create temp table log_input ()
with (definition_from = 's3://mybucket/logs/20240101.json', compression = 'zstd');
copy log_input from 's3://mybucket/logs/20240101.json' WITH (compression 'zstd');
copy log_input from 's3://mybucket/logs/20240102.json' WITH (compression 'zstd');
-- Clean the input and insert into a heap table
insert into log_errors
select event_time, code, message from log_input where level = 'ERROR';

You can also use the COPY .. TO command to export tables and query results to object storage:

-- Export query result to a Parquet file, compressed using snappy by default
copy (
  select date_trunc('minute', event_time), code, count(*)
  from log_errors where event_time between '2024-01-01' and '2024-01-02'
  group by 1, 2
) to 's3://mybucket/summaries/log_errors/20240101.parquet';

Finally, we made sure you can easily import/export the new COPY formats from the client, for instance using the \copy meta-command in psql.

$ psql
-- Import a compressed newline-delimited JSON file from local disk
\copy data from '/tmp/data.json.gz' with (format 'json', compression 'gzip')
-- Export a Parquet file to local disk
\copy data to '/tmp/data.parquet' with (format 'parquet')
-- Note: always specify format & compression when using \copy in psql, because the
-- local file extension is not visible to the server.

These enhancements to copy and create table make it easy to set up back-and-forth integrations between PostgreSQL and the other applications that access your data lake.

Writing to analytics tables

While first-class support for transactions on analytics tables is still under development, you may have realized that it is possible to use the copy .. to command in Bridge for Analytics to write to a path that is included in an analytics table.

For instance:

-- create a table pointing to a URL with a wildcard
create foreign table events_historical (
    event_time timestamptz,
    user_id bigint,
    event_type text,
    message text
)
server crunchy_lake_analytics options (path 's3://mybucket/events_historical/*.parquet');
-- write data from a heap table to a file that falls under the wildcard
copy (select * from events where event_time between '2024-01-01' and '2024-01-02')
to 's3://mybucket/events_historical/202401.parquet';
-- data is now visible in the analytics table
select count(*) from events_historical;
┌───────────┐
│   count   │
├───────────┤
│ 148024181 │
└───────────┘
(1 row)

Using this approach, you can feed data from regular PostgreSQL tables into your analytics table. For example, you could create a partitioned table of events for fast inserts and updates on your analytics server, and use pg_cron to periodically rotate partitions that are no longer receiving new inserts into a separate crunchy_lake_analytics table, for fast analytics and cheap long-term storage, as sketched below.
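
A minimal sketch of that rotation, assuming a hypothetical monthly partition events_202401 and the bucket layout from the example above:

-- hedged sketch: schedule an export of one closed partition with pg_cron;
-- in practice you would likely wrap this in a function that picks the
-- partition and target path dynamically
select cron.schedule(
  'export-events-202401',
  '0 3 1 2 *',   -- 03:00 on February 1st, once January stops receiving inserts
  $$copy (select * from events_202401)
    to 's3://mybucket/events_historical/202401.parquet'$$
);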

It is worth noting a few limitations around writes that we plan to address over time:

  • Caching is currently done after a read, not on write, so the first few analytical queries after writing data may be slower.
  • Once a file is cached, it is not automatically refreshed if it changes, but you can use cache control functions to force a refresh.
  • Exporting data is not transactional; rolling back will not delete the file. You may find it helpful to do some bookkeeping on which partitions were successfully exported.

Running ClickBench on Analytics and Regular tables

To give you a sense of the performance of Bridge for Analytics, we used ClickBench data and queries to compare running analytical queries on a crunchy_lake_analytics table and on a regular heap table, on the same machine.

ClickBench consists of 43 queries on a wide table with 105 columns and ~100M rows. Different database systems import the data in different ways and adjust the query set accordingly. We used the original hits.parquet file and the PostgreSQL queries provided by ClickBench.

The types in the Parquet file do not exactly match what the PostgreSQL queries expect, so we created a view that converts the columns to the expected schema.

create foreign table hits_parquet ()
server crunchy_lake_analytics
options (path 's3://mybucket/hits.parquet');
-- convert EventDate and EventTime to proper types using a view
-- See https://gist.github.com/marcoslot/9e3c21ddf95c93e20a6d3ad1d9193842
create view hits as select ... from hits_parquet;

We then ran the query set with and without caching on NVMe, and on heap tables. We summed the individual query times to obtain the overall runtime.

[Chart: ClickBench total runtime on a Crunchy Bridge analytics instance (32 vCPUs)]

As you can see, Bridge for Analytics outperforms vanilla PostgreSQL by over 20x in this benchmark. The reason for this huge difference is primarily that PostgreSQL’s regular storage format (heap tables) and executor are optimized for operational workloads, not analytics. Fortunately, PostgreSQL’s extensibility means we can make it good at anything!

Using Bridge for Analytics with saved queries

Bridge for Analytics is still just PostgreSQL and Crunchy Bridge. That means you can use any of the Crunchy Bridge features, including saved queries—queries you can write (with AI), save, organize, and run via the Bridge dashboard. Query results are stored and can be shared with your team, or publicly as a webpage or as CSV/JSON.

[Screenshot: a Crunchy Bridge for Analytics saved query in the dashboard]

We use saved queries all the time at Crunchy Data for internal business metrics. It’s a very low-friction way to share insights from analytical query results.

Overview of your new PostgreSQL superpowers

We packed a lot of new superpowers into Bridge for Analytics. In case you’re losing track, let’s review them one more time with example syntax:

  • Create an analytics table from Parquet/CSV/JSON files in object storage: CREATE FOREIGN TABLE data () SERVER crunchy_lake_analytics OPTIONS (path 's3://mybucket/data/*.parquet');
  • Create a regular table from a file and immediately load the data: CREATE TABLE data () WITH (load_from = 's3://mybucket/data.csv.gz');
  • Create a regular table whose columns are based on a file: CREATE TABLE data () WITH (definition_from = 's3://mybucket/data.json');
  • Load data into a regular table: COPY data FROM 's3://mybucket/data.parquet';
  • Export a table to a file: COPY data TO 's3://mybucket/data.csv.zst' WITH (header);
  • Save a query result in a file: COPY (SELECT * FROM data JOIN dim USING (id)) TO 's3://mybucket/joined.json.gz';
  • Import/export local files in Parquet and JSON format: \copy data TO 'data.parquet' WITH (format 'parquet')

You can use each command with Parquet (uncompressed/gzip/zstd/snappy), CSV (uncompressed/gzip/zstd), and newline-delimited JSON (uncompressed/gzip/zstd), and of course every feature in PostgreSQL 16 and every extension on Crunchy Bridge.

Our goal is to give you a Swiss army knife for interacting with your data lake. By further combining Bridge for Analytics features with existing PostgreSQL features and extensions, like pg_cron and postgres_fdw, you can build sophisticated analytics pipelines entirely in PostgreSQL.
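
For example, here is a hedged sketch of pulling a table from a separate operational PostgreSQL server with postgres_fdw and exporting a snapshot to the lake; the server name, host, credentials, and table are placeholders:

-- standard postgres_fdw setup; all connection details are placeholders
create extension if not exists postgres_fdw;
create server operational_db foreign data wrapper postgres_fdw
    options (host 'op-db.internal', dbname 'app', port '5432');
create user mapping for current_user server operational_db
    options (user 'app_reader', password 'secret');
import foreign schema public limit to (orders) from server operational_db into public;

-- export a snapshot of the remote table to the data lake as Parquet
copy (select * from orders where created_at >= date '2024-01-01')
to 's3://mybucket/orders/20240101.parquet';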

Get started with Crunchy Bridge for Analytics

It only takes a few minutes to register for Crunchy Bridge and create your first analytics instance. You can configure your AWS credentials via the dashboard and start creating analytics tables or importing data from your data lake. Check out the Crunchy Bridge for Analytics docs for more details.

We believe Crunchy Bridge for Analytics can help bridge the gap between your operational and analytical workloads, and thereby significantly simplify your stack and reduce costs. We look forward to going on this journey with you. Feel free to contact us with any questions, issues, or suggestions.

Marco Slot