Good Morning
-
forget PostgreSQL’s existence until data corruption.
Oh, so about 2 hours then LMAO
...ok, I'm morbidly curious. How did you manage to do that?
-
pg can actually query into json fields!
And you can add indexes on those JSON fields too!
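For anyone curious, a minimal sketch of what that looks like in Postgres (table and field names are made up for illustration):

```sql
-- Hypothetical table with a JSONB column.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- ->> extracts a field as text:
SELECT id
FROM events
WHERE payload ->> 'status' = 'failed';

-- @> tests containment, including nested objects:
SELECT id
FROM events
WHERE payload @> '{"user": {"country": "DE"}}';
```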
-
first thing i'd ask it is how to pronounce SQL
Sequel with external collaborators.
Squeal with the homies.
-
Just if you need to be able to take it with you.
The whole point of a database is that you leave it where it is though
-
The whole point of a database is that you leave it where it is though
I think the OP is trying to talk about SQLite, so yeah, he could really be talking about carrying it on his phone.
But it's just such a weird word to use there that I can't really be sure.
-
The whole point of a database is that you leave it where it is though
Ohhhh right, that's the base part right?
-
I think the OP is trying to talk about SQLite, so yeah, he could really be talking about carrying it on his phone.
But it's just such a weird word to use there that I can't really be sure.
Or portable like on a USB stick that you can put in any computer instead of installed on a single system.
-

As a (data) scientist I am not super familiar with most databases, but duckdb is great for what I need it for.
-
I used to agree, but recently tried out Clickhouse for high ingestion rate time series data in the financial sector and I’m super impressed by it. Postgres was struggling and we migrated.
This isn’t to say that it’s better overall by any means, but simply that I did actually find a better tool at a certain limit.
If you can, share your experience!
I also do finance, so if there is anything more to explore, I'm here to listen and learn.
-
As a complete newb to Postgres, I LOVE arrays.
Postgres feels like it has all of the benefits of a relational database and a document store.
Yeah, that was the goal.
First make it a feature-complete document-oriented database, then make it performant.
And you can feel the benefits at every step of the way. Things just work, features actually complement each other... and there's always a way to make any crazy idea stick.
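A quick sketch of the array feature being praised here (table and column names are made up): an array column can often replace a whole join table.

```sql
-- Hypothetical table: a text[] column instead of a separate tags table.
CREATE TABLE articles (
    id   bigserial PRIMARY KEY,
    tags text[] NOT NULL DEFAULT '{}'
);

INSERT INTO articles (tags) VALUES (ARRAY['postgres', 'arrays']);

-- = ANY(...) matches a single element; && tests for overlap.
SELECT id FROM articles WHERE 'postgres' = ANY (tags);
SELECT id FROM articles WHERE tags && ARRAY['postgres', 'sqlite'];
```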
-
Oracle:
Only the best in B2B marketing for our shit software.
EDIT:
hah ok, round two, more directly playing on the actual company name:
Oracle:
We tell you what you think you want to hear.
I have to admit though, I've never admined the Oracle DB, but I did talk a lot with people who did.
I remember, over 10 years ago, discussing transactional DDL, which I'd heard Oracle supports too, only to get a 5-minute lecture about how it's nowhere near that simple.
-
After having suffered with T-SQL at MSFT for a number of years... yep, Postgres is almost always the best choice for almost any enterprise setup, despite what most other corpos seem to think.
Usually their reasons for not using it boil down to:
We would rather pay exorbitant licensing fees of some kind, forever, than rework a few APIs.
Those few APIs already having a fully compatible rewrite, done by me, working in test, prior to that meeting.
Gotta love corpo logic.
Yes, had those issues as well, though lately not a big corp, but mid-sized company.
One manager just wanted MySQL. We had trouble getting required performance from MySQL, when Postgres had good numbers. I had the app fully ready, just to be told no, you make it work in MySQL. So we dropped some 'useless stuff' like deferring flushing to disk and such.
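Presumably "deferring flushing to disk" refers to InnoDB's commit-durability setting; this is a guess at what was meant, but it is the classic knob people relax to make MySQL benchmark better:

```sql
-- MySQL: flush the InnoDB log to disk once per second instead of at every
-- commit. A server crash can then lose up to ~1s of committed transactions.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```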
-
I have a colleague like that too, and then the other camp that loves MySQL.
Why do you like postgres
I usually tell people running MySQL that they would probably be better off using a NoSQL key-value store, SQLite, or PostgreSQL, in that order. Most people using MySQL don't actually need an RDBMS. MySQL occupies this weird niche of being optimised for mostly reads, not a lot of concurrency and cosplaying as a proper database while being incompatible with SQL standards.
-
If you can, share your experience!
I also do finance, so if there is anything more to explore, I'm here to listen and learn.
Clickhouse has a unique performance gain when your workload isn't normalized operational data that gets updated often, but rather tables of time-series data being ingested write-only.
An example, stock prices or order books in real-time. Tens of thousands per second. Clickhouse can write, merge, aggregate records really nicely.
Then selects against ordered data with aggregates are lightning fast. It has lots of nuances to learn and has really powerful capability, but only for this type of use case.
It doesn’t have atomic transactions. Updates and deletes are very poor performing.
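A minimal sketch of the kind of table described above (names and columns are made up): ClickHouse tables declare a merge-tree engine and a sort key, and range scans plus aggregates over that key are its sweet spot.

```sql
-- ClickHouse: an append-only trades table, sorted by (symbol, ts).
CREATE TABLE trades (
    ts     DateTime64(3),
    symbol LowCardinality(String),
    price  Float64,
    qty    Float64
)
ENGINE = MergeTree
ORDER BY (symbol, ts);

-- Aggregates over the sort key are where ClickHouse shines.
SELECT symbol, avg(price) AS avg_price, sum(qty) AS volume
FROM trades
WHERE ts >= now() - INTERVAL 1 DAY
GROUP BY symbol;
```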
-
And you can add indexes on those JSON fields too!
Kind of. I hope you don't like performance...
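To be fair, the performance story depends a lot on which index you pick. A sketch, assuming a JSONB column named `payload` on a hypothetical `events` table:

```sql
-- A GIN index accelerates containment (@>) queries over the whole document,
-- but it is large and slows writes:
CREATE INDEX events_payload_gin ON events USING GIN (payload jsonb_path_ops);

-- If you only ever filter on one field, a plain B-tree expression index on
-- that extracted field is usually much cheaper:
CREATE INDEX events_status_idx ON events ((payload ->> 'status'));
```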
-
If you can, share your experience!
I also do finance, so if there is anything more to explore, I'm here to listen and learn.
For high ingestion (really high) you have to start sharding. It's nice to have a DB that can do that natively, MongoDB and Influx are very popular, depending on the exact application.
-
I used to agree, but recently tried out Clickhouse for high ingestion rate time series data in the financial sector and I’m super impressed by it. Postgres was struggling and we migrated.
This isn’t to say that it’s better overall by any means, but simply that I did actually find a better tool at a certain limit.
I've been using ClickHouse too and it's significantly faster than Postgres for certain analytical workloads. I benchmarked it and while Postgres took 47 seconds, ClickHouse finished within 700ms when performing a query on the OpenFoodFacts dataset (~9GB). Interestingly enough TimescaleDB (Postgres extension) took 6 seconds.
| | Insertion | Query speed |
| --- | --- | --- |
| ClickHouse | 23.65 MB/s | ≈650 ms |
| TimescaleDB | 12.79 MB/s | ≈6 s |
| Postgres | - | ≈47 s |
| SQLite | 45.77 MB/s¹ | ≈22 s |
| DuckDB | 8.27 MB/s¹ | crashed |

^All^ ^actions^ ^were^ ^performed^ ^through^ ^DataGrip.^
^1^ ^Insertion^ ^speed^ ^is^ ^influenced^ ^by^ ^reduced^ ^networking^ ^overhead^ ^due^ ^to^ ^the^ ^databases^ ^being^ ^in-process.^
Updates and deletes don't work as well and not being able to perform an upsert can be quite annoying. However, I found the ReplacingMergeTree and AggregatingMergeTree table engines to be good replacements so far.
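A sketch of the ReplacingMergeTree pattern mentioned above (table and column names are made up): instead of upserting, you insert every new version and let background merges keep only the latest row per sort key.

```sql
-- ClickHouse: ReplacingMergeTree keeps, per sort key, the row with the
-- highest value of the "version" column (updated_at) after merges.
CREATE TABLE quotes (
    symbol     LowCardinality(String),
    updated_at DateTime64(3),
    bid        Float64,
    ask        Float64
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY symbol;

-- Merges are asynchronous, so reads that need deduplicated data must ask
-- for it explicitly with FINAL (at some query-time cost):
SELECT * FROM quotes FINAL WHERE symbol = 'AAPL';
```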
Also there's [email protected]
-
pg can actually query into json fields!
Mysql can too, slow af tho.
-
I usually tell people running MySQL that they would probably be better off using a NoSQL key-value store, SQLite, or PostgreSQL, in that order. Most people using MySQL don't actually need an RDBMS. MySQL occupies this weird niche of being optimised for mostly reads, not a lot of concurrency and cosplaying as a proper database while being incompatible with SQL standards.
incompatible with SQL standards.
Wait... Wait a minute, is that Oracle's entrance music‽
-
Mysql can too, slow af tho.
oh i didn't know that. iirc postgres easily beats mongo in json performance which is a bit embarrassing.