Good Morning
-
Oracle:
Only the best in B2B marketing for our shit software.
EDIT:
hah ok, round two, more directly playing on the actual company name:
Oracle:
We tell you what you think you want to hear.
I have to admit though, I've never admined the Oracle DB, but I did talk a lot with people who did.
I remember, over 10 years ago, bringing up transactional DDL, which I'd heard Oracle supports too, only to get a 5-minute lecture about how it's nowhere near that simple.
-
After having suffered with T-SQL at MSFT for a number of years... yep, Postgres is almost always the best for almost any enterprise setup, despite what most other corpos seem to think.
Usually their reasons for not using it boil down to:
We would rather pay exorbitant licensing fees of some kind, forever, than rework a few APIs.
Those few APIs already having a fully compatible rewrite, done by me, working in test, prior to that meeting.
Gotta love corpo logic.
Yes, had those issues as well, though lately not a big corp, but mid-sized company.
One manager just wanted MySQL. We had trouble getting the required performance out of MySQL, while Postgres had good numbers. I had the app fully ready, only to be told: no, you make it work in MySQL. So we dropped some 'useless stuff', deferring flushing to disk and such.
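For context, "deferring flushing to disk" in MySQL/InnoDB usually means settings along these lines (a sketch of the usual knobs; whether this is acceptable depends entirely on how much committed data you can afford to lose in a crash):

```ini
# my.cnf (InnoDB): flush the redo log to disk roughly once per second
# instead of at every commit; a crash can lose up to ~1s of transactions.
innodb_flush_log_at_trx_commit = 2
# Let the OS decide when to flush the binary log to disk.
sync_binlog = 0
```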
-
I have a colleague like that too, and then the other camp that loves MySQL.
Why do you like Postgres?
I usually tell people running MySQL that they would probably be better off using a NoSQL key-value store, SQLite, or PostgreSQL, in that order. Most people using MySQL don't actually need an RDBMS. MySQL occupies this weird niche of being optimised for mostly reads, not a lot of concurrency and cosplaying as a proper database while being incompatible with SQL standards.
-
If you can, share your experience!
I also do finance, so if there is anything more to explore, I'm here to listen and learn.
Clickhouse has a unique performance gain when your data isn't operational data that's normalized and updated often, but rather tables of time-series data being ingested write-only.
An example: stock prices or order books in real time. Tens of thousands of rows per second. Clickhouse can write, merge, and aggregate records really nicely.
Then selects against ordered data with aggregates are lightning fast. It has lots of nuances to learn and really powerful capability, but only for this type of use case.
It doesn't have atomic transactions. Updates and deletes perform very poorly.
-
And you can add indexes on those JSON fields too!
Kind of. I hope you don't like performance...
-
If you can, share your experience!
I also do finance, so if there is anything more to explore, I'm here to listen and learn.
For high ingestion (really high) you have to start sharding. It's nice to have a DB that can do that natively, MongoDB and Influx are very popular, depending on the exact application.
-
I used to agree, but recently tried out Clickhouse for high ingestion rate time series data in the financial sector and I’m super impressed by it. Postgres was struggling and we migrated.
This isn’t to say that it’s better overall by any means, but simply that I did actually find a better tool at a certain limit.
I've been using ClickHouse too and it's significantly faster than Postgres for certain analytical workloads. I benchmarked it: while Postgres took 47 seconds, ClickHouse finished within 700ms when performing a query on the OpenFoodFacts dataset (~9GB). Interestingly enough, TimescaleDB (a Postgres extension) took 6 seconds.
| Database | Insertion | Query speed |
| --- | --- | --- |
| Clickhouse | 23.65 MB/s | ≈650ms |
| TimescaleDB | 12.79 MB/s | ≈6s |
| Postgres | - | ≈47s |
| SQLite | 45.77 MB/s¹ | ≈22s |
| DuckDB | 8.27 MB/s¹ | crashed |

All actions were performed through Datagrip.
¹ Insertion speed is influenced by reduced networking overhead due to the databases being in-process.
Updates and deletes don't work as well and not being able to perform an upsert can be quite annoying. However, I found the ReplacingMergeTree and AggregatingMergeTree table engines to be good replacements so far.
Also there's [email protected]
-
pg can actually query into json fields!
Mysql can too, slow af tho.
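For anyone who hasn't seen it: in Postgres you query into jsonb with the `->`/`->>` operators (e.g. `SELECT data->>'name' FROM docs WHERE data->>'city' = 'Berlin'`). As a runnable sketch of the same idea, here it is with SQLite's json1 functions via Python's stdlib `sqlite3` (table and field names are made up for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany(
    "INSERT INTO docs (data) VALUES (?)",
    [
        (json.dumps({"name": "alice", "city": "Berlin"}),),
        (json.dumps({"name": "bob", "city": "Paris"}),),
    ],
)

# Query into the JSON document; Postgres would use data->>'city' instead.
names = conn.execute(
    "SELECT json_extract(data, '$.name') FROM docs "
    "WHERE json_extract(data, '$.city') = 'Berlin'"
).fetchall()
print(names)  # [('alice',)]
```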
-
I usually tell people running MySQL that they would probably be better off using a NoSQL key-value store, SQLite, or PostgreSQL, in that order. Most people using MySQL don't actually need an RDBMS. MySQL occupies this weird niche of being optimised for mostly reads, not a lot of concurrency and cosplaying as a proper database while being incompatible with SQL standards.
incompatible with SQL standards.
Wait... Wait a minute, is that Oracle's entrance music‽
-
Mysql can too, slow af tho.
oh i didn't know that. iirc postgres easily beats mongo in json performance which is a bit embarrassing.
-
oh i didn't know that. iirc postgres easily beats mongo in json performance which is a bit embarrassing.
Holy, never knew, and never would expect. Postgres truly is king.
-
Or portable like on a USB stick that you can put in any computer instead of installed on a single system.
Either way, the funny thing is that Postgres can do both too. You may not want to use it for those, but you absolutely can.
-
Kind of. I hope you don't like performance...
The performance is actually not bad. You're far better off using conventional columns but in the one off cases where you have to store queryable JSON data, it actually performs quite well.
-
The performance is actually not bad. You're far better off using conventional columns but in the one off cases where you have to store queryable JSON data, it actually performs quite well.
Quite well is very subjective. It's much slower than columns or specialized databases like MongoDB.
-
It isn't pronounceable as a word; it's an initialism, because the letters that comprise it don't allow it to be pronounced as a word. Unlike something like NASA, which is a full-blown acronym because it can be pronounced.
Do you say hetips for HTTPS?
The sequel thing didn't even start naturally; it picked up the "sequel" moniker because of some ancient trademark beef in the 70s, back when it was named "Sequel", between the original devs and some company (that isn't even in business anymore).
They renamed it SQL, and out of protest against that company people continued to call it "sequel" even though it makes no sense, and 50 damn years later here we are. Everybody with direct involvement is probably dead or longggg since retired. It wasn't termed that because it was easier to say, and it sure as hell wasn't termed that because it's proper.
If it had originally been called SQL and the above had never happened, I guarantee it would just be another DNS or HTTP, and many, many pointless debates about it would never have happened.
Disclaimer, this doesn't apply to the MS product that is called sequel
Do you say hetips for HTTPS?
No but now I want to start (though I'd go hittips instead, and its insecure alternative, hittip). HTTPS has always been a mouthful lol
-
Kind of. I hope you don't like performance...
Sure, if you use a field often, it's most likely better to extract it into a generated column that auto-updates from the JSON data.
But you have to tune it and see what is best for your use case. Just saying that you can add indexes to JSON fields as well!
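To make the index-on-a-JSON-field idea concrete: in Postgres you'd typically create an expression index like `CREATE INDEX ON docs ((data->>'name'))` (or a GIN index on the whole jsonb column). Here's a runnable sketch of the same pattern using SQLite's expression indexes and json1, via Python's stdlib `sqlite3` (names are made up); the query plan shows the index actually being used:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany(
    "INSERT INTO docs (data) VALUES (?)",
    [(json.dumps({"name": f"user{i}", "score": i}),) for i in range(1000)],
)

# Expression index on one JSON field (Postgres: ((data->>'name'))).
conn.execute("CREATE INDEX idx_name ON docs (json_extract(data, '$.name'))")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM docs WHERE json_extract(data, '$.name') = 'user42'"
).fetchall()
print(plan[-1][-1])  # plan detail mentions idx_name, i.e. an index search, not a full scan
```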
-
This is literally me at every possible discussion regarding any other RDBMS.
My coworkers joked that I got paid for promoting Postgres.
Then we switched from Percona to Patroni and everyone agreed that... fuck yes, PostgreSQL is the best.
Sure, once you make the move it’s great. It’s just that it takes time and resources to actually make the move
-
Sure, once you make the move it’s great. It’s just that it takes time and resources to actually make the move
I mean, with mysql_fdw I migrated the data quickly, and apart from rewriting manual ON DUPLICATE KEY UPDATE queries (or the rare FORCE INDEX), it works the same.
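The upsert rewrite in question: MySQL's `INSERT ... ON DUPLICATE KEY UPDATE` maps to `INSERT ... ON CONFLICT (...) DO UPDATE` in Postgres. SQLite happens to share Postgres's syntax here (since 3.24), so a runnable sketch via Python's stdlib `sqlite3` (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

# Postgres-style upsert; MySQL would say ON DUPLICATE KEY UPDATE hits = hits + 1.
upsert = (
    "INSERT INTO counters (name, hits) VALUES (?, 1) "
    "ON CONFLICT (name) DO UPDATE SET hits = hits + 1"
)
conn.execute(upsert, ("page_a",))  # inserts (page_a, 1)
conn.execute(upsert, ("page_a",))  # conflicts, updates to 2
hits = conn.execute("SELECT hits FROM counters WHERE name = 'page_a'").fetchone()
print(hits)  # (2,)
```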