You know, some say that AI will one day gain free will, but they have been overlooking ceph all this time! The average ceph installation seems to have way too much free will already, and has no problem automatically doing things that either block I/O or generate a lot of IOPS at seemingly the worst possible time.
noah@mastodon.despis..
replied 22 Jan 2025 02:08 +0000
in reply to: https://benjojo.co.uk/u/benjojo/h/g1911jJvdpyMRccrFJ
noah@mastodon.despis..
replied 22 Jan 2025 02:09 +0000
in reply to: https://mastodon.despise.computer/users/noah/statuses/113869567753145662
benjojo
replied 22 Jan 2025 12:07 +0000
in reply to: https://mastodon.despise.computer/users/noah/statuses/113869567753145662
@noah I do trust ceph not to lose my data. But at this point I view ceph as something that will sometimes (with no warning, or any good explanation) just go on strike for a while, to remind me who really stores the data
benjojo
replied 21 Jan 2025 21:23 +0000
in reply to: https://benjojo.co.uk/u/benjojo/h/g1911jJvdpyMRccrFJ
This post has been brought to you by: me learning what RGW bucket resharding is, when my 37-million-file bucket suddenly decided to do it, blocking writes on that bucket for what seemed like an hour and a half
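For anyone bitten by the same thing: on recent Ceph releases, RGW's dynamic resharding can be inspected and taken under manual control with `radosgw-admin`. A minimal sketch (the bucket name `bgp-dumps` and the shard count here are hypothetical; pick a shard count suited to your object count):

```shell
# Check whether any buckets are queued for, or undergoing, dynamic resharding
radosgw-admin reshard list
radosgw-admin reshard status --bucket=bgp-dumps

# See the current shard count and object count for the bucket
radosgw-admin bucket stats --bucket=bgp-dumps

# Reshard manually at a quiet time, instead of letting RGW pick the moment
radosgw-admin bucket reshard --bucket=bgp-dumps --num-shards=509

# Or disable dynamic resharding entirely via the RGW config option:
#   rgw_dynamic_resharding = false
```

Manual resharding still blocks writes to the bucket while it runs, but at least it happens on your schedule rather than ceph's.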
tim@wants.coffee
replied 22 Jan 2025 02:38 +0000
in reply to: https://benjojo.co.uk/u/benjojo/h/cf8cT6XLtCb8nYsqQl
@benjojo is this a production cluster? I can't for the life of me imagine what a 37 million file bucket would be used for personally!
benjojo
replied 22 Jan 2025 12:05 +0000
in reply to: https://wants.coffee/users/tim/statuses/113869686671360593
@tim This is the bucket that bgp.tools uses to store BGP message captures/dumps. There are 2,000+ sessions (not all are recorded); every 15 minutes a new capture file is made, and every 4 hours a dump is made. I've yet to delete any files since ~2019, so the file count multiplies out fast! I don't think 37M files in a bucket is unreasonable; I've worked at places with larger file counts and petabytes of data in S3, mostly full of SSTable backups.
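Those rates can be sanity-checked with some back-of-the-envelope arithmetic. This is my own estimate, not benjojo's: the per-session file rates and the roughly six-year accumulation window are taken from the post above, and the result is only the *implied* number of recorded sessions:

```python
# Rough growth model for the bucket, based on the rates described above.
# Assumptions: 96 15-minute captures plus 6 4-hour dumps per recorded
# session per day, accumulating for roughly six years (~2019 to early 2025).
files_per_session_per_day = 24 * 4 + 24 // 4   # = 102
days = 6 * 365                                  # ~2190 days
total_files = 37_000_000

# How many sessions would need to be recorded to reach 37M files?
implied_recorded_sessions = total_files / (files_per_session_per_day * days)
print(f"~{implied_recorded_sessions:.0f} recorded sessions")  # → ~166 recorded sessions
```

So of the 2,000+ sessions, only a small fraction being recorded is enough to pile up tens of millions of objects in a few years.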