We continue to receive numerous, consistent reports from our clients of sub-optimal backup performance, reliability, and scalability with Backblaze B2 in our application. We have found that 90% of the Backblaze backup performance issues reported to us involved intermittent connection failures on high-inode workloads (1M inodes and higher). Most of these clients ultimately switched providers after we recommended trying another provider better suited to their backup workload and performance needs.
After carefully evaluating these reports, including our development team's ongoing investigations and comparisons with other S3 vendors, it has become increasingly evident that Backblaze B2 is not a good fit for our application's requirements. As a result, we believe it is in our clients' best interest for us to officially designate Backblaze B2 as an unsupported destination.
As an unsupported destination, we do NOT recommend using Backblaze B2 as a backup destination, and we will no longer provide technical support for issues that directly involve Backblaze B2.
To clarify, there are no issues with JetBackup's implementation, which follows Backblaze's guidelines (retries with exponential backoff); this behavior is expected given Backblaze's design. Backblaze states that it moved away from the load balancing and "well-known URLs" commonly used by S3 providers in favor of what it calls "Contract Architecture", which is inherently more prone to returning 500/503 errors, especially for high-inode workloads, since these require JetBackup to open more connections. Backblaze's proposed solution is to retry the upload against a different URL ("vault") whenever a 500/503 error occurs. You can find more information on Backblaze's Contract Architecture, including a comparison with a standard S3 vendor such as Amazon, here: https://www.backblaze.com/blog/design-thinking-b2-apis-the-hidden-costs-of-s3-compatibility/
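For illustration, the retry pattern described above (on a 500/503, fetch a fresh upload URL and back off exponentially before retrying) can be sketched as follows. This is a minimal, hypothetical model only, not JetBackup's actual engine; the `make_fake_server` helper simulates a vault that returns 503 twice before succeeding.

```python
import random
import time

RETRYABLE = {500, 503}  # status codes Backblaze says to retry on

def make_fake_server(failures: int):
    """Hypothetical stand-in for a vault: answers 503 `failures` times, then 200."""
    state = {"left": failures}
    def respond(url: str, chunk: bytes) -> int:
        if state["left"] > 0:
            state["left"] -= 1
            return 503
        return 200
    return respond

def upload_with_backoff(chunk, send, get_upload_url,
                        max_retries=10, base_delay=0.01):
    """Retry an upload with exponential backoff, requesting a new
    upload URL ("vault") after each 500/503, per Backblaze's guidance."""
    url = get_upload_url()
    for attempt in range(max_retries + 1):
        status = send(url, chunk)
        if status not in RETRYABLE:
            return status, attempt
        url = get_upload_url()  # switch to a different vault on 500/503
        # exponential backoff with a little jitter
        time.sleep(min(base_delay * 2 ** attempt, 1.0)
                   + random.random() * base_delay)
    return status, attempt

send = make_fake_server(failures=2)
status, attempts = upload_with_backoff(
    b"data", send, lambda: "https://vault.example/upload")
print(status, attempts)  # → 200 2 (succeeds on the third try)
```

Note how each failed attempt both waits longer and requests a new URL; this is why a high retry count can noticeably lengthen backup time on a flaky destination.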
To provide more insight into the S3 upload process in JetBackup 5: our software uses a new backup engine for S3-compatible destinations that opens 10 HTTP threads per fork (each thread uploading a single chunk of up to 5GB), for a maximum of 10 forks (Concurrent Backup Forks). We also provide an option to specify how many times JetBackup will retry uploading per HTTP thread, up to 10 times. Please note that increasing the number of retries may also increase your backup time accordingly, since JetBackup keeps retrying until it either receives a successful response from the S3 provider or reaches the configured retry limit. For large files, JetBackup also handles multipart uploads, automatically splitting a file into chunks of at most 5GB and uploading each chunk to your S3 destination.
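The multipart flow above can be sketched roughly as follows: split the data into capped-size chunks, then fan the chunks out across a pool of worker threads. This is an illustrative model under our own assumptions (tiny chunk size, a no-op upload function), not JetBackup's actual code; the real cap described above is 5GB per chunk with 10 HTTP threads per fork.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 5 * 1024 ** 3  # 5 GB cap per chunk, as described above

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield successive chunks of at most chunk_size bytes."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

def upload_all(chunks, upload):
    """Fan chunk uploads out across 10 workers, modeling the
    10-HTTP-threads-per-fork behavior described above."""
    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(upload, chunks))

# Demo with a tiny chunk size so it runs instantly; the "upload"
# here is a stand-in that just reports each chunk's size.
chunks = list(split_into_chunks(b"x" * 25, chunk_size=10))
results = upload_all(chunks, lambda c: len(c))
print(results)  # → [10, 10, 5]
```

Because each thread handles one chunk at a time, a single 500/503 only forces a retry of that chunk, not the whole file.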
Please note, if you continue to get partially completed or failed backups due to 500/503 errors after adjusting the number of retries, check the S3 log located at /usr/local/jetapps/var/log/jetbackup5/s3 for more information on the error you receive, and reach out to your S3 provider for further assistance.
You may also want to consider the region where your S3 provider is located; we generally recommend choosing a provider geographically closer to your server(s) to minimize possible network performance issues.
It may also be worth considering other S3 destination providers and taking advantage of a trial where available, as we have seen significantly fewer reports of S3 errors with providers such as Wasabi, Google Cloud Storage, and Amazon S3.