Data Protection Idea Exchange

Block-based backup for filesystems

Brief description: reduce the size of backups by using block-based backups for filesystems.

For large filesystems, supporting efficient backups that back up only what has changed is a powerful feature.

How: identify the changed data and queue up only the changed blocks from the modified files to be sent to the Data Protector engine for backup.
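A minimal sketch of that change-detection step, assuming a fixed block size and a per-file digest manifest saved by the previous run (the 4 MiB block size, the manifest.json name, and the file path are illustrative assumptions, not Data Protector internals):

```python
# Sketch: detect which fixed-size blocks of a file changed since the
# last backup by comparing SHA-256 digests against a saved manifest.
# Block size, manifest format, and paths are illustrative assumptions.
import hashlib
import json
from pathlib import Path

BLOCK_SIZE = 4 * 1024 * 1024  # assumed 4 MiB backup granularity

def changed_blocks(path: Path, old_manifest: dict) -> tuple[dict, dict]:
    """Return (changed, new_manifest): only `changed` blocks would be
    queued for the backup engine; `new_manifest` is kept for next run."""
    changed, new_manifest = {}, {}
    with path.open("rb") as f:
        index = 0
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            new_manifest[str(index)] = digest
            if old_manifest.get(str(index)) != digest:
                changed[str(index)] = digest  # block is new or modified
            index += 1
    return changed, new_manifest

if __name__ == "__main__":
    target = Path("/data/bigfile.db")  # hypothetical large file
    try:
        old = json.loads(Path("manifest.json").read_text())
    except FileNotFoundError:
        old = {}  # first run: every block counts as changed
    delta, new = changed_blocks(target, old)
    print(f"{len(delta)} of {len(new)} blocks changed")
    Path("manifest.json").write_text(json.dumps(new))
```

Shipping only `delta` instead of whole modified files is what shrinks the backup for large, partially rewritten files such as databases or VM images.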

4 Comments
Moderator
Status changed to: Waiting for Votes

Waiting for votes

Micro Focus Expert
For large files, this is a win. But for a filesystem with many files, a per-file approach incurs a large file-access penalty and a large catalog penalty. In that scenario, a more suitable alternative would be to do volume-level or raw disk backups of used/changed blocks. Couple that with:
- a suitable backend device that can discard expired blocks to enable incremental forever (sketched below), and
- GRE from such backups
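One way to read the incremental-forever point: keep each restore point as a list of content-addressed blocks, so expiring a point lets the backend discard any block no surviving point references. A toy sketch (the BlockStore class and its methods are invented for illustration, not a real device API):

```python
# Hypothetical sketch of an incremental-forever block store: restore
# points reference content-addressed blocks; expiring a point lets
# unreferenced blocks be discarded. Not any real Data Protector API.
import hashlib

class BlockStore:
    def __init__(self):
        self.blocks = {}          # digest -> block bytes
        self.restore_points = {}  # name -> ordered list of digests

    def backup(self, name: str, volume_blocks: list[bytes]) -> None:
        """Record a restore point; only previously unseen blocks are stored."""
        digests = []
        for block in volume_blocks:
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)  # dedup: store each block once
            digests.append(d)
        self.restore_points[name] = digests

    def expire(self, name: str) -> None:
        """Drop a restore point and discard blocks no other point references."""
        self.restore_points.pop(name)
        live = {d for ds in self.restore_points.values() for d in ds}
        self.blocks = {d: b for d, b in self.blocks.items() if d in live}

    def restore(self, name: str) -> bytes:
        """Reassemble a full volume image from any single restore point."""
        return b"".join(self.blocks[d] for d in self.restore_points[name])
```

Because every restore point maps to a complete block set, any point can be restored as a full image without a periodic full backup; expiry plus garbage collection keeps the store bounded.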
Moderator
Status changed to: Accepted

This idea has been accepted into the roadmap. We will communicate when the idea is delivered in a release

Acclaimed Contributor

Hello,

If this is going to be implemented, it must support multi-streaming (see the sketch after this comment) and incremental backups; otherwise the solution will be no different from using the VSS Online Integration for volume-level backups on Windows, which also allows single-file restores. That integration works well for filesystems with millions of files, but has limited scalability on very large filesystems.

Regards,
Sebastian Koehler
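
On the multi-streaming requirement, a rough sketch of partitioning the changed-block queue across parallel streams (STREAMS and send_stream are hypothetical placeholders, not Data Protector job parameters):

```python
# Hypothetical sketch: multi-streaming the changed-block queue. Blocks
# are partitioned round-robin across N parallel streams so one slow
# stream does not serialize the whole backup.
from concurrent.futures import ThreadPoolExecutor

STREAMS = 4  # assumed per-job stream count

def send_stream(stream_id: int, blocks: list[bytes]) -> int:
    """Placeholder for shipping one stream's blocks to the backup engine."""
    return sum(len(b) for b in blocks)  # pretend: bytes sent

def multi_stream_backup(changed: list[bytes]) -> int:
    # Round-robin partition keeps streams roughly balanced by block count.
    partitions = [changed[i::STREAMS] for i in range(STREAMS)]
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        sent = pool.map(send_stream, range(STREAMS), partitions)
    return sum(sent)
```

A real implementation would balance streams by bytes rather than block count and handle per-stream retries.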