Data Protection Idea Exchange

Block-based backup for filesystems

Brief description: reduce the size of backups by using block-based backups for filesystems.

For large filesystems, supporting efficient backups where we only back up what has changed is a powerful feature.

How: identify the changed data and queue up only the changed blocks from the modified files to be sent to the Data Protector engine for backup.
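To make the idea concrete, here is a minimal sketch of the change-detection step in Python. It is not Data Protector code; the block size, function names, and the use of per-block SHA-256 digests compared against an index saved at the previous backup are all illustrative assumptions.

```python
import hashlib

# Illustrative block size; a real product would likely use larger blocks
# (e.g. several MiB) and a more compact on-disk index format.
BLOCK_SIZE = 4096

def block_digests(path):
    """Yield (block_index, sha256_hex) for each fixed-size block of a file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield index, hashlib.sha256(block).hexdigest()
            index += 1

def changed_blocks(path, previous_index):
    """Compare current block digests against the digests recorded at the
    last backup; return the block indices to send plus the new index.

    previous_index maps block_index -> digest from the prior backup
    (empty dict means a full backup: every block is "changed")."""
    changed = []
    current = {}
    for idx, digest in block_digests(path):
        current[idx] = digest
        if previous_index.get(idx) != digest:
            changed.append(idx)
    return changed, current
```

Only the indices returned by `changed_blocks` would be queued to the backup engine; the fresh index is persisted for the next incremental run.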

5 Comments
 Super Contributor...
Status changed to: Waiting for Votes

Waiting for votes

Outstanding Contributor..
For large files, this is a win. But for many small files, a per-file approach incurs a large file-access penalty and a large catalog penalty. In that scenario, a more suitable alternative would be volume-level or raw-disk backups of used/changed blocks, coupled with:
- a suitable backend device that can discard expired blocks to enable incremental forever, and
- GRE from such backups.
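The volume-level approach described above typically rests on changed-block tracking: a bitmap with one bit per block is marked dirty on every write, so an incremental backup reads only the dirty blocks and then clears the bitmap. The sketch below illustrates that bookkeeping in Python; the class, its method names, and the explicit `record_write` call (standing in for a driver-level write hook) are all assumptions for illustration.

```python
class ChangedBlockTracker:
    """Track dirty blocks of a volume between backups (illustrative sketch).

    A real implementation would intercept writes in the storage stack;
    here writes are reported explicitly via record_write()."""

    def __init__(self, volume_size, block_size=64 * 1024):
        self.block_size = block_size
        num_blocks = (volume_size + block_size - 1) // block_size
        # One bit per block, packed eight blocks to a byte.
        self.dirty = bytearray((num_blocks + 7) // 8)

    def record_write(self, offset, length):
        """Mark every block touched by a write of `length` bytes at `offset`."""
        first = offset // self.block_size
        last = (offset + length - 1) // self.block_size
        for b in range(first, last + 1):
            self.dirty[b // 8] |= 1 << (b % 8)

    def dirty_blocks(self):
        """Return the indices of blocks changed since the last clear()."""
        return [b for b in range(len(self.dirty) * 8)
                if self.dirty[b // 8] & (1 << (b % 8))]

    def clear(self):
        """Reset the bitmap after a successful incremental backup."""
        for i in range(len(self.dirty)):
            self.dirty[i] = 0
```

Because each incremental only ever reads dirty blocks, pairing this with a backend that can expire old blocks gives the "incremental forever" scheme mentioned above: no periodic full backup is needed, only a synthetic full assembled from retained blocks.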
Senior Member...
Status changed to: Accepted

This idea has been accepted into the roadmap. We will communicate when the idea is delivered in a release

Acclaimed Contributor..

Hello,

If this is going to be implemented, it must support multi-streaming and incremental backups; otherwise the solution will be no different from using the VSS Online Integration for volume-level backups on Windows, which also allows single-file restores. That works well for file systems with millions of files, but has limited scale on very large file systems.

Regards,
Sebastian Koehler

Acclaimed Contributor..

Hello Sheetal,

Can you please comment on the customer request: will this change also include optimized backup for Windows Deduplication on Windows Server 2012, 2012 R2 and 2016? See https://msdn.microsoft.com/en-us/library/hh769304(v=vs.85).aspx for details.

Regards,
Sebastian Koehler