
database / backend storage fragility safeguards #182

@mrpops2ko

Description

Is your feature request related to a problem? Please describe.
I have my storage (NAS) and the copyparty docker container on separate computers. If the indexing db encounters the storage being down (say a host critical failure / kernel panic or something similar) during a normal startup scan (i.e. when copyparty was just starting), it dumps the entire database.

This in turn means you have to rescan everything from scratch, which is the process I'd been doing for the past week until I ran into some kind of kernel failure / storage issue:

[screenshot: kernel failure / storage error during the rescan]

At which point it dropped the entire db again. I don't have the docker log of it doing so anymore, but if it happens again I'll capture it.

Describe the idea / solution you'd like
Some kind of chunking / incremental 'saves', and a flag which doesn't drop the db, or just some kind of awareness baked in for this kind of storage fragility. Maybe periodically save DB snapshots or deltas, making recovery from the last known-good state faster?
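
Roughly what I'm picturing for the snapshots, assuming the index is a plain sqlite file (the paths below are made up for illustration, not copyparty's actual config):

```python
import shutil
import sqlite3

DB_PATH = "/cfg/hist/up2k.db"        # hypothetical location of the index db
SNAP_PATH = DB_PATH + ".known-good"  # last known-good snapshot

def snapshot_db():
    """copy the live db into a known-good snapshot using sqlite's online
    backup api, so a later failure can roll back here instead of rescanning."""
    src = sqlite3.connect(DB_PATH)
    dst = sqlite3.connect(SNAP_PATH)
    with dst:
        src.backup(dst)  # consistent copy even while the db is being written
    dst.close()
    src.close()

def restore_last_known_good():
    """instead of dropping the db after a storage failure, restore the snapshot."""
    shutil.copyfile(SNAP_PATH, DB_PATH)
```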

Something like the DB supporting a "partial re-index", restoring only the directories that have changed since the last snapshot, based on timestamp checks and/or hashes?
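
For the partial re-index part, something like comparing directory mtimes against what the snapshot recorded, so only the changed subtrees get rescanned (the dirs table and columns here are invented for illustration):

```python
import os
import sqlite3

def dirty_dirs(snap_db, root):
    """yield directories whose mtime differs from the snapshot,
    i.e. the only places worth rescanning / rehashing."""
    db = sqlite3.connect(snap_db)
    # hypothetical schema: dirs(path TEXT PRIMARY KEY, mtime REAL)
    known = dict(db.execute("select path, mtime from dirs"))
    db.close()
    for dpath, _dirs, _files in os.walk(root):
        try:
            mtime = os.stat(dpath).st_mtime
        except OSError:
            continue  # storage flaking out again; skip rather than mark deleted
        if abs(known.get(dpath, -1.0) - mtime) > 1e-3:
            yield dpath
```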

Or it could behave something like this: upon detecting that the storage is gone (or that a significant number of files all disappeared at once, which could serve as a proxy for something more than regular file deletion), it waits until it sees the storage again with those files, and then scrutinises them more closely (i.e. with increased focus on hash checks).
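
The "a lot of files vanished at once" part could be a simple ratio check that gates any deletion from the db (the 50% threshold is just a guess):

```python
def looks_like_storage_failure(indexed_paths, missing_paths, threshold=0.5):
    """if more than `threshold` of the indexed files are suddenly gone,
    treat it as a dead mount rather than a real mass-deletion,
    and hold off on purging db rows until the storage comes back."""
    if not indexed_paths:
        return False
    return len(missing_paths) / len(indexed_paths) > threshold
```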

Or, instead of marking them as deleted, it could silently mark them as missing until the next [n] reruns of copyparty, at which point it can do a partial / in-depth rescan of them, or flag them as fully gone?
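
And the missing-not-deleted idea could just be a per-file counter: only after being absent for [n] consecutive scans does a file actually get purged, and if it reappears in the meantime it gets rehashed instead. Toy version, with made-up states and field names:

```python
MAX_MISSED_SCANS = 3  # [n] reruns before a missing file counts as truly deleted

def update_file_state(row, exists_on_disk):
    """row is a dict like {"state": "ok"|"missing"|"gone", "missed": int};
    returns the action the scanner should take for this file."""
    if exists_on_disk:
        came_back = row["state"] == "missing"
        row["state"], row["missed"] = "ok", 0
        # a file that was missing and reappeared deserves extra suspicion
        return "rehash" if came_back else "keep"
    row["missed"] += 1
    if row["missed"] >= MAX_MISSED_SCANS:
        row["state"] = "gone"
        return "purge"   # only now remove it from the index
    row["state"] = "missing"
    return "keep"        # keep the db row, just flag it as missing
```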

Describe any alternatives you've considered
Disabling the db entirely, but I don't think that's a good idea - the db makes directory traversal snappy and search performant, so I'd like to keep the db given the choice.

Maybe before initiating a rescan, at copyparty startup, or mid-scan, perform a storage health check? Try a simple test read from the root of the storage; if it's all gone, that is a good signifier that a storage failure has occurred rather than the admin having deleted everything.

If the test fails, postpone the scan and flag an error rather than acting on incomplete storage?
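
The health check itself could be as dumb as "can I list the volume root and does it contain anything", e.g.:

```python
import os

def storage_is_healthy(root):
    """quick sanity check before a rescan: the mountpoint must exist,
    be listable, and contain at least one entry (an empty root on a
    normally-populated volume is a strong hint the mount is gone)."""
    try:
        entries = os.listdir(root)
    except OSError:
        return False
    return len(entries) > 0
```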
