Using the backup feature in Binero cloud, you can back up and restore volumes.

The system stores the backups in the object storage of your account in
:doc:`availability zone </storage/regions-and-availability-zones>` **europe-se-1b**
using the gp.archive :doc:`storage policy </storage/object-storage/storage-policy>`.

.. important::

   When backing up volumes from availability zone *europe-se-1b*, note that the backups currently
   will *also end up in the same availability zone* (and, in part, on the same storage).

   If you have data that is not also stored in zone europe-se-1a, we recommend using a different
   backup solution to secure your data against a potential storage outage in zone europe-se-1b.

Data integrity
--------------

The backup service in Binero cloud backs up :doc:`volumes </storage/persistent-block-storage/index>`.

When backing up a volume, the system copies the data bit-by-bit from the source to the
destination; copying large amounts of data takes time.

If the source is a volume, the system first takes a snapshot of the data. If you write data to
the volume while the snapshot is taken, or if you are not backing up a volume and write data
while the backup is running, the data might be corrupt before it is even copied.

We recommend not writing data while a backup is taken. Powering off your instance during a
backup makes it safer still, but since that is not always an option, you need to consider the
impact of corrupted data in a backup.

For a file server, the risk of corrupted files should be minimal, and in case of file damage or
data loss it is limited to that file.

When backing up a database, the entire database might rely on a few files, and should one of
them become corrupt, the database will not start again. In this scenario, the usual solution is
to first dump the database to a file on the filesystem. This file is then safe (as it is not
written to after the dump) and, in case of a restore, you can import the data back into the
database from the dump.

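As a sketch, a dump of a MySQL/MariaDB database could be taken like this before the backup runs
(the database credentials and dump path are assumptions to adapt to your setup):

```shell
# Dump all databases to a file on the filesystem before the backup job runs.
# --single-transaction gives a consistent dump of InnoDB tables without locking.
mysqldump --single-transaction --all-databases > /var/backups/db-dump.sql
```

The dump file then rides along in the volume backup and can be imported on restore.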
Shutting off your instance will make the backup entirely safe from the above issues.

.. tip::

   We strongly recommend doing a restore test of your backups. Since :doc:`restoring <restore-volume>`
   to a new volume and creating a new instance can run in parallel with your production workload,
   this is a good way to ensure data consistency in your backup.

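Assuming you use the OpenStack CLI against Binero cloud, a restore test could look like the
following sketch (the size, volume, backup, and server names are placeholders):

```shell
# Create a new, empty volume of the same size as the backed-up volume,
# then restore the backup into it - production is untouched.
openstack volume create --size 20 restored-test-volume
openstack volume backup restore my-backup restored-test-volume

# Attach the restored volume to a test instance and verify the data there.
openstack server add volume test-server restored-test-volume
```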
Setting up backup
-----------------

You can use the Binero cloud backup feature in two main ways, described below.

Manual backup
^^^^^^^^^^^^^

A manual backup is useful before doing a migration or upgrade, or when retiring an old system
while keeping a copy of the data.

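Besides the cloud portal, a one-off backup could be created with the OpenStack CLI along these
lines (a sketch; the volume and backup names are placeholders):

```shell
# Create a one-off backup of a volume. --force allows backing up a volume
# that is currently attached to an instance (see the data integrity notes above).
openstack volume backup create --force --name pre-migration-backup my-volume

# Follow the backup status (creating -> available).
openstack volume backup list
```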
More information in our :doc:`manual-backup` article.

Automated backup
^^^^^^^^^^^^^^^^

The platform is able to automatically create backups for you. While this can be set up via the
:doc:`platform automation tool </platform-automation/index>`, we strongly recommend using
our :doc:`service catalog </service-catalog/schedule-backup>` to enable automated backups.

The platform will not do incremental backups when using the built-in workflow to run backups.

More information in our :doc:`automatic-backup` article.

.. note::

   Before creating a backup, we strongly recommend that you also dump any database to disk. If
   the backup runs on a schedule, also dump your database on a schedule inside your instance,
   before the system takes the backup.

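A scheduled dump inside the instance could, as one example, be driven by cron and timed to finish
before the backup job starts (the schedule, backup time, database, and paths below are assumptions
to adapt):

```shell
# /etc/cron.d/db-dump (hypothetical): dump all databases at 01:30 every night,
# half an hour before a backup job assumed to run at 02:00.
30 1 * * * root mysqldump --single-transaction --all-databases > /var/backups/db-dump.sql
```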
.. toctree::
   :caption: Available services