2013-04-28 07:06:37 |
Michele Simionato |
description |
The idea is to perform an expensive hazard calculation on Hope, zip the outputs and give them to a scientist, who can then run a fast risk calculation on his laptop. |
Add two scripts that can dump a hazard computation from a
database and restore it into another. The idea is that a heavy
hazard computation can be performed on a source db (the cluster) and
then copied to a target db (a scientist's laptop), where several
lightweight risk computations can be performed.
Here is the workflow.
1. Identify the hazard_calculation_id you want to copy
2. Dump the associated data to a .tar file with the command
python dump_hazards.py <hc_id> <output> <remotehost> <dbname> <user> <pwd>
3. Restore the data from the tarfile with the command
python restore_hazards.py <output.tar> localhost <dbname> <user> <pwd>
<output> is the name of a temporary directory where the files are
stored (it must have enough space and must not already exist).
<output.tar> is the name of the tarfile containing the output.
Internally the tarfile contains several .csv.gz files, one
for each table to restore.
In the present implementation the following tables are dumped:
admin.organization
admin.oq_user
uiapi.hazard_calculation
hzrdr.lt_realization
uiapi.oq_job
uiapi.output
hzrdr.gmf_collection
hzrdr.gmf_agg
hzrdr.hazard_curve
hzrdr.hazard_curve_data
hzrdr.gmf_scenario |
|
2013-04-28 07:12:37 |
Michele Simionato |
description |
Add two scripts that can dump a hazard computation from a
database and restore it into another. The idea is that a heavy
hazard computation can be performed on a source db (the cluster) and
then copied to a target db (a scientist's laptop), where several
lightweight risk computations can be performed.
Here is the workflow.
1. Identify the hazard_calculation_id you want to copy
2. Dump the associated data to a .tar file with the command
python dump_hazards.py <hc_id> <output> <remotehost> <dbname> <user> <pwd>
3. Restore the data from the tarfile with the command
python restore_hazards.py <output.tar> localhost <dbname> <user> <pwd>
<output> is the name of a temporary directory where the files are
stored (it must have enough space and must not already exist).
<output.tar> is the name of the tarfile containing the output.
Internally the tarfile contains several .csv.gz files, one
for each table to restore.
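
The dump side of that layout (one gzipped CSV per table, packed into a
tarball) can be sketched as follows. This is a minimal illustration using
only the Python standard library; `dump_tables_to_tar` and the in-memory
rows are hypothetical stand-ins for the real database queries that
``dump_hazards.py`` performs:

```python
import csv
import gzip
import io
import tarfile

def dump_tables_to_tar(tables, tar_path):
    """Pack each table as a .csv.gz member of a tarball.

    `tables` maps a table name (e.g. 'hzrdr.hazard_curve') to a list of
    row tuples; the real script fetches the rows from the source db instead.
    """
    with tarfile.open(tar_path, 'w') as tar:
        for name, rows in tables.items():
            # serialize the rows as CSV, then gzip the bytes in memory
            sio = io.StringIO()
            csv.writer(sio).writerows(rows)
            data = gzip.compress(sio.getvalue().encode('utf-8'))
            # add the compressed blob as a <table>.csv.gz tar member
            info = tarfile.TarInfo(name + '.csv.gz')
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# toy example with two fake tables
dump_tables_to_tar(
    {'admin.organization': [(1, 'GEM')],
     'uiapi.output': [(1, 'hazard_curve'), (2, 'gmf')]},
    'hazards.tar')
```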
The <user> must have sufficient permissions to write on <dbname>.
If your database already contains a hazard calculation
with the same id, the script will not overwrite it and will not restore
the new data. If you think that the hazard calculation in your database
is not important and can be removed together with all of its outputs,
remove it with ``bin/openquake --delete-hazard-calculation`` (which
must be run by a user with sufficient permissions) and then run
``restore_hazards.py`` again.
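
That id-collision guard can be sketched as follows. `safe_restore` and the
sqlite3 stand-in database are illustrative assumptions, not the actual code
of ``restore_hazards.py`` (which talks to PostgreSQL):

```python
import sqlite3

def safe_restore(conn, hc_id, do_restore):
    """Refuse to overwrite an existing hazard calculation.

    `do_restore` is a hypothetical callback performing the actual inserts;
    it runs only if `hc_id` is not already present in the target db.
    """
    n, = conn.execute(
        'SELECT COUNT(*) FROM hazard_calculation WHERE id = ?',
        (hc_id,)).fetchone()
    if n:
        # abort instead of overwriting; the user must delete it explicitly
        raise RuntimeError(
            'hazard_calculation %d already exists: remove it first with '
            'bin/openquake --delete-hazard-calculation' % hc_id)
    do_restore(conn, hc_id)

# stand-in target database with one pre-existing calculation
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE hazard_calculation (id INTEGER PRIMARY KEY)')
conn.execute('INSERT INTO hazard_calculation VALUES (42)')
```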
In the present implementation the following tables are dumped:
admin.organization
admin.oq_user
uiapi.hazard_calculation
hzrdr.lt_realization
uiapi.oq_job
uiapi.output
hzrdr.gmf_collection
hzrdr.gmf_agg
hzrdr.hazard_curve
hzrdr.hazard_curve_data
hzrdr.gmf_scenario |
|
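
The reading side of the same .csv.gz-per-table layout can be sketched as
follows; `read_tar` is an illustrative helper (the real
``restore_hazards.py`` additionally loads each member into the database),
and iterating the members in archive order matches the table ordering above:

```python
import csv
import gzip
import io
import tarfile

def read_tar(tar_path):
    """Yield (table_name, rows) for each .csv.gz member, in archive order."""
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            if member.name.endswith('.csv.gz'):
                # decompress the member and parse it back into CSV rows
                raw = gzip.decompress(tar.extractfile(member).read())
                rows = list(csv.reader(io.StringIO(raw.decode('utf-8'))))
                yield member.name[:-len('.csv.gz')], rows
```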