Activity log for bug #1156998

Date Who What changed Old value New value Message
2013-03-19 05:45:06 Michele Simionato bug added bug
2013-03-20 14:54:20 Lars Butler openquake: status New Confirmed
2013-03-20 14:55:39 Lars Butler openquake: assignee Michele Simionato (michele-simionato)
2013-03-20 14:55:43 Lars Butler openquake: milestone 1.0.0
2013-04-09 12:59:14 Michele Simionato openquake: importance Undecided High
2013-04-16 09:45:16 Lars Butler openquake: status Confirmed In Progress
2013-04-28 07:06:37 Michele Simionato description
    Old value:
        The idea is to perform an expensive hazard calculation on Hope, zip the
        outputs and give them to a scientist, who can then run a fast risk
        calculation on his laptop.
    New value:
        Add two scripts which are able to dump a hazard computation from a
        database and to restore it into another. The idea is that a heavy
        hazard computation can be performed on a source db (the cluster) and
        then copied to a target db (a scientist's laptop), where several
        lightweight risk computations can be performed.

        Here is the workflow:
        1. Identify the hazard_calculation_id you want to copy.
        2. Dump the associated data to a .tar file with the command
               python dump_hazards.py <hc_id> <output> <remotehost> <dbname> <user> <pwd>
        3. Restore the data from the tarfile with the command
               python restore_hazards.py <output.tar> localhost <dbname> <user> <pwd>

        <output> is the name of a temporary directory where the files are
        stored (it must have enough space and must not already exist).
        <output.tar> is the name of the tarfile containing the output.
        Internally the tarfile contains several .csv.gz files, one for each
        table to restore. In the present implementation the following tables
        are dumped:
            admin.organization
            admin.oq_user
            uiapi.hazard_calculation
            hzrdr.lt_realization
            uiapi.oq_job
            uiapi.output
            hzrdr.gmf_collection
            hzrdr.gmf_agg
            hzrdr.hazard_curve
            hzrdr.hazard_curve_data
            hzrdr.gmf_scenario
2013-04-28 07:12:37 Michele Simionato description
    Old value:
        Add two scripts which are able to dump a hazard computation from a
        database and to restore it into another. The idea is that a heavy
        hazard computation can be performed on a source db (the cluster) and
        then copied to a target db (a scientist's laptop), where several
        lightweight risk computations can be performed.

        Here is the workflow:
        1. Identify the hazard_calculation_id you want to copy.
        2. Dump the associated data to a .tar file with the command
               python dump_hazards.py <hc_id> <output> <remotehost> <dbname> <user> <pwd>
        3. Restore the data from the tarfile with the command
               python restore_hazards.py <output.tar> localhost <dbname> <user> <pwd>

        <output> is the name of a temporary directory where the files are
        stored (it must have enough space and must not already exist).
        <output.tar> is the name of the tarfile containing the output.
        Internally the tarfile contains several .csv.gz files, one for each
        table to restore. In the present implementation the following tables
        are dumped:
            admin.organization
            admin.oq_user
            uiapi.hazard_calculation
            hzrdr.lt_realization
            uiapi.oq_job
            uiapi.output
            hzrdr.gmf_collection
            hzrdr.gmf_agg
            hzrdr.hazard_curve
            hzrdr.hazard_curve_data
            hzrdr.gmf_scenario
    New value:
        Add two scripts which are able to dump a hazard computation from a
        database and to restore it into another. The idea is that a heavy
        hazard computation can be performed on a source db (the cluster) and
        then copied to a target db (a scientist's laptop), where several
        lightweight risk computations can be performed.

        Here is the workflow:
        1. Identify the hazard_calculation_id you want to copy.
        2. Dump the associated data to a .tar file with the command
               python dump_hazards.py <hc_id> <output> <remotehost> <dbname> <user> <pwd>
        3. Restore the data from the tarfile with the command
               python restore_hazards.py <output.tar> localhost <dbname> <user> <pwd>

        <output> is the name of a temporary directory where the files are
        stored (it must have enough space and must not already exist).
        <output.tar> is the name of the tarfile containing the output.
        Internally the tarfile contains several .csv.gz files, one for each
        table to restore.

        The <user> must have sufficient permissions to write on <dbname>. If
        your database already contains a hazard calculation with the same id,
        the script will not overwrite it and will not restore the new data. If
        you think that the hazard calculation in your database is not
        important and can be removed together with all of its outputs, remove
        it with ``bin/openquake --delete-hazard-calculation`` (which must be
        run by a user with sufficient permissions), then run
        ``restore_hazards.py`` again.

        In the present implementation the following tables are dumped:
            admin.organization
            admin.oq_user
            uiapi.hazard_calculation
            hzrdr.lt_realization
            uiapi.oq_job
            uiapi.output
            hzrdr.gmf_collection
            hzrdr.gmf_agg
            hzrdr.hazard_curve
            hzrdr.hazard_curve_data
            hzrdr.gmf_scenario
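The dump/restore workflow described above can be sketched as a shell session. All concrete values below (calculation id, hostname, directory, credentials) are hypothetical placeholders, not values from this bug report, and the assumption that the tarfile is named after the dump directory is mine; it requires live source and target databases, so it is a fragment to adapt, not a script to run as-is.

```shell
#!/bin/sh
# Hypothetical values -- substitute your own before running.
HC_ID=123                      # hazard_calculation_id to copy
DUMP_DIR=/tmp/hc_dump          # temp dir: enough space, must not already exist
REMOTE=cluster.example.org     # host of the source database
DBNAME=openquake
DBUSER=oq_admin
DBPASS=secret

# 1. On the cluster side: dump the hazard calculation into a tarfile
#    of .csv.gz files, one per table.
python dump_hazards.py "$HC_ID" "$DUMP_DIR" "$REMOTE" "$DBNAME" "$DBUSER" "$DBPASS"

# 2. On the laptop: restore the tarfile into the local database.
#    $DBUSER must have write permission on $DBNAME. The tarfile name
#    ($DUMP_DIR.tar) is an assumption; use whatever the dump step produced.
python restore_hazards.py "$DUMP_DIR.tar" localhost "$DBNAME" "$DBUSER" "$DBPASS"

# If the local db already holds a calculation with the same id, the restore
# refuses to overwrite it; delete the old one first if it is disposable:
#     bin/openquake --delete-hazard-calculation $HC_ID
```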
2013-05-13 14:34:39 Michele Simionato openquake: status In Progress Fix Committed
2013-07-01 07:54:13 Lars Butler openquake: status Fix Committed Fix Released