2011-08-24 13:50:33 |
Lars Butler |
bug |
|
|
added bug |
2011-08-24 14:03:26 |
Lars Butler |
openquake: milestone |
|
0.4.3 |
|
2011-08-24 14:03:35 |
Lars Butler |
openquake: assignee |
|
Lars Butler (lars-butler) |
|
2011-08-24 14:03:38 |
Lars Butler |
openquake: status |
New |
Confirmed |
|
2011-08-24 14:03:41 |
Lars Butler |
openquake: status |
Confirmed |
In Progress |
|
2011-08-24 16:01:05 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters. |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0 |
|
2011-08-24 16:08:11 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case'). |
|
2011-08-24 16:12:12 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2) |
|
2011-08-24 16:20:39 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation and thus deserve their own section.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
Total number of curves = num_of_sites * num_of_logic_tree_samples * (num_of_quantile_levels or 1) * (2 if COMPUTE_MEAN_HAZARD_CURVE else 1) |
|
2011-08-24 16:21:35 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation and thus deserve their own section.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites * num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1) |
|
2011-08-24 16:22:03 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation and thus deserve their own section.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1) |
|
2011-08-24 16:25:56 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
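The formula above can be sketched as a small Python helper; the function and parameter names here are illustrative, not actual OpenQuake identifiers:

```python
def estimate_total_curves(num_sites, num_logic_tree_samples,
                          num_quantile_levels=0,
                          compute_mean_hazard_curve=False):
    """Estimate the number of hazard curves a Classical PSHA run produces.

    Direct translation of the formula above; when no quantile levels are
    configured, the quantile factor falls back to 1.
    """
    return (num_sites
            * num_logic_tree_samples
            * (num_quantile_levels or 1)   # quantile curves per site/sample
            * (2 if compute_mean_hazard_curve else 1))  # mean curves double it
```

For example, 10 sites and 5 logic tree samples with neither quantiles nor mean curves would yield an estimate of 50 curves.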
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any). |
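The map-from-curve interpolation described above can be sketched as follows; this is an illustrative linear interpolation only, not the OpenQuake implementation:

```python
def iml_for_poe(imls, poes, target_poe):
    """Interpolate the IML at a fixed PoE from one hazard curve.

    Assumes `poes` decreases monotonically as `imls` increases (the
    typical hazard curve shape).
    """
    for i in range(len(poes) - 1):
        hi, lo = poes[i], poes[i + 1]
        if lo <= target_poe <= hi:
            # linear interpolation between the two bracketing points
            frac = (hi - target_poe) / (hi - lo) if hi != lo else 0.0
            return imls[i] + frac * (imls[i + 1] - imls[i])
    raise ValueError("target PoE outside the curve's range")
```

Running this once per site for each configured PoE value produces one hazard map per PoE, which is why map cost scales with the site count but stays small relative to curve computation.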
|
2011-08-24 16:28:46 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any). |
|
2011-08-24 16:31:51 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any). |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any). |
|
2011-08-24 16:40:29 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any). |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any).
-----------------------------
Breakdown of calculation time
-----------------------------
Initialization time:
- Engine startup
- Processing job input
- Loading the KVS cache
Calculation time:
- Computation of hazard curves and maps
Results creation time:
- Serializing map and curve data to the specified output destination (DB or XML)
Total time = initialization time + calculation time + results creation time |
|
2011-08-24 16:49:56 |
Lars Butler |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
-----
Cases
-----
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any).
-----------------------------
Breakdown of calculation time
-----------------------------
Initialization time:
- Engine startup
- Processing job input
- Loading the KVS cache
Calculation time:
- Computation of hazard curves and maps
Results creation time:
- Serializing map and curve data to the specified output destination (DB or XML)
Total time = initialization time + calculation time + results creation time |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
--------------
Analysis Cases
--------------
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
These cases are simplified to assume that all sources are taken into account. Note, however, that one parameter (MAXIMUM_DISTANCE) determines whether a given source is considered for a given site. For example:
MAXIMUM_DISTANCE = 200.0  # kilometers
for site in sites:
    for source in sources:
        if distance(site, source) > MAXIMUM_DISTANCE:
            ignore the source
I suspect that this is meant to keep computations within realistic bounds. For instance, if you were to compute the seismic hazard of the entire USA, it would not make any sense for sites in California to consider sources in New York.
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any).
-----------------------------
Breakdown of calculation time
-----------------------------
Initialization time:
- Engine startup
- Processing job input
- Loading the KVS cache
Calculation time:
- Computation of hazard curves and maps
Results creation time:
- Serializing map and curve data to the specified output destination (DB or XML)
Total time = initialization time + calculation time + results creation time |
|
2011-08-30 08:08:24 |
Lars Butler |
openquake: importance |
Undecided |
Medium |
|
2011-09-01 16:00:58 |
Lars Butler |
openquake: assignee |
Lars Butler (lars-butler) |
|
|
2011-09-01 16:01:01 |
Lars Butler |
openquake: status |
In Progress |
Confirmed |
|
2011-09-06 16:44:59 |
John Tarter |
openquake: milestone |
0.4.3 |
0.4.4 |
|
2011-09-22 11:31:16 |
John Tarter |
openquake: assignee |
|
beatpanic (kpanic) |
|
2011-10-05 11:48:18 |
John Tarter |
openquake: milestone |
0.4.4 |
0.4.5 |
|
2011-10-25 13:01:26 |
John Tarter |
openquake: importance |
Medium |
High |
|
2011-10-25 13:03:08 |
beatpanic |
openquake: status |
Confirmed |
In Progress |
|
2011-10-25 13:15:06 |
beatpanic |
description |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES_HAZARD_MAPS
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
--------------
Analysis Cases
--------------
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
These cases are simplified to assume that all sources are taken into account. Note, however, that one parameter (MAXIMUM_DISTANCE) determines whether a given source is considered for a given site. For example:
MAXIMUM_DISTANCE = 200.0  # kilometers
for site in sites:
    for source in sources:
        if distance(site, source) > MAXIMUM_DISTANCE:
            ignore the source
I suspect that this is meant to keep computations within realistic bounds. For instance, if you were to compute the seismic hazard of the entire USA, it would not make any sense for sites in California to consider sources in New York.
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve given a fixed PoE value. When compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take this into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES_HAZARD_MAPS values defined (if any).
-----------------------------
Breakdown of calculation time
-----------------------------
Initialization time:
- Engine startup
- Processing job input
- Loading the KVS cache
Calculation time:
- Computation of hazard curves and maps
Results creation time:
- Serializing map and curve data to the specified output destination (DB or XML)
Total time = initialization time + calculation time + results creation time |
Devise an algorithm for estimating computation time based on Classical PSHA Hazard calculation parameters.
----------
Parameters
----------
The following parameters affect computation time (some more than others):
SITES or (REGION_VERTEX and REGION_GRID_SPACING)
INTENSITY_MEASURE_LEVELS
INCLUDE_AREA_SOURCES
TREAT_AREA_SOURCE_AS
AREA_SOURCE_DISCRETIZATION
INCLUDE_GRID_SOURCES
TREAT_GRID_SOURCE_AS
INCLUDE_FAULT_SOURCE
FAULT_RUPTURE_OFFSET
FAULT_SURFACE_DISCRETIZATION
RUPTURE_FLOATING_TYPE
INCLUDE_SUBDUCTION_FAULT_SOURCE
SUBDUCTION_RUPTURE_OFFSET
SUBDUCTION_SURFACE_DISCRETIZATION
SUBDUCTION_RUPTURE_FLOATING_TYPE
NUMBER_OF_LOGIC_TREE_SAMPLES
QUANTILE_LEVELS
COMPUTE_MEAN_HAZARD_CURVE
POES
For more details about _how_ each of these parameters affects the computation time, have a look at this table:
https://docs.google.com/spreadsheet/ccc?key=0AgmeiGIi49FLdEVaMEZ2S1VUOWUwanMzQW0zWDNkbFE&hl=en_US#gid=0
--------------
Analysis Cases
--------------
There are 3 cases which can be tested to analyze computation time.
Worst Case:
- Compute on a list of sites
- The list of sites is equal to all of the locations defined in the source model.
- In other words, we compute hazard for sites which are right on top of each source (1 site per source).
Reasonable Case 1:
- Compute on a rectangular region just large enough to contain all of the source sites in a source model.
- TODO: What should the grid spacing be?
Reasonable Case 2:
- Given the same region constraints defined in 'Reasonable case 1', pick a random list of sites.
- The number of sites chosen shall be equal to the number of sources (thus, equal to the number of sites in 'Worst case').
These cases are simplified to assume that all sources are taken into account. Note, however, that one parameter (MAXIMUM_DISTANCE) determines whether a given source is considered for a given site. For example:
MAXIMUM_DISTANCE = 200.0  # kilometers
for site in sites:
    for source in sources:
        if distance(site, source) > MAXIMUM_DISTANCE:
            ignore the source
I suspect that this is meant to keep computations within realistic bounds. For instance, if you were to compute the seismic hazard of the entire USA, it would not make any sense for sites in California to consider sources in New York.
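The distance cutoff above can be sketched concretely as a per-site source filter. This is only an illustration: the `haversine_km` helper, the `(lat, lon)` tuple representation of sites and sources, and the function names are assumptions, not OpenQuake's actual data structures.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (illustrative distance metric)."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def relevant_sources(site, sources, max_distance_km=200.0):
    """Keep only sources within MAXIMUM_DISTANCE of the given site."""
    return [s for s in sources
            if haversine_km(site[0], site[1], s[0], s[1]) <= max_distance_km]
```

With a 200 km cutoff, a source one degree of longitude away on the equator (about 111 km) is kept, while a source 45 degrees away is dropped.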
------------------
Test data required
------------------
- Source model containing at least 1 of each type of source (point, area, simple fault, complex fault)
- Source model logic tree
- 1 branch should be sufficient; what really matters is the total number of sources
- GMPE logic tree
- TODO: What role do GMPEs play in Classical hazard calculations?
- Configuration files for each case (Worst Case, Reasonable Case 1, Reasonable Case 2)
-------------
Hazard Curves
-------------
Hazard curves are the primary output of a hazard calculation.
There are 3 types of curves:
- Hazard curves
- Mean curves
- Quantile curves
One of the measures we may want to report (pre-calculation) is the estimated number of hazard curves which will be produced. (TODO: Figure out how to estimate hazard curve calculation _time_ based on the estimated curve _number_.) For Classical PSHA, the total number of curves can be estimated as follows:
total_curves = num_of_sites
* num_of_logic_tree_samples
* (num_of_quantile_levels or 1)
* (2 if COMPUTE_MEAN_HAZARD_CURVE==true else 1)
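The count above transcribes directly into a small helper (the parameter names are illustrative; the formula itself is the one given above):

```python
def estimate_total_curves(num_sites, num_logic_tree_samples,
                          num_quantile_levels, compute_mean_hazard_curve):
    """Estimated number of hazard curves, per the formula above."""
    return (num_sites
            * num_logic_tree_samples
            * (num_quantile_levels or 1)               # 1 if no quantiles requested
            * (2 if compute_mean_hazard_curve else 1)) # mean curves double the count

# 100 sites, 10 logic tree samples, 3 quantile levels, mean curve on:
print(estimate_total_curves(100, 10, 3, True))  # -> 6000
```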
-----------
Hazard Maps
-----------
Hazard maps are less computationally expensive to produce than hazard curves.
Maps are derived from curve data by interpolating an IML value from each curve at a fixed PoE value. Compared to hazard curve computation, the cost of interpolation is less significant, but we still need to take it into account (particularly for calculations over a large set of sites).
The number of hazard maps produced by OpenQuake is equal to the number of POES values defined (if any).
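A minimal sketch of that interpolation step, assuming plain linear interpolation between curve points (an engine may interpolate in log space instead; the function name is illustrative):

```python
def iml_for_poe(imls, poes, target_poe):
    """Linearly interpolate the IML at which a hazard curve crosses
    ``target_poe``.

    ``imls`` are increasing intensity measure levels; ``poes`` are the
    corresponding (decreasing) probabilities of exceedance.
    """
    # Walk the curve until the target PoE is bracketed by two points.
    for i in range(len(poes) - 1):
        hi, lo = poes[i], poes[i + 1]
        if lo <= target_poe <= hi:
            # Linear interpolation between the bracketing points.
            frac = (hi - target_poe) / (hi - lo)
            return imls[i] + frac * (imls[i + 1] - imls[i])
    raise ValueError("target PoE outside the range of the curve")

# Example: a toy 4-point hazard curve.
imls = [0.1, 0.2, 0.4, 0.8]
poes = [0.9, 0.5, 0.2, 0.05]
print(iml_for_poe(imls, poes, 0.2))  # -> 0.4
```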
-----------------------------
Breakdown of calculation time
-----------------------------
Initialization time:
- Engine startup
- Processing job input
- Loading the KVS cache
Calculation time:
- Computation of hazard curves and maps
Results creation time:
- Serializing map and curve data to the specified output destination (DB or XML)
Total time = initialization time + calculation time + results creation time |
|
2011-11-01 15:04:44 |
John Tarter |
openquake: milestone |
0.4.5 |
0.4.6 |
|
2011-12-09 16:05:48 |
John Tarter |
openquake: milestone |
0.4.6 |
0.5.0 |
|
2012-01-11 12:15:01 |
John Tarter |
openquake: milestone |
0.5.0 |
0.5.1 |
|
2012-01-11 13:47:14 |
John Tarter |
openquake: status |
In Progress |
New |
|
2012-01-11 13:47:21 |
John Tarter |
openquake: assignee |
beatpanic (kpanic) |
|
|
2012-01-20 16:01:04 |
John Tarter |
openquake: milestone |
0.5.1 |
0.6.0 |
|
2012-03-05 13:28:04 |
John Tarter |
openquake: milestone |
0.6.0 |
0.7.0 |
|
2013-03-11 13:37:27 |
Lars Butler |
openquake: status |
New |
Won't Fix |
|