UCVM 14.3.0 User Guide
Contents
Overview
Released on March 31, 2014, UCVM 14.3.0 represents the third major release of the Unified Community Velocity Model (UCVM) framework. UCVM is a collection of software utilities that are designed to make querying velocity models, building meshes, and visualizing velocity models easier to do through a uniform software interface. UCVM has been used extensively to generate meshes and e-trees that are then used for 3D wave propagation simulations within California.
The full feature list is as follows:
- Seamlessly combine two or more models into a composite model for querying
- Optionally include a California statewide geotechnical layer and interpolate it with the crustal velocity models
- Extract a 3D mesh or CVM Etree (octree) of material properties from any combination of models
- Provide a standard California statewide elevation and Vs30 data map
- Provide miscellaneous tools for creating 2D etree maps and optimizing etrees
- Numerically smooth discontinuities at the interfaces between different regional models
- Add support for future velocity models with the extendable interface
- Add small-scale heterogeneities to meshes
- Visualize cross-sections, horizontal slices, Z1.0, Z2.5, and Vs30 maps
- Query for Z1.0 and Z2.5 values
Currently, we support CVM-S4, CVM-H 11.9.1, CVM-S4.26, CenCal 0.8.0, the Broadband Whittier Narrows 1D model, and the Hadley-Kanamori 1D model as part of our automated installation package. Other models, such as SCEC CVM-NCI, Magistrale WFCVM, Graves Cape Mendocino, Lin-Thurber Statewide, and Tape SoCal, are also supported; however, they require manual installation. We have also tested and include support for three major high-performance computing resources: NICS Kraken, TACC Stampede, and NCSA Blue Waters. UCVM should work on other HPC machines as well.
The API itself is written entirely in C. We will show how to query UCVM through both C and Fortran in this user guide. The visualization scripts are written in Python, and can be easily called from the command line as part of an automated workflow.
Finally, UCVM requires either Linux or OS X, GCC 4.3+, Python 2.5+, and an active internet connection to download the models. For parallel mesh or e-tree extraction, the MPI libraries are also required.
Four Linux systems are officially supported:
- Ubuntu 13.10
- Debian 7
- CentOS 6.5
- Red Hat Enterprise Linux 6.5
Other systems, while not officially supported, should still work.
If you are installing UCVM on OS X, you must have the OS X Developer Tools (Xcode and Command Line Tools) and you must also have gfortran installed.
Download
Platform | File | Download | Mirror
---|---|---|---
Linux | SCEC UCVM 14.3.0 Official Release (391 MB) | ucvm-14.3.0.tar.gz | N/A
Linux | SCEC UCVM 14.3.0 md5 checksum (< 1 KB) | ucvm-14.3.0.tar.gz.md5 | N/A
Installation Instructions
Easy Method
If you are installing UCVM on Linux or OS X and only need CVM-S4, CVM-H 11.9.1, CVM-S4.26, and/or CenCal, we strongly recommend following the easy installation method. Simply download UCVM 14.3.0 and run the following commands:
tar zxvf ucvm-14.3.0.tar.gz
cd ./UCVM
./ucvm_setup.py
You will then be asked a series of questions:
It looks like you are installing UCVM for the first time.
Where would you like UCVM to be installed?
(Default: /your/home/dir)>
Enter path or blank to use default:
Hit enter to use the default path or type your own. On high-performance computing resources you must change this path to be in the scratch or work directory so that UCVM can be seen by the compute nodes.
You will then be asked which models you'd like to download, such as:
Would you like to download and install CVM-S4?
Enter yes or no:
Enter "yes" (without quotation marks) to download the model, or "no" to skip it.
After answering a few of these questions, UCVM will begin downloading and building the models you have requested. When the process is complete, you will be shown how to verify that your installation works, along with any modifications that may be needed to your ~/.bash_profile or ~/.bashrc file.
Custom Method
Please see this page on how to install UCVM with models other than CVM-S4, CVM-H, CVM-S4.26, and CenCal.
Tutorial
A detailed tutorial on using UCVM can be viewed on our tutorial page. This tutorial covers common tasks such as querying for material properties, generating Z1.0 and Z2.5 maps, and so on.
Plotting Utility Command Reference
cross_section.py
This utility plots a cross-section given two latitude, longitude points, a depth, the CVM to plot, and a few other settings. The output of this command is either an image or a text file containing the data from the underlying velocity model. It can be run in interactive mode, simply as follows:
./cross_section.py
Alternatively it can be run with command line arguments:
-s, --starting: starting depth for cross-section (meters)
-e, --ending: ending depth for cross-section (meters)
-h, --horizontal: horizontal spacing for cross-section (meters)
-v, --vertical: vertical spacing for cross-section (meters)
-d, --datatype: either 'vs', 'vp', 'rho', or 'poisson', without quotation marks
-c, --cvm: select one of the installed CVMs
-a, --scale: color scale, either 's' for smooth or 'd' for discretized, without quotes
-o, --origin: origin latitude, longitude from which to start plot (e.g. 34,-118)
-f, --final: destination latitude, longitude to end plot (e.g. 35,-117)
-i, --image: save to image file, or
-t, --text: save to text file
Note that you need only one of the image and text flags. If you use the image flag, the output is saved as an image at the location you specify; if you use the text flag, the data is saved as text to the file you specify.
Example:
./cross_section.py -s 0 -e 10000 -h 100 -v 100 -d vs -c cvms -a d -o 34,-118 -f 35,-117 -i image.png
horizontal_slice.py
This utility plots a horizontal slice given two bounding latitude, longitude points, a depth, the CVM to plot, and a few other settings. The output of this command is either an image or a text file containing the data from the underlying velocity model. It can be run in interactive mode, simply as follows:
./horizontal_slice.py
Alternatively it can be run with command line arguments:
-b, --bottomleft: bottom-left latitude, longitude (e.g. 34,-118)
-u, --upperright: upper-right latitude, longitude (e.g. 35,-117)
-s, --spacing: grid spacing in degrees (typically 0.01)
-e, --depth: depth for horizontal slice in meters (e.g. 1000)
-d, --datatype: either 'vs', 'vp', 'rho', or 'poisson', without quotation marks
-c, --cvm: select one of the installed CVMs
-a, --scale: color scale, either 's' for smooth or 'd' for discretized, without quotes
-i, --image: save to image file, or
-t, --text: save to text file
Note that you need only one of the image and text flags. If you use the image flag, the output is saved as an image at the location you specify; if you use the text flag, the data is saved as text to the file you specify.
Example:
./horizontal_slice.py -b 34,-118 -u 35,-117 -s 0.01 -e 1000 -d vs -c cvms -a d -i image.png
vs30.py
This utility generates a Vs30 map (average Vs velocity over the top 30 meters) given two bounding latitude, longitude points, the CVM to plot, and a few other settings. The output of this command is either an image or a text file containing the data from the underlying velocity model. It can be run in interactive mode, simply as follows:
./vs30.py
Alternatively it can be run with command line arguments:
-b, --bottomleft: bottom-left latitude, longitude (e.g. 34,-118)
-u, --upperright: upper-right latitude, longitude (e.g. 35,-117)
-s, --spacing: grid spacing in degrees (typically 0.01)
-z, --interval: Z-interval, in meters, that the Vs30 value will be averaged over (typically 0.1)
-c, --cvm: select one of the installed CVMs
-a, --scale: color scale, either 's' for smooth or 'd' for discretized, without quotes
-i, --image: save to image file, or
-t, --text: save to text file
Note that you need only one of the image and text flags. If you use the image flag, the output is saved as an image at the location you specify; if you use the text flag, the data is saved as text to the file you specify.
Example:
./vs30.py -b 34,-118 -u 35,-117 -s 0.01 -z 0.1 -c cvms -a d -i image.png
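The Vs30 value itself is conventionally the time-averaged S-wave velocity of the top 30 meters, i.e. 30 m divided by the vertical travel time through those 30 meters. The sketch below illustrates that computation for a uniformly sampled profile; the function name and the sample values are invented for illustration and are not part of the vs30.py script.

```python
# Travel-time (harmonic) average of Vs over the top 30 m.
# The sample profile below is illustrative, not from a real model.
def vs30(vs_samples, dz):
    """vs_samples: Vs (m/s) at uniform dz (m) steps covering 30 m."""
    travel_time = sum(dz / vs for vs in vs_samples)  # seconds
    return 30.0 / travel_time

# Two 15 m layers: 300 m/s over 600 m/s (dz = 0.1 m => 300 samples).
profile = [300.0] * 150 + [600.0] * 150
print(round(vs30(profile, 0.1), 1))  # → 400.0
```

Note that the travel-time average weights slow layers more heavily than a simple arithmetic mean would, which is why a fine Z-interval (such as the typical 0.1 m) matters near the surface.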
z10.py
This utility generates a Z1.0 map (depth to Vs = 1km/s) given two bounding latitude, longitude points, the CVM to plot, and a few other settings. The output of this command is either an image or a text file containing the data from the underlying velocity model. It can be run in interactive mode, simply as follows:
./z10.py
Alternatively it can be run with command line arguments:
-b, --bottomleft: bottom-left latitude, longitude (e.g. 34,-118)
-u, --upperright: upper-right latitude, longitude (e.g. 35,-117)
-s, --spacing: grid spacing in degrees (typically 0.01)
-z, --interval: Z-interval, in meters, for Z1.0 to be queried (lower value means more precision)
-c, --cvm: select one of the installed CVMs
-i, --image: save to image file, or
-t, --text: save to text file
Note that you need only one of the image and text flags. If you use the image flag, the output is saved as an image at the location you specify; if you use the text flag, the data is saved as text to the file you specify.
Example:
./z10.py -b 34,-118 -u 35,-117 -s 0.01 -z 20 -c cvms -i image.png
z25.py
This utility behaves the same as the z10.py script, however it generates a Z2.5 (Vs = 2.5 km/s) map instead.
Single Core Command Reference
basin_query
The command basin_query allows you to retrieve the depth at which a Vs threshold is first crossed. By default, the threshold is 1000 m/s, but it can be changed with the -v flag.
Example usage:
./basin_query ../conf/ucvm.conf -m cvms -v 2500
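The search basin_query performs can be pictured as stepping down a velocity profile until the threshold is met. The sketch below is a toy illustration of that first-crossing scan; the function and the fake linear profile are invented here and are not UCVM code.

```python
# Toy sketch of the first-crossing search basin_query describes.
# The profile function below is made up for illustration.
def first_crossing(vs_at, threshold, z_interval, max_depth):
    """Return the first depth (m) where Vs >= threshold, or None."""
    z = 0.0
    while z <= max_depth:
        if vs_at(z) >= threshold:
            return z
        z += z_interval
    return None

# Fake profile: Vs grows linearly from 500 m/s at the surface.
vs_profile = lambda z: 500.0 + 0.5 * z
print(first_crossing(vs_profile, 1000.0, 20.0, 15000.0))  # → 1000.0
```

A smaller step size gives a more precise crossing depth at the cost of more queries, which is the trade-off behind the Z-interval flags in the utilities above.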
ecoalesce
The command ecoalesce helps compact an e-tree file that conforms either to CMU or SCEC standards. It does this by replacing eight adjacent octants with identical material properties at level N with a single octant containing the same material properties at level N-1. Usually, this command is run with ecompact as well.
Example usage:
./ecoalesce chino_hills.etree compacted_chino_hills.etree
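The coalescing rule can be illustrated with a toy octree: whenever all eight siblings at level N carry identical material properties, they collapse into one octant at level N-1. The sketch below uses a flat dict keyed by (level, x, y, z), which is an invented simplification; real e-trees store octants by Z-order keys.

```python
# Toy illustration of the coalescing idea: eight sibling octants at
# level N with identical payloads collapse into one octant at N-1.
# The flat-dict representation here is a simplification, not the
# actual e-tree data structure.
def coalesce(leaves):
    """leaves: {(level, x, y, z): payload} with integer coordinates."""
    changed = True
    while changed:
        changed = False
        for (level, x, y, z), payload in list(leaves.items()):
            if level == 0 or (x, y, z) != (x & ~1, y & ~1, z & ~1):
                continue  # only anchor on the (even, even, even) child
            siblings = [(level, x + dx, y + dy, z + dz)
                        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
            if all(leaves.get(s) == payload for s in siblings):
                for s in siblings:
                    del leaves[s]
                leaves[(level - 1, x // 2, y // 2, z // 2)] = payload
                changed = True
    return leaves

# Eight identical level-1 octants collapse into one level-0 octant.
tree = {(1, x, y, z): "rock" for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(coalesce(tree))  # → {(0, 0, 0, 0): 'rock'}
```

If even one sibling differs, the group stays at level N, which is why coalescing is most effective in regions of uniform material properties.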
ecompact
The command ecompact helps compact an e-tree file that conforms either to CMU or SCEC standards. It does this by removing empty space in the Etree data structure. Usually, this command is run with ecoalesce as well.
Example usage:
./ecompact chino_hills.etree compacted_chino_hills.etree
grd_query
The command grd_query queries data from a set of ArcGIS grid files in GridFloat format.
grd2etree
The command grd2etree extracts a SCEC-formatted Etree map from a set of DEM and Vs30 grid files in ArcGIS Gridfloat format.
Example usage:
./grd2etree -f ./grd2float_sample.conf
mesh-check
The command mesh-check does a basic quality assurance check of a mesh file. It checks to make sure that each record in the file is of the correct size. Furthermore, it checks to make sure that each value is not NaN, infinity, or negative.
Example usage:
./mesh-check new_mesh.mesh IJK-12
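The checks described above can be sketched in a few lines. The layout assumed below, three little-endian float32 values (Vp, Vs, rho) per record, is an assumption made for illustration; this guide only states that each IJK-12 record is 12 bytes.

```python
import math
import struct

# Sketch of the checks mesh-check describes, for an IJK-12 style file.
# Assumption: each 12-byte record holds three little-endian float32
# values (Vp, Vs, rho); only the record size is stated in this guide.
def mesh_records_ok(data):
    """Return True if every record parses and every value is sane."""
    record = struct.Struct("<3f")
    if len(data) % record.size != 0:
        return False  # file ends with a truncated record
    for offset in range(0, len(data), record.size):
        for value in record.unpack_from(data, offset):
            if math.isnan(value) or math.isinf(value) or value < 0:
                return False
    return True

print(mesh_records_ok(struct.pack("<3f", 5000.0, 3000.0, 2700.0)))  # → True
```

To check a real file, you would pass the contents of open("new_mesh.mesh", "rb").read() to this function.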
mesh-op
The command mesh-op subtracts one mesh from another and outputs the difference.
Example usage:
./mesh-op diff ./inmesh1 ./inmesh2 IJK-12 ./outmesh
mesh-strip-ijk
The command mesh-strip-ijk converts an IJK-20 or IJK-32 mesh to an IJK-12 formatted mesh.
Example usage:
./mesh-strip-ijk ijk20_mesh IJK-20 output_mesh
ssh_generate
This command generates a binary float file of heterogeneities. It is typically run as part of a two-step process: first, the heterogeneities are generated using ssh_generate; then they are added to an underlying mesh using ssh_merge.
The parameters for the heterogeneous medium are:
--d1: Sample distance
--hurst: Defines how rough the heterogeneities are
--l1: Correlation length in the vertical direction
--st23: Factor by which the heterogeneities are stretched
--seed: Random seed value
--n1: Distributions along the vertical axis
--n2: Distributions along the EW axis
--n3: Distributions along the NS axis
-m: File to save the heterogeneities to
To generate a 40km x 40km x 20km heterogeneous grid with 20m grid spacing, you would use d1 = 20, n1 = 1000, n2 = 2000, n3 = 2000.
Example usage:
./ssh_generate --hurst 0.1 --d1 20 --l1 1000 --st23 5 --seed 3 --n1 1000 --n2 2000 --n3 2000 -m ssh.out
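The sizing rule from the 40 km x 40 km x 20 km example above is simply samples per axis = extent / spacing. A small sketch of that arithmetic (the helper name is invented here):

```python
# Grid dimensions for ssh_generate: samples per axis = extent / spacing.
def ssh_dims(depth_m, ew_m, ns_m, spacing_m):
    """Return (n1, n2, n3): vertical, EW, and NS sample counts."""
    return (int(depth_m / spacing_m),
            int(ew_m / spacing_m),
            int(ns_m / spacing_m))

# The 40 km x 40 km x 20 km example from the text at 20 m spacing:
print(ssh_dims(20000, 40000, 40000, 20))  # → (1000, 2000, 2000)
```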
ssh_merge
Adds a small-scale heterogeneous medium to an already extracted mesh (extracted either via ucvm2mesh or ucvm2mesh-mpi). This adds the heterogeneities multiplied by some scaling factor, std, to the slowness of the underlying mesh velocities.
Example usage:
./ssh_merge --input ssh.out --output combined.mesh --mesh extracted.mesh --std 0.1 --x 2000 --y 2000 --z 1000
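The perturbation described above operates on slowness (the reciprocal of velocity) rather than on velocity directly. One plausible reading, treating each heterogeneity sample as a fractional perturbation of slowness scaled by std, is sketched below; the exact formula ssh_merge applies is not stated in this guide, so this is an assumption for illustration only.

```python
# Sketch of a slowness-domain perturbation, one plausible reading of
# "heterogeneities multiplied by std are added to the slowness".
# Treating the field value h as a fractional perturbation is an
# assumption; the exact formula used by ssh_merge is not stated here.
def perturb_velocity(v, h, std):
    """v: mesh velocity (m/s); h: heterogeneity sample; std: scale."""
    slowness = 1.0 / v
    slowness *= (1.0 + std * h)
    return 1.0 / slowness

# A +1 sample with std = 0.1 lowers a 3000 m/s velocity:
print(round(perturb_velocity(3000.0, 1.0, 0.1), 1))  # → 2727.3
```

Working in slowness keeps travel times, rather than velocities, linearly perturbed, which is typically the quantity of interest in wave propagation.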
ucvm_query
This is the command-line tool for querying CVMs. Any set of crustal and GTL velocity models may be selected and queried in order of preference. Points may be queried by (lon,lat,dep) or (lon,lat,elev) and the coordinate conversions for a particular model are handled transparently. A configuration file needs to be passed to ucvm_query. Typically, this would be /path/to/ucvm-14.3.0/conf/ucvm.conf.
Example usage:
./ucvm_query -f ../conf/ucvm.conf -m cvms < ../tests/cvms_input.txt
Using Geo Depth coordinates as default mode.
-118.0000 34.0000 0.000 280.896 390.000 cvms 696.491 213.000 1974.976 none 0.000 0.000 0.000 crust 696.491 213.000 1974.976
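The input to ucvm_query is a plain text list of points. The whitespace-separated lon/lat/depth layout assumed below is inferred from the echoed fields in the example output above; the helper function is an invented sketch for generating such a file.

```python
# Write a ucvm_query input file: one "lon lat depth" triple per line.
# The lon/lat/depth column order is inferred from the echoed fields
# in the example output; verify against your cvms_input.txt.
def write_profile(path, lon, lat, depths):
    with open(path, "w") as f:
        for z in depths:
            f.write("%.4f %.4f %.1f\n" % (lon, lat, z))

# A profile under lon -118, lat 34, from the surface down to 4 km:
write_profile("profile_input.txt", -118.0, 34.0, range(0, 5000, 1000))
```

The resulting file can then be piped in as above: ./ucvm_query -f ../conf/ucvm.conf -m cvms < profile_input.txt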
ucvm2etree
The command ucvm2etree builds an e-tree from the specifications in a given configuration file, config.
Note that this is the serial version of the command, meaning that it will only run on a single process. As such, building a large e-tree can be very slow. For large e-trees, we strongly recommend using ucvm2etree-extract-MPI, ucvm2etree-sort-MPI, and ucvm2etree-merge-MPI.
Example usage:
./ucvm2etree -f ./ucvm2etree_example.conf
ucvm2mesh
The command ucvm2mesh generates a mesh in either IJK-12, IJK-20, IJK-32, or SORD format. It does so by reading in the specifications in a given configuration file, config.
This mesh can then be used in forward 3D wave propagation simulation software such as AWP-ODC.
Example usage:
./ucvm2mesh -f ./ucvm2mesh_example.conf
MPI Command Reference
basin_query_mpi
Notice: This command is intended to be run as an MPI job (e.g. mpirun ./basin_query_mpi). Please do not attempt to run it as a regular process.
This utility creates a binary float file listing the depths at which Vs crosses the user-defined threshold (these are typically referred to as "Z1.0" for 1000m/s and "Z2.5" for 2500m/s). It works by partitioning the points across the available processors, querying each point in parallel, and then amalgamating the results into one file.
For example, suppose we wanted the Z1.0 map from -118, 34 to -117, 35. We would likely run this as:
mpirun -np 4 ./basin_query_mpi -b out.file -f ../conf/ucvm.conf -m cvms -i 20 -v 1000 -l 34,-118 -s 0.01 -x 101 -y 101
This command outputs the data to "out.file" using the "cvms" model defined in "../conf/ucvm.conf". It queries each point at 20 m increments along the Z-axis to find where the Vs value first equals or exceeds 1000 m/s. It starts at 34, -118 and steps by 0.01 degrees in both the latitude and longitude directions for 101 points along the x and y axes, respectively.
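Reading the resulting binary float file back can be sketched as follows. Little-endian float32 values in row-major order is an assumption here; verify the byte order and layout against your platform and job configuration.

```python
import struct

# Sketch of reading a basin_query_mpi output file: a binary file of
# nx * ny depth values. Little-endian float32 in row-major order is
# an assumption; check it against your platform and configuration.
def read_depth_map(path, nx, ny):
    with open(path, "rb") as f:
        raw = f.read()
    vals = struct.unpack("<%df" % (nx * ny), raw)
    return [list(vals[j * nx:(j + 1) * nx]) for j in range(ny)]
```

For the example job above, read_depth_map("out.file", 101, 101) would yield a 101 x 101 grid of Z1.0 depths.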
ucvm2etree-extract-MPI
Notice: This command is intended to be run as an MPI job (e.g. mpirun ./ucvm2etree-extract-MPI). Please do not attempt to run it as a regular process.
The command ucvm2etree-extract-MPI extracts components of an e-tree from the specifications in a given configuration file, config.
Specifically, it divides the etree region into C columns for extraction. This is an embarrassingly parallel operation. A dispatcher (rank 0) farms out each column to a worker in a pool of N cores for extraction. Each worker queries UCVM for the points in its column and writes a flat-file formatted etree. After program execution, there are N sub-etree files, each locally unsorted. The extractor must be run on 2^Y + 1 cores where Y>0 and (2^Y) < C. The output flat file format is a list of octants (24 byte addr, 16 byte key, 12 byte payload) in arbitrary Z-order.
Since the number of points in a column depends on the minimum Vs values within that column, some columns will have high octant counts and others very low octant counts. Sub-etrees that vary greatly in size are not optimal for the sorting operations that follow, so ucvm2etree-extract-MPI implements a simple octant balancing mechanism. When a worker has extracted more than X octants (by default, 16M octants), it reports to the dispatcher that it cannot extract any more columns and terminates. This strategy approximately balances the sub-etrees so that they can be loaded into memory by ucvm2etree-sort-MPI. For very large extractions, if the dispatcher reports that all workers are full while columns remain to be extracted, increase the job size by a factor of 2 until there is room for all the columns.
You would typically run this command, followed by ucvm2etree-sort-MPI and ucvm2etree-merge-MPI.
mpirun -np 769 ucvm2etree-extract-MPI -f ./ucvm2etree_example.conf
ucvm2etree-sort-MPI
Notice: This command is intended to be run as an MPI job (e.g. mpirun ./ucvm2etree-sort-MPI). Please do not attempt to run it as a regular process.
The command ucvm2etree-sort-MPI sorts the extracted components of an e-tree from the ucvm2etree-extract-MPI command. It does so by reading in the specifications in a given configuration file, config.
Specifically, it sorts the sub-etrees produced by ucvm2etree-extract-MPI so that each file is in local pre-order (Z-order). Again, this is an embarrassingly parallel operation. Each rank in the job reads in one of the sub-etrees produced by the previous program, sorts the octants in Z-order, and writes the sorted octants to a new sub-etree. The sorter must be run on 2^Y cores where Y>0. The worker pool must be large enough to allow each worker to load all the octants from its assigned file into memory. By default, this octant limit is 20M octants. If a rank reports that the size of the sub-etree exceeds memory capacity, the 20M buffer size constant may be increased if memory allows, or alternatively, ucvm2etree-extract-MPI may be rerun with a larger job size to reduce the number of octants per file.
You would typically run this command after ucvm2etree-extract-MPI and before ucvm2etree-merge-MPI.
mpirun -np 768 ucvm2etree-sort-MPI -f ./ucvm2etree_example.conf
ucvm2etree-merge-MPI
Notice: This command is intended to be run as an MPI job (e.g. mpirun ./ucvm2etree-merge-MPI). Please do not attempt to run it as a regular process.
The command ucvm2etree-merge-MPI merges the sorted components of an e-tree from the ucvm2etree-sort-MPI command. It does so by reading in the specifications in a given configuration file, config.
Specifically, it merges N locally sorted etrees in flat file format into a final, compacted etree. This is essentially a merge sort on the keys from the addresses read from the local files. The cores at the lowest level of the merge tree each read in octants from two flat files in pre-order, merge sort the two sets of addresses, then pass the locally sorted list of addresses to a parent node for additional merging. This proceeds until the addresses rise to rank 1, which holds a completely sorted list of etree addresses. Rank 0 takes this sorted list and performs a transactional append on the final etree.
The merger must be run on 2^N cores. The program reads input files in flat file format. It can output a merged etree in either etree format or flat file format, although, due to space considerations, it strips the output flat file format to a pre-order list of octants (16 byte key, 12 byte payload). The missing addr field is redundant and can be regenerated from the key field.
You would typically run this command after ucvm2etree-extract-MPI and ucvm2etree-sort-MPI.
Example usage:
mpirun -np 768 ucvm2etree-merge-MPI -f ./ucvm2etree_example.conf
ucvm2mesh-mpi
Notice: ucvm2mesh-mpi is meant to be run as an MPI job, not as a standalone executable.
The command ucvm2mesh-mpi generates a mesh in either IJK-12, IJK-20, IJK-32, or SORD format. Unlike its serial counterpart, ucvm2mesh, this command can use multiple cores to generate the mesh. It does so by reading in the specifications in a given configuration file.
This mesh can then be used in forward 3D wave propagation simulation software such as AWP-ODC.
mpirun -np 768 ucvm2mesh-mpi -f ./ucvm2mesh_example.conf
Troubleshooting
Failed Tests
At the end of the installation process, we strongly recommend that you run make check to test your UCVM installation. The vast majority of the time, the installation works correctly. However, if you see an error referring to allocating the model, such as:
Test: UCVM lib add model SCEC CVM-SI
Failed to allocate model buffer
Failed to initialize model
Failed to init model cvmsi with config '/home/maechlin/ucvm-14.3.0/model/cvms426/model/i26' and extconfig . Config keys cvmsi_modelpath and/or cvmsi_extmodelpath are likely undefined.
FAIL: Failed to enable model cvmsi
This means that your system does not have enough memory to load and run the model. CVM-S4.26 and CVM-H are the largest models, each requiring 1.5-2 GB of memory to use.
If you see an error similar to the following while running either the tests or the UCVM programs:
error while loading shared libraries: libsomelib.so: cannot open shared object file: No such file or directory
This indicates that UCVM was linked against one or more shared libraries and the dynamic library loader cannot find the actual .so library at run-time. The solution is to update your LD_LIBRARY_PATH to include the directory containing the library mentioned in the error message. For example, the following command adds a new search directory to LD_LIBRARY_PATH in a csh shell:
$ setenv LD_LIBRARY_PATH /home/USER/opt/somepkg/lib:${LD_LIBRARY_PATH}
By default, the ucvm_setup.py script uses static libraries. UCVM is only compiled dynamically if you pass ucvm_setup.py the "-d" option or if your system does not support static linking.
Proj.4 Error: major axis or radius = 0 or not given
On systems where home filesystems are not visible to compute nodes (such as NICS Kraken), you may encounter errors with Proj.4 when trying to run any component of UCVM on compute nodes. This is because Proj.4 relies on a file called proj_defs.dat, located in the ${MAKE_INSTALL_LOCATION}/share/proj directory. For example, if you had configured Proj.4 with ./configure --prefix=/not_viewable_to_compute_nodes/proj-4.7.0, Proj.4 would search for proj_defs.dat in /not_viewable_to_compute_nodes/proj-4.7.0/share/proj/proj_defs.dat. This causes UCVM to throw the error "Proj.4 Error: major axis or radius = 0 or not given" and your job will fail.
The only reliable solution is to make sure your --prefix directory is visible to the compute nodes and to run make install there. Documentation suggests that you can set the PROJ_LIB environment variable; however, this does not appear to work correctly without modifications to the Proj.4 source code.
API Examples
Please note that the examples shown below are basic examples of how the API works. More extensive documentation can be found in the UCVM 14.3.0 Developer Guide.
C
Calling UCVM from C code is relatively trivial. Minimally, you need the UCVM library as well as "ucvm.h".
Using UCVM involves three fundamental steps: 1) Initializing UCVM through the ucvm_init function, 2) adding appropriate models through the ucvm_add_model function, and 3) retrieving material properties through the ucvm_query function. These functions are shown in the example below.
Example.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <getopt.h>
#include <sys/time.h>
#include "ucvm.h"

int main(int argc, char **argv)
{
  int nn = 1;
  ucvm_point_t pnts;
  ucvm_data_t data;
  char cmb_label[UCVM_MAX_LABEL_LEN];

  printf("Init\n");
  if (ucvm_init("../conf/test/ucvm.conf") != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Init failed\n");
    return(1);
  }

  printf("Query Mode\n");
  if (ucvm_setparam(UCVM_PARAM_QUERY_MODE, UCVM_COORD_GEO_DEPTH) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Failed to set z mode\n");
    return(1);
  }

  printf("Add Crustal Model 1D\n");
  if (ucvm_add_model(UCVM_MODEL_1D) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Retrieval of 1D failed\n");
    return(1);
  }

  printf("Add GTL Model Ely\n");
  if (ucvm_add_model(UCVM_MODEL_ELYGTL) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Retrieval of Ely GTL failed\n");
    return(1);
  }

  /* Change GTL interpolation function from default (linear) to Ely interpolation */
  if (ucvm_assoc_ifunc(UCVM_MODEL_ELYGTL, UCVM_IFUNC_ELY) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Failed to associate interpolation function with Ely GTL\n");
    return(1);
  }

  /* Change interpolation z range from 0,0 to 0,350 */
  if (ucvm_setparam(UCVM_PARAM_IFUNC_ZRANGE, 0.0, 350.0) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Failed to set interpolation range\n");
    return(1);
  }

  printf("Create point\n");
  pnts.coord[0] = -118.0;
  pnts.coord[1] = 34.0;
  pnts.coord[2] = 2000.0;

  printf("Query Model\n");
  if (ucvm_query(nn, &pnts, &data) != UCVM_CODE_SUCCESS) {
    fprintf(stderr, "Query UCVM failed\n");
    return(1);
  }

  /* Get cmb data label */
  ucvm_ifunc_label(data.cmb.source, cmb_label, UCVM_MAX_LABEL_LEN);

  printf("Results:\n");
  printf("\tsource=%s, vp=%lf, vs=%lf, rho=%lf\n",
         cmb_label, data.cmb.vp, data.cmb.vs, data.cmb.rho);

  return(0);
}
Fortran
Calling UCVM from Fortran is a relatively trivial procedure. After you have installed UCVM as per this user guide, you must link against the UCVM library, the Proj.4 library, the e-tree library, and any velocity model libraries that you have compiled into UCVM. For CVM-H, please note that two libraries are required: vxapi and geo. Because the default convention for calling C programs from Fortran automatically appends an underscore to the end of the function name, you must turn that off via the -fno-underscoring flag. This makes the Fortran compiler look for foo() instead of foo_().
As an example, suppose we have a Fortran file, example.f, that calls UCVM. We have compiled UCVM with CVM-S and CVM-H. The code to compile example.f would be as follows:
gfortran example.f -o ./example -L/path/to/ucvm-14.3.0/lib -L./path/to/ucvm-14.3.0/model/cvms4/lib -L/path/to/ucvm-14.3.0/model/cvmh1191/lib -L/path/to/ucvm-14.3.0/lib/proj-4/lib -L/path/to/ucvm-14.3.0/lib/euclid3/libsrc -lucvm -lcvms -lvxapi -lgeo -lproj -letree -fno-underscoring
The basic structure of how to call UCVM within Fortran is outlined in the example below.
Example.f
      program example

c     UCVM Configuration Location
      CHARACTER(LEN=80) ucvmconf

c     Model Name
      CHARACTER(LEN=4) model

c     Number of points we're passing to ucvm_query
      INTEGER pts

c     The UCVM point data structure.
c     coord(1) is longitude
c     coord(2) is latitude
c     coord(3) is depth
      TYPE :: ucvm_point_t
          REAL*8 coord(3)
      END TYPE ucvm_point_t

c     Generic property structure
c     Source is where it comes from
c     vp is P-wave velocity in m/s
c     vs is S-wave velocity in m/s
c     rho is density in kg/m^3
      TYPE :: ucvm_prop_t
          INTEGER source
          REAL*8 vp
          REAL*8 vs
          REAL*8 rho
      END TYPE ucvm_prop_t

c     Returned data structure
      TYPE :: ucvm_data_t
          REAL*8 surf
          REAL*8 vs30
          REAL*8 depth
          INTEGER domain
          REAL*8 shift_cr
          REAL*8 shift_gtl
          type(ucvm_prop_t) crust
          type(ucvm_prop_t) gtl
          type(ucvm_prop_t) cmb
      END TYPE ucvm_data_t

c     For our example we'll query five points
      type(ucvm_point_t) point(5)

c     And we'll get back five sets of material properties
      type(ucvm_data_t) returnData(5)

c     Number of points is 5.
      pts = 5

c     We'll start at -118, 34 at 0 depth and go down by 1000m
c     each step
      do 10 i = 1, 5
          point(i)%coord(1) = -118
          point(i)%coord(2) = 34
          point(i)%coord(3) = (i - 1) * 1000
10    continue

c     Where is our configuration file?
      ucvmconf = "/home/scec-01/davidgil/ucvm.conf" // CHAR(0)

c     What model are we querying?
      model = "cvms"

c     Initialize UCVM
      call ucvm_init(ucvmconf)

c     Add the model to UCVM
      call ucvm_add_model(model)

c     Query the model. Note that the number of points is passed
c     by value, not reference.
      call ucvm_query(%VAL(pts), point, returnData)

      print *, model, " results for lon -118, lat 34"

c     Print out the results.
      do 20 i = 1, 5
          print *, "Depth ", (i - 1) * 1000
          print *, "Vs ", returnData(i)%crust%vs
          print *, "Vp ", returnData(i)%crust%vp
          print *, "Rho ", returnData(i)%crust%rho
20    continue

c     Close UCVM now that we've queried the points
      call ucvm_finalize()

      end
GCC Fortran 4.3+ is required for this example to work.
History of UCVM
- 14.3.0 (Mar 31, 2014): CVM-S4.26 model, Broadband 1D model used in the CyberShake 14.2 study, small-scale heterogeneity generation, easy visualization scripts to visualize velocity models, Z1.0 and Z2.5 query utilities, official support for multiple Linux distributions
- 13.9.0 (Sep 8, 2013): Improved installation, basin_query utility, OS X compliance
- 12.2.0 (Feb 9, 2012): Initial release
Acknowledgements and Contact Info
Support for the development and maintenance of the UCVM framework has been provided by the Southern California Earthquake Center (SCEC). SCEC is funded by NSF Cooperative Agreement EAR-0106924 and USGS Cooperative Agreement 02HQAG0008.
Contributions to this manual were made by: David Gill, Patrick Small, and Philip Maechling.
Please email software@scec.org for help on downloading and using UCVM, and for any suggestions for the delivery of the code or for this manual.
Please cite at least Small et al. (2011) if you use this software framework; other references may also be appropriate, depending on your purpose.
References
- Ely, G., T. H. Jordan, P. Small, P. J. Maechling (2010), A Vs30-derived Near-surface Seismic Velocity Model, Abstract S51A-1907, presented at 2010 Fall Meeting, AGU, San Francisco, Calif., 13-17 Dec.
- Graves, R. (1994), Rupture History and Strong Motion Modeling of the 1992 Cape Mendocino Earthquake, USGS External Grant Report
- Lin, G., C. H. Thurber, H. Zhang, E. Hauksson, P. Shearer, F. Waldhauser, T. M. Brocher, and J. Hardebeck (2010), A California statewide three-dimensional seismic velocity model from both absolute and differential times, Bull. Seism. Soc. Am., 100, in press.
- Small, P., P. Maechling, T. Jordan, G. Ely, and R. Taborda (2011), SCEC UCVM - Unified California Velocity Model, in 2011 Southern California Earthquake Center Annual Meeting, Proceedings and Abstracts, vol. TBD, p. TBD.
- Taborda R., López J., O'Hallaron D., Tu T. and Bielak J. (2007), A review of the current approach to CVM-Etrees, SCEC Annual Meeting, Palm Springs, CA, USA, September 8-12.
- Wald, D. J., and T. I. Allen (2007), Topographic slope as a proxy for seismic site conditions and amplification, Bull. Seism. Soc. Am., 97 (5), 1379-1395, doi:10.1785/0120060267.
- Wills, C. J., and K. B. Clahan (2006), Developing a map of geologically defined site-condition categories for California, Bull. Seism. Soc. Am., 96 (4A), 1483-1501, doi:10.1785/0120050179.
- Yong, A., Hough, S.E., Iwahashi, J., and A. Braverman (2012), A terrain-based site conditions map of California with implications for the contiguous United States, Bull. Seism. Soc. Am., Vol. 102, No. 1, pp. 114–128, February 2012, doi: 10.1785/0120100262.