Commit 8ad1258b authored and committed by Patrick Godwin

docs: address issues with offline cbc analysis instructions - order of operations, typos, etc

parent 3e139878

Merge request !131: Address issues with offline analysis guide
1. Build Singularity image (optional)
""""""""""""""""""""""""""""""""""""""

NOTE: If you are using a reference Singularity container (suitable in most
cases), you can skip this step. The ``<image>`` throughout this doc refers to
the ``singularity-image`` specified in the ``condor`` section of your configuration.

If not using the reference Singularity container, say for local development, you
can specify a path to a local container and use that for the workflow (non-OSG).

To pull a container with gstlal installed, run:
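
A minimal sketch of such a pull, where the registry path and the output filename
are placeholders rather than the documented values:

.. code:: bash

    $ singularity pull gstlal.sif docker://<registry>/lscsoft/gstlal:master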

First, we create a new analysis directory and switch to it:

.. code:: bash

    $ mkdir <analysis-dir>
    $ cd <analysis-dir>

Default configuration files and data files (template bank/mass model) for a
variety of different banks are contained in the
`offline-configuration <https://git.ligo.org/gstlal/offline-configuration>`_
repository.

For example, to grab the configuration and data files for the BNS test bank:

.. code:: bash

    $ curl -O https://git.ligo.org/gstlal/offline-configuration/-/raw/main/bns-small/mass_model/mass_model_small.h5
    $ curl -O https://git.ligo.org/gstlal/offline-configuration/-/raw/main/bns-small/bank/gstlal_bank_small.xml.gz

Alternatively, one can clone the repository and copy files as needed into the
analysis directory.

Now, we'll need to modify the configuration as needed to run the analysis. At
the very least, set the start/end times and the instruments to run over:

.. code-block:: yaml

    instruments: H1L1
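    # hypothetical sketch of the time-range keys referenced above; these key
    # names and GPS times are assumptions, not values from the original doc
    start: 1187000000
    stop: 1187100000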

We also require template bank(s) and a mass model. Ensure these point to the
right locations in the configuration:

.. code-block:: yaml

    prior:
        mass-model: mass_model_small.h5
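    # hypothetical sketch of the template bank entry referenced above; the
    # section and key names are assumptions, not values from the original doc
    data:
        template-bank: gstlal_bank_small.xml.gz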

If you're creating a summary page for results, you'll need to point at a
location where they are web-viewable:

.. code-block:: yaml

    summary:
        webdir: /path/to/summary

If you're running on LIGO compute resources and your username doesn't match your
albert.einstein username, you'll also need to specify the accounting group user
for condor to track accounting information:

.. code-block:: yaml
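
    # hypothetical sketch; the accounting-group-user key name is an assumption,
    # not taken from the original doc
    condor:
        accounting-group-user: albert.einstein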

In addition, update the ``singularity-image`` in the ``condor`` section of your
configuration:

.. code-block:: yaml

    condor:
        singularity-image: /cvmfs/singularity.opensciencegrid.org/lscsoft/gstlal:master

If not using the reference Singularity image, you can replace this line with the
full path to a local container.
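
For example, a hypothetical local build might be referenced as:

.. code-block:: yaml

    condor:
        singularity-image: /path/to/local/gstlal-container.sif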

For more detailed configuration options, take a look at the :ref:`configuration
section <analysis-configuration>` below.

If you haven't installed site-specific profiles yet (per-user), you can run:

.. code:: bash

    $ singularity exec <image> gstlal_grid_profile install

which will install configurations that are site-specific, i.e. ``ldas`` and ``icds``.
You can select which profile to use in the ``condor`` section:

.. code-block:: yaml

    condor:
        profile: ldas

To view which profiles are available, you can run:

.. code:: bash

    $ singularity exec <image> gstlal_grid_profile list

Note, you can install :ref:`custom profiles <install-custom-profiles>` as well.

Once you have the configuration, data products, and grid profiles installed, you
can set up the Makefile using the configuration, which we'll then use for
everything else, including the data file needed for the workflow, the workflow
itself, the summary page, etc.

.. code:: bash

    $ singularity exec <image> gstlal_inspiral_workflow init -c config.yml

By default, this will generate the full workflow. If you want to only run the
filtering step, a rerank, or an injection-only workflow, you can instead specify
the workflow as well, e.g.

.. code:: bash

    $ singularity exec <image> gstlal_inspiral_workflow init -c config.yml -w injection

for an injection-only workflow.

If you already have a Makefile and need to update it based on an updated
configuration, run ``gstlal_inspiral_workflow`` with ``--force``.
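
For instance, a sketch reusing the ``init`` command from above (assuming
``--force`` is passed to the same invocation) to regenerate the Makefile in place:

.. code:: bash

    $ singularity exec <image> gstlal_inspiral_workflow init -c config.yml --force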

Next, if you are accessing non-public (i.e. non-GWOSC) data, you'll need to set
up your proxy to ensure you can get access to LIGO data:

.. code:: bash

    $ X509_USER_PROXY=/path/to/x509_proxy ligo-proxy-init -p albert.einstein

Note that we are running this step outside of Singularity. This is because
``ligo-proxy-init`` is not installed within the image currently.

Also update the configuration accordingly (if needed):

.. code-block:: yaml

    source:
        x509-proxy: /path/to/x509_proxy

Finally, set up the rest of the workflow including the DAG for submission:

.. code:: bash

    $ make launch

This is simply a thin wrapper around ``condor_submit_dag``, launching the DAG in
question. You can monitor the DAG with Condor CLI tools such as ``condor_q``.
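
For example (``condor_watch_q`` ships with newer HTCondor releases and may not
be available on every cluster):

.. code:: bash

    $ condor_q          # one-shot summary of your jobs
    $ condor_watch_q    # live-updating view, if installed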

4. Generate Summary Page
""""""""""""""""""""""""

After the DAG has completed, you can generate the summary page for the analysis:

.. code:: bash

    $ singularity exec -B $TMPDIR <image> make summary

.. _analysis-configuration: