diff --git a/docs/admin_docs/source/client.rst b/docs/admin_docs/source/client.rst
new file mode 100644
index 0000000000000000000000000000000000000000..49345105bd3fcbb91408a4e7864ca2d442c65f18
--- /dev/null
+++ b/docs/admin_docs/source/client.rst
@@ -0,0 +1,13 @@
+.. Information on GraceDB and LVAlert client packages
+
+Client packages
+===============
+
+Contents:
+
+.. toctree::
+    :maxdepth: 2
+
+    client_release
+    shibbolized_client
+
diff --git a/docs/admin_docs/source/client_release.rst b/docs/admin_docs/source/client_release.rst
index 31a2d29c434362036c5f6c8e49ce63f4d32c01ab..1074bba3f2b27d4b96c4b957d43804aff45db264 100644
--- a/docs/admin_docs/source/client_release.rst
+++ b/docs/admin_docs/source/client_release.rst
@@ -1,31 +1,55 @@
+.. _client_release:
+
 ================================
 Preparing a new client release
 ================================
 
+*Last updated 26 June 2017*
+
+This section describes how to prepare new releases of ``gracedb-client`` and ``lvalert-client``.
+We use ``gracedb-client`` as an example throughout and provide specifics when a particular step is different for each package.
+
 .. NOTE::
     The steps here are only suggestions. You will undoubtedly discover better 
     and/or different ways to go about this.
 
-Develop
-=======
+Development
+===========
 
 Implement the features and bug fixes you wish to include in the new
 client release. It's easiest to do this within a virtual environment on 
 your workstation. That way you can make changes to the code and then::
 
     cd gracedb-client
-    python setup.py install
+    python setup.py develop
+
+which will install the code into your virtual environment as a link to the source, so that any changes you make take effect immediately.
+When you are satisfied with the changes, commit and push.
+
+Testing
+=======
+
+It's a good idea to test your new version on Scientific Linux (ldas-pcdev4 at CIT) and Debian (atlas9 on ATLAS) before proceeding.
+The versions of Python there may be a bit behind the one on your workstation, and that can cause complications.
+I've been burned by this before.
+You can do it by cloning the package's git repository on a cluster headnode and building in a virtual environment::
+
+    mkdir gracedb_testing
+    cd gracedb_testing
+    git clone https://git.ligo.org/lscsoft/gracedb-client.git
+    cd gracedb-client
+
+    # Run the unit tests and the command-line test suite in place
+    PYTHONPATH=. python ligo/gracedb/test/test.py
+    PYTHONPATH=. GRACEDB='python bin/gracedb' ./ligo/gracedb/test/test.sh
 
-which will install into your virtual environment. When you are satisfied 
-with the changes, commit and push.
+    # Then test an installed copy inside a virtual environment
+    cd ..
+    virtualenv gracedb_virtualenv --system-site-packages
+    source gracedb_virtualenv/bin/activate
+    cd gracedb-client
+    python setup.py develop
+    cd ligo/gracedb/test
+    python test.py
 
-.. NOTE:: 
-    It's a good idea to test this version on Scientific Linux and Debian
-    at the clusters before proceeding. 
-    The versions of Python there may be a bit behind the one on your workstation,
-    and that can cause complications. I've been burned by this before.
-    You can do it by cloning the ``gracedb-client`` package on a cluster
-    headnode and building in a virtual environment as show above.
+For ``gracedb-client``, run the unit tests; if there are other tests you want to run, add them to the unit test suite if they aren't already there.
+For ``lvalert-client``, test basic functions like subscribing/unsubscribing from a node, sending and receiving messages, etc., as well as anything specific you modified when adding new features.
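+
+As a rough sketch, a minimal interactive check of ``lvalert-client`` might look like
+the following (the node name here is illustrative, and the exact flags can differ
+between versions, so check the ``--help`` output of each tool)::
+
+    # Subscribe to a test node and send a message to it
+    lvalert_admin --username albert.einstein --subscribe --node test_node
+    echo "test message" > message.txt
+    lvalert_send --username albert.einstein --node test_node --file message.txt
+
+    # In a second terminal: print messages received on subscribed nodes
+    lvalert_listen --username albert.einstein
+
+    # Clean up when finished
+    lvalert_admin --username albert.einstein --unsubscribe --node test_node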
 
 Changes for packaging
 =====================
@@ -33,6 +57,9 @@ Changes for packaging
 Update the source code for the new version number, and update the changelog.
 Here are the files you will need to change:
 
+gracedb-client
+----------------
+
 * ``setup.py``: bump the version number
 * ``debian/changelog``: list your changes in the prescribed format
 * ``ligo-gracedb.spec``: check version, unmangled version, and release number
@@ -40,76 +67,118 @@ Here are the files you will need to change:
 * ``ligo/gracedb/cli.py``: update ``GIT_TAG``
 * ``ligo/gracedb/test/test.py``: update the version number in the ``GIT_TAG`` test
 
-After editing these files, make sure to commit and push.  Also make sure the
-client still passes the unit tests::
+lvalert-client
+----------------
 
-    python setup.py install
-    cd gracedb-client/ligo/gracedb/test
-    unset TEST_SERVICE
-    python test.py
+* ``setup.py``: bump the version number
+* ``debian/changelog``: list your changes in the prescribed format
+* ``ligo-lvalert.spec``: check version, unmangled version, and release number
+
+.. NOTE::
+    Updating ``debian/changelog``: ``DEBEMAIL="Albert Einstein <albert.einstein@ligo.org>" dch -v 1.24-1``.
+    Make sure to mimic the formatting exactly!
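+
+    For reference, a ``debian/changelog`` entry has the following general shape
+    (the version, changes, and date shown here are purely illustrative)::
+
+        ligo-gracedb (1.24-1) unstable; urgency=low
+
+          * Improved method for checking .netrc file permissions
+          * Added ability to attach labels at event creation time
+
+         -- Albert Einstein <albert.einstein@ligo.org>  Mon, 26 Jun 2017 12:00:00 -0500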
+
+Final steps
+-----------
+
+After editing these files, make sure to commit and push.
+For ``gracedb-client``, make sure the client still passes the unit tests.
+Go to the root of the repository and see ``ligo/gracedb/test/README`` for 
+more instructions on running the unit tests.
 
 Tag this version of the repo and push the tag::
 
     git tag --list
-    git tag -a YOUR_GIT_TAG
+    git tag -a "gracedb-1.24-1" -m "notes on your changes"
     git push --tags
 
 .. NOTE::
-    Git tags look like this: ``gracedb-1.20-1``, where 1.20 is the version and
-    the last number corresponds to the build number (here, 1).
+    Git tags look like ``gracedb-1.24-1``, where 1 is the major version, 24 is the minor version, and the last number is the build number (here, 1). Sometimes the format is ``1.24.0-1``, where the 0 is typically referred to as the patch number.
 
-Prepare to upload to PyPI
-=========================
+Uploading to PyPI
+=====================
 
-Clear everything out of the directory ``gracedb-client/dist`` and then
-build the source tarball::
+Configure your machine
+----------------------
+The simplest way to upload to PyPI is with the Python package ``twine``.
+First, get an account on both `PyPI <https://pypi.python.org/pypi>`__ and `test PyPI <https://testpypi.python.org/pypi>`__.
+Then, create a ``$HOME/.pypirc`` file that looks like::
 
-    python setup.py sdist
+    [distutils]
+    index-servers=
+        pypi
+        testpypi
 
-Log into ``testpypi.python.org`` and change the version number. Click on the package
-name in the right-hand menu, then click 'edit' near the top. Bump the version 
-number as appropriate, and then click 'Add Information' at the bottom.
+    [pypi]
+    username = username1
+    password = userpassword1
 
-Upload the package to the test PyPI instance.  This is easier if you install
-the python package ``twine``. I tend to do this in a special virtual environment
-used only for this purpose::
+    [testpypi]
+    repository = https://test.pypi.org/legacy/
+    username = username2
+    password = userpassword2
 
-    deactivate
-    cd
-    cd my_virtual_envs
-    virtualenv --system-site-packages pypi-upload
-    source pypi-upload/bin/activate
-    pip install twine
+This will be used below when uploading the packages.
 
-    cd /path/to/gracedb-client
-    twine upload dist/*.gz -r test
+.. NOTE::
+    No repository is needed for the main PyPI if you're using twine 1.8.0+.
+    If you aren't, set ``repository = https://upload.pypi.org/legacy/`` in the ``[pypi]`` section.
 
-Make sure that you can install and use the package from the test PyPI::
+Preparing the release
+---------------------
 
-    deactivate
-    cd 
-    cd my_virtual_envs
+To build the package and upload it to the test PyPI::
+
+    # Check out the new tag
+    git checkout gracedb-1.24-1
+
+    # Clean up your repository
+    git clean -dxf
+
+    # Build the source tarball
+    python setup.py sdist
+
+    # Upload to test PyPI
+    twine upload dist/*.gz -r testpypi
+
+Testing
+-------
+Make sure that you can install and use the package from the test PyPI.
+Login to one of the LIGO clusters and do the following:
+
+.. code-block:: bash
+
+    # Set up virtual environment with install from test PyPi
+    mkdir gracedb_testing
+    cd gracedb_testing
     virtualenv --system-site-packages test
     source test/bin/activate
     pip install -i https://testpypi.python.org/pypi ligo-gracedb --upgrade
 
-    cd /path/to/gracedb-client
-    git pull
+    # Clone the git repository (needed for git tag unittest to work)
+    git clone https://git.ligo.org/lscsoft/gracedb-client.git
+    # Check out tag
+    cd gracedb-client
+    git checkout gracedb-1.24-1
+
+    # Run tests
     cd ligo/gracedb/test
     python test.py
-    cd ~/my_virtual_envs
+    ./test.sh
+
+    # Cleanup
     deactivate
-    rm -f -r test
+    cd ../../../../..
+    rm -rf gracedb_testing
 
-Log into ``pypi.python.org`` (the non-test instance) and update the version number
-as you did above for the test instance.  Next, upload the package to the
-regular, non-test PyPI::
+Final upload
+------------
+This step should only be done **after** the release has gone through the entire
+LIGO packaging and SCCB approval process (see below).
 
-    deactivate 
-    cd ~/my_virtual_envs
-    source pypi-upload/bin/activate
-    cd /path/to/gracedb-client
-    twine upload dist/*.gz
+Upload to the real PyPI::
+
+    twine upload dist/*.gz -r pypi
 
 Lastly, make sure you can pip install the package::
 
@@ -124,9 +193,11 @@ Lastly, make sure you can pip install the package::
 Steps for LIGO packaging
 ========================
 
+Uploading the source
+--------------------
 Move the source tarball to ``software.ligo.org``. I do this with a script
-I obtained from Adam Mercer, ``lscsrc_new_file.sh``. I have added a version
-of this to the GraceDB ``admin-tools`` repo::
+I obtained from Adam Mercer, ``lscsrc_new_file.sh``.
+I have added a version of this to the GraceDB ``admin-tools`` repo::
 
     cd /path/to/gracedb-client/dist
     cp /path/to/admin-tools/releases/lscsrc_new_file.sh .
@@ -135,59 +206,39 @@ of this to the GraceDB ``admin-tools`` repo::
 .. NOTE::
     You must run the script in the same directory where the tarball lives.
     Otherwise it will put it onto the server in a weird subdirectory rather
-    than just the file.
+    than just uploading the file directly.
 
 Make sure that the file is accessible in the expected location, something
-like ``http://software.ligo.org/lscsoft/source/ligo-gracedb-1.20.tar.gz``.
-
-Send an email to the packagers notifying them of the new package. You will
-probably want to include the information that you put into the changelog.
-Here's an example of one that I sent::
-
-    to daswg+announce@ligo.org
-
-    There is a new release of the GraceDB client tools.
-
-    New features are:
-        Improved error handling for expired or missing credentials
-        Improved error handling when server returns non-JSON response
-        Added --use-basic-auth option to command-line client
-
-
-    The release tag is: ligo-lvalert-1.20-1
+like ``http://software.ligo.org/lscsoft/source/ligo-gracedb-1.24.tar.gz``.
 
-    The source is available at:
+SCCB packaging and approval
+---------------------------
+Create a new issue on the `SCCB project page <https://bugs.ligo.org/redmine/projects/sccb>`__.
+The title of the issue should be the package name and release number (ex: ``ligo-gracedb-1.24-1``).
+The description should include an overview of the new features or modifications along with a list of the tests you have performed.
+An example is shown below::
 
-    http://software.ligo.org/lscsoft/source/ligo-gracedb-1.20.tar.gz
+    Requesting deb and rpm packaging for ligo-gracedb-1.24-1.
+    A source tarball has been uploaded to the usual location.
+    A diff of the code changes is here: https://git.ligo.org/lscsoft/gracedb-client/compare/gracedb-1.23-1...gracedb-1.24-1
 
-    thanks!
-    Branson
+    This release includes:
 
-After the package is in the testing repo, look for the corresponding row in the 
-`SCCB wiki <https://wiki.ligo.org/SCCB/WebHome>`__.
-One of the packagers will hopefully have added it.
+    * Improved method for checking .netrc file permissions.
+    * Added capability of creating events with labels initially attached, rather than having to add them as a separate step and generate multiple LVAlerts.
+    * Added "offline" boolean parameter when creating an event. This parameter signifies whether the event was identified by an offline search (True) or online/low-latency search (False). Default: offline=False, which is identical to the current behavior.
 
-Once the new package is installed at the system level on the bleeding-edge
-head nodes, test it on different OSes (probably ``ldas-pcdev4`` at CIT for
-Scientific Linux and ``atlas9`` at AEI for Debian).
+    I've tested this release extensively, including:
 
-Update the SCCB wiki entry stating that the package has been tested on SL
-and Debian and request that it be moved into production.
-
-Forward the package announcement email to the SCCB with some additional text
-notifying them that the package is waiting for their approval.  Here is an
-example of one that I sent::
-
-    to daswg+SCCB@ligo.org 
-
-    dear SCCB,
-
-    I have tested this release on ldas-pcdev4 @ CIT and atlas9 @ AEI. The
-    release passes the unit tests, so I am requesting that it be moved 
-    into production.
-
-    best,
-    Branson
+    * Attempting to use several combinations of "bad" inputs for both labels and the "offline" parameter and ensuring that it fails appropriately (without contacting the server)
+    * Running the unit tests (some of which were added in this patch)
 
+Leave the issue status as 'New' to begin with.
+The package builders will create packages for Scientific Linux and Debian; after the packages are deployed to the test machines (atlas9 on ATLAS and ldas-pcdev4 at CIT), someone will set the status to 'Testing'.
+Run your tests on these machines if you haven't done so already, then update the issue's status to 'Requested'.
+The SCCB members will vote and, once the release is approved, set the status to 'Approved'.
+After approval, the package will be deployed during the next maintenance period and the admins will set the category to 'Production' and the status to 'Closed'.
 
+.. NOTE::
+    You should submit a package for building and approval by Thursday at the very latest if you want it to be moved into production during maintenance on the following Tuesday.
 
diff --git a/docs/admin_docs/source/conf.py b/docs/admin_docs/source/conf.py
index b263faf11ec6516470f6016efa98950235c3569e..0cb2180c11bcb651034120ca7a3d4361c7e23b10 100644
--- a/docs/admin_docs/source/conf.py
+++ b/docs/admin_docs/source/conf.py
@@ -49,8 +49,8 @@ master_doc = 'index'
 
 # General information about the project.
 project = u'GraceDB Administration and Development'
-copyright = u'2016, Branson Stephens'
-author = u'Branson Stephens'
+copyright = u'2017, Tanner Prestegard, Alexander Pace, Branson Stephens'
+author = u'Tanner Prestegard, Alexander Pace, Branson Stephens'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
diff --git a/docs/admin_docs/source/dev.rst b/docs/admin_docs/source/dev.rst
index bf28e856bc8d45feef56eb4c0fc9fef2c2c416d5..b0d0c28e309cea2f4378251a85f47e2234d203f4 100644
--- a/docs/admin_docs/source/dev.rst
+++ b/docs/admin_docs/source/dev.rst
@@ -6,12 +6,10 @@ Developer's Guide
 Contents:
 
 .. toctree::
-   :maxdepth: 2
+    :maxdepth: 2
 
-   new_server_feature
-   new_gracedb_instance
-   new_event_subclass
-   client_release
-   shibbolized_client
-   public_gracedb
+    new_server_feature
+    new_gracedb_instance
+    new_event_subclass
+    public_gracedb
 
diff --git a/docs/admin_docs/source/index.rst b/docs/admin_docs/source/index.rst
index 9f2b386c7164bac06a8eda8570ffa8e0338ebdca..b1b1bd7619a468ff39b0f4338221de729452807f 100644
--- a/docs/admin_docs/source/index.rst
+++ b/docs/admin_docs/source/index.rst
@@ -11,8 +11,10 @@ Contents:
 .. toctree::
    :maxdepth: 2
 
+   introduction
    ops
    dev
+   client
 
 Indices and tables
 ==================
diff --git a/docs/admin_docs/source/introduction.rst b/docs/admin_docs/source/introduction.rst
new file mode 100644
index 0000000000000000000000000000000000000000..28fa9d275344e37716cd6a7f43f6c704f5069117
--- /dev/null
+++ b/docs/admin_docs/source/introduction.rst
@@ -0,0 +1,93 @@
+.. _introduction:
+
+============
+Introduction
+============
+
+*Last updated 13 Feb 2018*
+
+GraceDB is a service for aggregating and disseminating information about candidate gravitational-wave events.
+It is a key component of the effort to offer low-latency GW notifications to astronomer partners.
+We provide a web interface and a RESTful API, along with a Python client for easily interacting with the API.
+
+GraceDB currently runs on Debian Stretch, although upcoming developments may allow the service to be run on any operating system.
+
+Components of the service
+=========================
+We can divide the GraceDB service into five main components:
+
+- Django app
+- Backend webserver (Gunicorn)
+- Frontend webserver (Apache)
+- Primary authentication (Shibboleth)
+- Database backend (MariaDB)
+
+Django
+------
+GraceDB is written in Python and is constructed around the `Django <https://www.djangoproject.com/>`__ web framework.
+We are currently using Python 2 and Django 1.11.
+Note that this is the last version of Django to support Python 2, so a migration to Python 3 will be necessary in the future.
+
+Gunicorn
+--------
+`Gunicorn <http://gunicorn.org/>`__ is a lightweight Python webserver which interfaces directly with the Django service via the WSGI protocol.
+The settings are managed with a config file and the service is started via systemd.
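+
+For reference, a Gunicorn config file is just Python; a minimal sketch (the values
+here are illustrative, not our actual settings) might look like::
+
+    # Example gunicorn.conf.py (illustrative values)
+    bind = '127.0.0.1:8080'      # address:port the workers listen on
+    workers = 4                  # number of worker processes
+    timeout = 30                 # restart workers silent for this many seconds
+    errorlog = '/var/log/gunicorn/error.log'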
+
+Apache
+------
+`Apache <https://httpd.apache.org/>`__ is one of the longest-running open source webservers.
+We still use Apache in concert with Gunicorn because Shibboleth seems to work best with it.
+It is configured as a reverse proxy which gets authentication information from Shibboleth, sets that information in the headers, and then passes it on to Gunicorn.
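+
+Schematically, the relevant pieces of such a configuration look something like the
+following (the port and settings are illustrative, not our actual virtual host)::
+
+    <Location />
+        AuthType shibboleth
+        ShibRequestSetting requireSession true
+        ShibUseHeaders On
+        Require valid-user
+    </Location>
+
+    # Hand all requests off to Gunicorn
+    ProxyPass / http://localhost:8080/
+    ProxyPassReverse / http://localhost:8080/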
+
+Shibboleth
+----------
+`Shibboleth <https://www.shibboleth.net/>`__ is a software package for managing federated identities and providing a single sign-on portal.
+It uses metadata providers to collect user attributes from an attribute authority and put them into the user's session.
+These attributes are then available to the relevant service providers which are accessed by the user.
+
+Metadata providers used by GraceDB:
+
+- LIGO attribute authority
+- InCommon - provides access via institutional accounts registered on gw-astronomy.org
+- Cirrus Gateway - provides access via Google accounts registered on gw-astronomy.org
+
+MariaDB
+-------
+Currently, we use MariaDB 10.1 with the MyISAM table engine.
+Note that the table engine is set within the Django settings, not directly in the database.
+We may want to look into other table engines in the future.
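+
+For reference, one way this is typically done is via the ``OPTIONS`` of the
+``DATABASES`` setting; a minimal sketch (the values here are illustrative, not our
+actual configuration)::
+
+    DATABASES = {
+        'default': {
+            'ENGINE': 'django.db.backends.mysql',
+            'NAME': 'gracedb',
+            'OPTIONS': {
+                # Run on each new connection; newly created tables use MyISAM
+                'init_command': 'SET default_storage_engine=MYISAM',
+            },
+        },
+    }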
+
+Servers
+=======
+Here's a short overview of the currently available GraceDB servers:
+
+- Production:
+
+  - gracedb.ligo.org
+- Test: (almost) identical to production, to be used for final testing of development work.
+
+  - gracedb-test.ligo.org
+- Development: for raw code development; feel free to break these servers as needed. Note that these servers are not registered with any non-LIGO metadata providers, so testing of external authentication needs to happen on gracedb-test.
+
+  - gracedb-dev1.ligo.org
+  - gracedb-dev2.ligo.org
+- Other:
+
+  - simdb.ligo.org: for gstlal testing; may be retired in the near future.
+
+See :ref:`new_gracedb_instance` for information on setting up new servers.
+
+Useful repositories
+===================
+The `scripts <https://git.ligo.org/gracedb/scripts>`__ repository contains a set of scripts for running cron jobs and performing useful tasks on a GraceDB server.
+Examples include:
+
+- Pulling LIGO users from the LIGO LDAP
+- Adding users to executives and advocates groups
+- Parsing Apache logs and database logs
+- Starting LVAlert Overseer
+- Managing LVAlert nodes
+
+The `admin tools <https://git.ligo.org/gracedb/admin-tools>`__ repository is a collection of tools to be used by GraceDB administrators.
+It also includes notes on past work and debugging efforts, as well as planning for future development.
+
diff --git a/docs/admin_docs/source/lvalert_management.rst b/docs/admin_docs/source/lvalert_management.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c0580f85b898f80ede143f4d283a9f4ec7f68827
--- /dev/null
+++ b/docs/admin_docs/source/lvalert_management.rst
@@ -0,0 +1,25 @@
+.. _lvalert_management:
+
+==================
+LVAlert management
+==================
+*Last updated 28 June 2017*
+
+Server configuration
+====================
+You can access some of the Openfire server settings through the `web interface <http://lvalert.cgca.uwm.edu:9090>`__.
+Use your LVAlert credentials on that server to log in.
+If you don't have an account, you'll need another GraceDB/LVAlert developer to create an account for you and make you an admin.
+There is also an admin account; ask Patrick Brady if you don't know the password.
+Note that this password can also be used to log in to the database via the MySQL interface.
+
+There are a few things you can do from within this web interface, but the main useful function is to create/edit/delete user or pipeline accounts.
+
+Managing nodes
+==============
+In an effort to tightly control production LVAlert nodes, we've developed a `script <https://git.ligo.org/gracedb/scripts/blob/master/add_lvalert_nodes.py>`__ which is used to create them and add publishers.
+There are more instructions in the script, but the main principles are:
+
+* The LVAlert account used by the production GraceDB server (username ``gracedb``) should be the owner of **ALL** GW event nodes, even those on the test LVAlert server.
+* LVAlert accounts for test/development GraceDB servers should be added as publishers to these nodes on the test LVAlert server **only**.
+* The production and test LVAlert servers should contain the same GW event nodes.
diff --git a/docs/admin_docs/source/miscellaneous.rst b/docs/admin_docs/source/miscellaneous.rst
index ecb3e5a4ae5a453ef0920f85f55a0ffe2fc4943d..ea8a490faea8ec5f4791a74f831c6a27bcc7518c 100644
--- a/docs/admin_docs/source/miscellaneous.rst
+++ b/docs/admin_docs/source/miscellaneous.rst
@@ -1,7 +1,11 @@
+.. _miscellaneous:
+
 ================================
 Miscellaneous 
 ================================
 
+*Last updated 17 October 2017*
+
 Replacing the database on the test instance
 ===========================================
 
@@ -24,8 +28,9 @@ the ``gracedb`` user. Then::
     mysql -u gracedb -p gracedb < gracedb.sql
 
 The latter step requires entering the MySQL password for the ``gracedb``
-testing user. This can be found in ``/home/gracedb/settings/settings_secrets.py``.    
+testing user. This can be found in ``/home/gracedb/config/settings/secret.py``.    
 
+.. _copying_event_data:
 
 Getting data for particular events onto the test instance
 =========================================================
@@ -43,8 +48,8 @@ of the events that you want to move data for. I would do this in the Django
 console (i.e., ``./manage.py shell``). Suppose I want to move the data
 for all gstlal events during O1::
 
-    from gracedb.models import Event
-    from gracedb.forms import SimpleSearchForm
+    from events.models import Event
+    from events.forms import SimpleSearchForm
     f = SimpleSearchForm({'query': 'gstlal O1'})
     outfile = open('/home/gracedb/query_graceids.txt', 'w')
     if f.is_valid():
@@ -77,7 +82,7 @@ but you still have to go through the same sequence of steps that you would
 for a true developement task. I recommend the workflow described in :ref:`new_server_feature`.
 
 In this particular case, the only necessary code change is to edit the 
-file ``gracedb/gracedb/buildVOEvent.py`` and add something like::
+file ``events/buildVOEvent.py`` and add something like::
 
     w.add_Param(Param(name="MyParam",
         dataType="float",
@@ -89,20 +94,47 @@ this example here, because it seems likely that such a task will be considered
 "operational" even though it is really mini-development. The line is pretty 
 blurry.
 
+Adding an interferometer
+========================
+Note that these directions may change in the near future since we plan to add an instruments table to the database.
+
+A good starting point is to search the GraceDB server code for "L1" to see where interferometers directly come into play.
+
+Specifics (assume X1 is the IFO code):
+
+1. Add X1OPS, X1OK, X1NO labels; update ``templates/gracedb/event_detail_script.js`` with their descriptions; and update ``templates/gracedb/query_help_frag.html``
+2. Add to instruments in ``events/buildVOEvent.py``
+3. Update ifoList in ``events/query.py``
+4. Add entry to ``CONTROL_ROOM_IPS`` in ``config/settings/base.py``
+5. Add signoff option for X1 in ``templates/gracedb/event_detail.html``
+6. Update INSTRUMENTS in ``events/models.py``
+7. Update any event objects which need it (currently only LIB events)
+8. Update lots of things in ``events/serialize.py``
+
+See an example (Virgo) `here <https://git.ligo.org/lscsoft/gracedb/commit/65a4c08e25d7a472e1f995072d166b4c8dc611df>`__, but note that a lot of the Virgo-related stuff was already in the code.
+
+Leap seconds
+============
+GraceDB does its own conversion between UTC and GPS time, which unfortunately means that we have to track leap seconds ourselves.
+This is done in ``gracedb/core/time_utils.py``.
+You'll have to update this whenever a new leap second is announced (preferably in advance of its implementation).
+
+There is probably a better way to do this.
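+
+For context, the conversion amounts to adding the cumulative leap-second count to
+the elapsed time since the GPS epoch. A minimal sketch of the idea (the table below
+is truncated and illustrative; it is not the actual table in ``time_utils.py``)::
+
+    from datetime import datetime
+
+    GPS_EPOCH = datetime(1980, 1, 6)  # GPS time zero: 1980-01-06 00:00:00 UTC
+
+    # (UTC datetime a leap second took effect, cumulative GPS-UTC offset)
+    LEAP_SECONDS = [
+        (datetime(2015, 7, 1), 17),
+        (datetime(2017, 1, 1), 18),  # append new entries as they are announced
+    ]
+
+    def utc_to_gps(utc):
+        offset = 0
+        for effective, total in LEAP_SECONDS:
+            if utc >= effective:
+                offset = total
+        return (utc - GPS_EPOCH).total_seconds() + offset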
+
 On backups
 ==========
 
 Backups for GraceDB are controlled by the file::
-    ``/root/backup-scripts/gracedb.cgca.uwm.edu-filesystems`` 
+    /root/backup-scripts/gracedb.ligo.uwm.edu-filesystems
     
-on ``backup01``.  This file simply contains::
+on ``backup01.nemo.uwm.edu``.  This file simply contains::
 
     /etc
     /opt/gracedb
 
-which means that everything under these directories on ``gracedb.cgca.uwm.edu``
-will be backed up on ``backup01``.  You can see the files under the location
-``/backup/gracedb.cgca.uwm.edu/``. This is occasionally useful for recovering
+which means that everything under these directories on ``gracedb.ligo.uwm.edu``
+will be backed up on ``backup01``.  You can see the files under the location
+``/backup/gracedb.ligo.uwm.edu/``. This is occasionally useful for recovering
 a config file that got blown away by puppet. Notice, though, that nothing 
 under ``/home/gracedb`` is backed up. That's because the core server code and
 accompanying scripts are under version control, and thus are backed up elsewhere.
diff --git a/docs/admin_docs/source/new_gracedb_instance.rst b/docs/admin_docs/source/new_gracedb_instance.rst
index 19a2151b67595fe20dbcc9cc233f00bb6176ded5..9386514ec447033a7cb05d83f57a0792db03cb60 100644
--- a/docs/admin_docs/source/new_gracedb_instance.rst
+++ b/docs/admin_docs/source/new_gracedb_instance.rst
@@ -1,302 +1,280 @@
+.. _new_gracedb_instance:
+
 ==================================
 Standing up a new GraceDB instance
 ==================================
 
+*Last updated 14 December 2017*
+
 Disclaimer
 ==========
-
-These instructions will almost certainly not work. Please edit when you find
-something that fails. 
-
-Recipe
-======
-
-Machine and certificates
-------------------------
-
-I'll assume that the new instance will have the FQDN ``gracedb-new.cgca.uwm.edu``.
-Follow the 
-`instructions <https://www.lsc-group.phys.uwm.edu/wiki/Computing/ManagingVirtualMachines>`__ 
-for setting up a new Debian stock VM managed by puppet. 
-You are going to need an InCommon SSL certificate for Apache, so I recommend
-requesting this first. Instructions are found 
-`here <https://www.lsc-group.phys.uwm.edu/wiki/CertificateRequestUWM>`__. Store the 
-cert and key with correct file permissions somewhere for safe keeping.
-
-.. NOTE::
-    If this new instance will have a FQDN ending in ``.ligo.org``, you will
-    need to get the cert from Caltech instead. Some instructions are found
-    `here <https://wiki.ligo.org/AuthProject/ComodoInCommonCert>`__.
-
-Puppet configuration
+Certain parts of these instructions may not work.
+Please edit when you find something that fails. 
+
+Also note that setup of a GraceDB server relies heavily on Puppet.
+You may attempt a Puppet-less setup at your own risk!
+
+Initial steps
+========================
+The first step is to pick the FQDN for your new server.
+As of spring 2017, it's preferred to use the ``.ligo.uwm.edu`` domain.
+For this exercise, we'll assume a server name of ``gracedb-new.ligo.uwm.edu``.
+You should also decide whether you will need a LIGO.ORG domain name (i.e., ``gracedb-new.ligo.org``).
+This is not absolutely necessary for test instances, but is recommended in order to simulate the production environment as closely as possible.
+
+Virtual machine setup
+---------------------
+You'll need one of the following: a VMWare tool (VMWare Workstation for Linux or VMWare Fusion for OS X) installed on your machine, access to ``headroom.cgca.uwm.edu`` (a Windows machine that has VMWare vSphere), or access to the `web interface <http://vc5.ad.uwm.edu>`__.
+Currently, the web interface is the preferred method for setting up a new VM, so the following instructions will be for this method.
+
+Find the VM template you want to use (click on a VM host (left panel), then "VMs" (middle frame), then "VM Templates in Folders").
+Currently, the Debian templates are on ``vmhost05``, but you may have to check all of the VM hosts if you don't find it there.
+We are currently using Debian 8, but are in the process of moving to Debian 9.
+Left-click on the template and then choose "New VM from This Template" (above the list of templates).
+Enter ``gracedb-new`` for the virtual machine's name and hit "Next".
+Then, choose the VM host you want to put the VM on and hit "Next".
+You can probably skip the next two steps and hit "Finish".
+
+At this point, you can modify the VM's settings:
+
+- CPU: 2 cores is fine for testing; use only 1 socket total (not 1 per core).
+- RAM: something like 2 GB should be fine for testing.
+- Storage space: add a second hard drive of about 100 GB (for testing). You may want a larger disk if this is a production server or if you intend to copy the entire production database for testing purposes.
+- Network adapter: use public VLAN 61.
+
+The instructions on the CGCA computing `wiki <https://www.lsc-group.phys.uwm.edu/wiki/Computing/ManagingVirtualMachines>`__ provide more detailed information that may be helpful.
+
+Getting certificates
 --------------------
+It's best to submit your requests for any certificates as soon as possible, as waiting for these will most likely be the biggest bottleneck in this process.
 
-On your workstation, clone the ``cgca-hiera`` git repository::
+- In all cases, you'll need an InCommon SSL certificate for your ligo.uwm.edu domain name. Follow the instructions on the CGCA computing wiki `here <https://www.lsc-group.phys.uwm.edu/wiki/CertificateRequestUWM>`__.  Note that the "short hostname" for our server is ``gracedb-new``.
+- If you decided that you want a LIGO.ORG domain name, you'll need an InCommon SSL certificate for this, as well.  Follow the instructions `here <https://wiki.ligo.org/AuthProject/ComodoInCommonCert>`__.
+- Finally, you may want an IGTF certificate to provide gsissh access.  It depends on whether you want non-UWM people to potentially have access via the command line without SSH keys.  You can do this for either the UWM or LIGO domain names; Tom prefers that we use the UWM one.  The instructions for the UWM SSL certificate also contain information about obtaining an IGTF certificate.
 
-    git clone git@git.ligo.org:cgca-computing-team/cgca-hiera.git
+In all cases, you'll generate a key and a certificate request, and will send the certificate request to the proper authorities for it to be signed.
+Once your certificate is ready, you'll receive an e-mail with instructions for downloading your certificate.
+You will usually want the certificate labeled as "X509 Certificate only, Base64 encoded".
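+
+Generating the key and certificate request typically amounts to something like the
+following (the exact parameters to use are given in the instructions linked above)::
+
+    # Generate a 2048-bit key and a certificate signing request
+    openssl req -new -newkey rsa:2048 -nodes \
+        -keyout gracedb-new.ligo.uwm.edu.key -out gracedb-new.ligo.uwm.edu.csr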
 
-Create the necessary YAML files by copying from one of the existing
-instances.  This will get you pretty far::
+DNS configuration
+-----------------
 
-    cd cgca-hiera
-    cp gracedb-test.cgca.uwm.edu.yaml gracedb-new.cgca.uwm.edu.yaml
-    cp gracedb-test.cgca.uwm.edu.eyaml gracedb-new.cgca.uwm.edu.eyaml
+UWM
+___
+In the web interface, you should be able to find the MAC address of the network adapter under the adapter's settings.
+If you need to generate a new MAC address, I'm not sure how to do that through the web interface.
+However, you can do this with VMWare Workstation by right-clicking on your VM to access "Settings", then "Network adapter", and then "Advanced."
+Follow the `instructions <https://www.lsc-group.phys.uwm.edu/wiki/Computing/ManagingVirtualMachines#Create_a_DNS_entry_for_the_guest>`__ on the CGCA wiki for setting up a DNS entry through ``dns.uwm.edu``.
+Note that you will have to click on the "Data Management" tab in the top middle to get to the "Network" settings specified in these instructions.
+
+After this is complete, you can boot up the VM.
+
+LIGO DNS
+________
+This section is only relevant if you are using a LIGO.ORG domain name.
+Email Larry Wallace (larry.wallace@ligo.org) and ask him to configure ``gracedb-new.ligo.org`` as a CNAME that points to ``gracedb-new.ligo.uwm.edu``.
+
+VM configuration
+================
+
+Standard CGCA server configuration
+----------------------------------
+Log on to your server through VMWare Workstation, using the standard root password (note that the hostname is initially set to ``server``).
+Download and run the Debian setup script (as shown on the CGCA wiki)::
+
+    curl -s http://omen.phys.uwm.edu/setup_debian.sh | bash -s -- gracedb-new.ligo.uwm.edu
+
+Reboot the VM.
+The hostname should now be ``gracedb-new.ligo.uwm.edu``.
+Change the root password to match the new hostname using the standard root password formula (use the ``passwd`` command).
+Note: the root password formula may change/be removed in late 2017.
+
+The setup script has generated and sent a Puppet certificate request to the puppetmaster server.
+Log in to ``puppet.cgca.uwm.edu`` and sign the certificate (see instructions `here <https://www.lsc-group.phys.uwm.edu/wiki/Computing/AddingPuppet>`__).
+
+Running Puppet
+--------------
+GraceDB servers use the standard CGCA configuration for a webserver, with several customizations implemented by a gracedb module.
+More information about how to use this module is in its README file.
+You can find the module `here <https://git.ligo.org/cgca-computing-team/cgca-config/tree/production/localmodules/gracedb>`__ (for now, it may move to its own repo in the near future).
+
+First, you'll need to generate hiera files for this server for use with Puppet.
+In the cgca-config repository, create ``data/nodes/gracedb-new.ligo.uwm.edu.yaml`` and ``data/nodes/gracedb-new.ligo.uwm.edu.eyaml``.
+I suggest copying another GraceDB server's files and customizing them as needed.
+Things you will likely need to change include:
+
+- The database password: ``gracedb::mysql::database::password``
+- The root MySQL password: ``mysql::server::root_password``
+- Accounts for LVAlert servers (if this is a test server, use only ``lvalert-test.cgca.uwm.edu``): create the new account on the LVAlert server (current best method is the online Openfire interface). You'll need to add an entry to ``gracedb::config::netrc`` for this account.
+- Set ``shibboleth::certificate::useHiera`` to false. This will cause a new Shibboleth key and certificate to be generated on the first Puppet run.  After that, you'll copy the generated certificate and key into your server's .eyaml file and set this variable to true.  Then re-run Puppet.
+- If you have SSL certificates already, add them to the .eyaml file.  If not, remove these lines for now and add them back in once you have the certificates.
+- Add this server to the gracedb hostgroup (contains base setup for all GraceDB servers) in the `puppet_node_classifier <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/site/profile/files/puppetmaster/puppet_node_classifier>`__.
+
+Push your changes to the repository (use a branch and ``r10k`` if you want to be cautious).
+Then, run Puppet on your new server.
+Note that it may take a few minutes for the changes to propagate to the puppetmaster machine, so you may have to wait before running Puppet.
 
-Edit the latter file until you are satisfied. Here are some things you
-will definitely want to change
+Shibboleth SP registration
+--------------------------
+Once you have your Shibboleth key and certificate set up in the Puppet configuration, with ``shibboleth::certificate::useHiera`` set to true, you need to register your SP.
+Send an email to ``rt-auth@ligo.org`` and ask that a service provider with your FQDN be added to the LIGO shibboleth metadata (generally, use the LIGO.org FQDN, if available).
+You will need to attach the cert you find at ``/etc/shibboleth/sp-cert.pem``.
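+
+If you want to sanity-check the certificate before sending it, you can inspect it
+with ``openssl``::
+
+    openssl x509 -in /etc/shibboleth/sp-cert.pem -noout -subject -dates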
 
-- instances of the FQDN
-- SSH key for the gracedb@gracedb-new.cgca.uwm.edu user
-- user entry for yourself, to map your InCommon cert DN to the gracedb user account
+Shibboleth discovery service
+----------------------------
+Next, set up the embedded discovery service for Shibboleth.
+Go to the "latest" Shibboleth downloads `page <http://shibboleth.net/downloads/embedded-discovery-service/latest/>`__ and determine the version.
+Then you can do::
 
-You may also need to add the ``webserver3`` and ``gracedb`` modules to the 
-list, as these handle much of the work, but are sometimes left off of the 
-list in order to prevent changes being made to the server without the 
-maintainer's knowledge.
+    wget http://shibboleth.net/downloads/embedded-discovery-service/latest/shibboleth-embedded-ds-1.2.0.tar.gz
 
-Next, edit the EYAML file, which has the secret information in it.
-At the time of writing, the best way of editing an EYAML file has not
-been settled upon. (My favorite way to
-do this is to use ``eyaml edit``. But at the time of writing, that is only
-available as root on the ``puppet.cgca.uwm.edu`` machine, and you have to
-explicitly provide paths to the PKCS7 public and private keys. In the 
-intervening time, it is likely that a better way to edit eyaml files will
-have been devised.) Change the mysql root and gracedb
-user passwords, noting that these occur in multiple locations. Add in the 
-naturally occurring shib cert and key, as well
-as the apache cert and key.  Importantly, you should comment out the 
-lines associated with the file ``settings_secret``. We don't want Puppet
-to try to create this file yet, since our server code directories that 
-contain it don't exist yet.
+Unpack the archive into ``/etc/shibboleth-ds`` (create the directory if it doesn't exist), and edit ``idpselect_config.js``.
 
-Commit the new files and push. Then log into the new machine as root and 
-run the puppet agent::
+Change the line starting with ``this.preferredIdP`` to::
 
-    puppet agent -t 
+    this.preferredIdP = ['https://login.ligo.org/idp/shibboleth', 'https://login.guest.ligo.org/idp/shibboleth', 'https://google.cirrusidentity.com/gateway'];
 
-This may initially produce errors, so some iteration is to be expected.
+This determines the identity providers which will be shown on the discovery service login page.
+For test deployments, you may not need to include the Google IdP (depends if your server is set up to use the Cirrus Google gateway or not), but it doesn't hurt anything to include it.
 
-Shibboleth SP registration
---------------------------
+You may need to increase the width of the ``idpSelectIdpSelector`` element in
+``idpselect.css`` (set to ~512 px for 3 IdPs).
+You may need to edit ``this.maxPreferredIdPs`` if you have more than the default number (3).
 
-At this point, the ``shibboleth`` package should be installed, along with its
-self-signed certificates. Send email to ``rt-auth`` and ask that a service provider
-with your FQDN be added to the LIGO shibboleth metatadata. You will need to
-attach the cert you find at ``/etc/shibboleth/sp-cert.pem``.  The rest of the
-Shibboleth SP configuration should already have been taken care of by Puppet,
-so it should "just work" once it is added to the LIGO metadata.  If it doesn't,
-there is more detail about setting up a new Shibboleth SP 
-`here <https://wiki.ligo.org/AuthProject/DeployLIGOShibbolethDebianSqueeze>`__.
+Check if the link provided in ``this.helpURL`` is functional or not; it has not worked for me in the past several versions of ``shibboleth-ds``.
+I suggest using this `link <https://wiki.shibboleth.net/confluence/display/SHIB2/DiscoveryService>`__ instead (if functional).
 
-Application code
-----------------
+Finally, if you are confused about parts (or all) of this section, I suggest looking at other GraceDB servers and emulating their configuration.
 
-Next, we'll pull down the repo containing the source code. Log in to the 
-new machine as the ``gracedb`` user, and clone the 
-server code using your LIGO credentials::
+Populating the database
+=======================
 
-    cd
-    ecp-cookie-init LIGO.ORG https://versions.ligo.org/git albert.einstein
-    git config --global http.cookiefile /tmp/ecpcookie.u`id -u`
-    git clone https://versions.ligo.org/git/gracedb.git
+"Fresh" database
+----------------
+To construct a "fresh" database from migrations, just run::
 
-Create a new settings file by copying from one of the existing ones::
+    cd $HOME/gracedb
+    python manage.py migrate
 
-    cd gracedb/settings
-    cp test.py new.py
+Copying production database
+---------------------------
+First, as yourself, copy the database dump from the production (or a test) server to your new server::
 
-or some other appropriate name. (Copy from ``default.py`` if you'd rather
-have a production-like instead of testing-like instance.) Edit this new 
-settings module as desired. You will at least want to change the
-``CONFIG_NAME`` and all instances of the FQDN.  Now edit
-``settings/__init__.py`` to make sure this new settings module will
-be invoked::
+    sudo cp /opt/gracedb/sql_backups/gracedb.sql.gz $HOME
+    scp gracedb.sql.gz $(whoami)@gracedb-new.ligo.uwm.edu:~
 
-    from default import *
+On the new server, as yourself, import the database using the ``gracedb`` user's credentials::
 
-    config = configs.get(ROOT_PATH, "production")
+    gunzip gracedb.sql.gz
+    mysql -u gracedb -p gracedb < gracedb.sql
 
-    if socket.gethostname() == 'gracedb-test':
-        config = 'test'
-    elif socket.gethostname() == 'gracedb-new':
-        config = 'new'
+Note that files related to the events aren't part of the database and won't exist on the new server unless you copy them over, too (see :ref:`copying_event_data` for more information).
 
-    settings_module = __import__('%s' % config, globals(), locals(), 'gracedb')
+Next, become the ``gracedb`` user, enter the Django management shell (``./manage.py shell``), and delete all Contacts and Triggers so that people don't get phone or email alerts from this instance without signing up for them::
 
-Note that the behavior here is that we first import everything from default.
-Then we'll overwrite those settings with fhe module specified by ``config``.
-Also uncomment the ``settings_secret`` file in the EYAML for this machine,
-and run the puppet agent again. This will install our secret settings file
-that is pulled in by the default settings.
+    from userprofile.models import Contact, Trigger
+    for c in Contact.objects.iterator():
+        c.delete()
+    for t in Trigger.objects.iterator():
+        t.delete()
 
-Required packages
------------------
+You might want to delete the Events, too, especially if you copy the production database.
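+
+A bulk delete from the same shell is the quickest route for that (note that this
+cascades to the events' associated database records, such as log entries and labels)::
+
+    from events.models import Event
+    Event.objects.all().delete()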
 
-GraceDB relies on several packages that are best installed in a virtual environment
-rather than at the system level. This is important, because we don't want 
-our regular package updates to suprise us with, say, a new version of Django
-that our code hasn't yet been ported to.
-Create the virtual environment for the ``gracedb`` user in that user's
-home directory::
-
-    cd
-    virtualenv djangoenv --system-site-packages
-    source djangoenv/bin/activate
-    pip install mysql-python
-    pip install python-ldap
-    pip install html5lib
-    pip install requests
-    pip install Sphinx
-    pip install python-memcached
-    pip install django-model-utils
-    pip install djangorestframework==3.3.2
-    pip install django-guardian==1.4.1
-    pip install django-debug-toolbar
-    pip install django-debug-panel
-    pip install Django==1.8.11
-    pip install ligo-lvalert --pre
-    pip install ligo-lvalert-overseer       
-
-You may find that you need to install additional packages during the testing
-process.  Note that the ``--system-site-packages`` is necessary in order for the
-system install of ``python-glue`` to be available inside the virtual environment.
-Also note that we ask for specific version numbers of some packages. Also, the
-ordering of these commands matters, since packages such as ``django-guardian``
-will try to pull in the very latest version of Django.  So if we really want
-Django 1.8, we have to ask for that one *after* installing the third-party
-packages.  I decided to stick with Django 1.8 for the time being, since it is
-one of the designated LTS releases. Version 1.9, by contrast, is not and will
-be supported for a shorter period of time. Successive releases of Django often
-contain breaking API changes, so be prepared if you decide to update. 
-
-Run ``collectstatic`` so that all of the static files from the various Python
-sources are collected under ``gracedb/static``, where Apache will expect to 
-find them::
-    
-    cd
-    cd gracedb
-    ./manage.py collectstatic
-
-Next, install the JavaScript components GraceDB uses to render web pages.
-As root::
-
-    update-alternatives --install /usr/bin/node nodejs /usr/bin/nodejs 100
-    which node
-    curl https://www.npmjs.com/install.sh | sh
-    which npm
-    npm install -g bower
-
-Then, as the ``gracedb`` user::
-
-    cd
-    bower install dgrid#0.4.0
-    bower install dijit#1.10.4
-    bower install dojox#1.10.4
-    bower install moment#2.11.1
-    bower install moment-timezone#0.5.0
-
-These particular versions may be required in order for the web pages to render
-correctly.
-
-Miscellaneous
--------------
+Extra steps
+===========
 
-GraceDB relies on the ability to send email--both for alerts to users who
-request them, and to the maintainer/developer in case of unhandled exceptions.
-Reconfigure ``exim4`` as root by executing::
+As root
+-------
+- Upgrade ``nodejs`` version (also installs ``npm``)::
 
-    dpkg-reconfigure exim4-config
+    curl -sL https://deb.nodesource.com/setup_8.x | bash -
+    apt-get install nodejs
 
-You'll want to accept the defaults, except for two: 1) set this host to be an
-"internet site; mail is sent and received directly using SMTP." and 2) remove 
-``::1`` from the list of listening addresses. (The latter seems to be necessary,
-as I've observed that the exim4 server hangs if it tries to listen on ``::1``.)
-Also check that the system FQDN appears correctly.
+  - Note: you may want to check for a newer version than 8.x.
+- Install ``bower`` for managing JavaScript packages: ``npm install -g bower``
+- Reconfigure ``exim4`` package for sending e-mail: ``dpkg-reconfigure exim4-config``. Accept the defaults, except for:
+    - Set the host to be an "internet site"; mail is sent and received directly using SMTP.
+    - Remove ``::1`` from the list of listening addresses; exim4 seems to hang if it tries to listen on ``::1``.
+    - Set "system mail name" to ``gracedb-new.ligo.uwm.edu``.
+    - Set the IP address to listen to for incoming connections to be ``127.0.0.1``.
+    - Set "other destinations for which mail is accepted" to ``gracedb-new.ligo.uwm.edu``; can optionally add ``gracedb-new.ligo.org`` if desired.
+    - Once you're done, restart the ``exim4`` process: ``systemctl restart exim4``
+- Build and mount the secondary file system for holding data files:
+    - Build the filesystem: ``mkfs.ext4 /dev/sdb``
+    - Add the following line to ``/etc/fstab``: ``/dev/sdb /opt/gracedb ext4 errors=remount-ro 0 1``
+        - A safer option is to find the UUID for your drive (``ls -lh /dev/disk/by-uuid``) and use that in place of ``/dev/sdb`` (see other entries in the file for examples).
+    - If there are subdirectories currently in ``/opt/gracedb``, move them somewhere else temporarily.
+    - Mount the filesystem: ``mount -a``
+    - Move back any subdirectories that you may have temporarily moved.
 
-Next, set up the embedded discovery service.  Download from::
+As the ``gracedb`` user
+-----------------------
+- Activate the virtualenv: ``source $HOME/djangoenv/bin/activate``
 
-    http://shibboleth.net/downloads/embedded-discovery-service/latest/shibboleth-embedded-ds-1.1.0.tar.gz
+- Build the GraceDB documentation::
 
-Unpack the archive into /etc/shibboleth-ds, and edit ``idpselect_config.js``::
+    cd $HOME/gracedb/doc
+    sphinx-build -b html source build
+    cd ../admin_docs
+    sphinx-build -b html source build
 
-    this.preferredIdP = ['https://login.ligo.org/idp/shibboleth', 'https://login.guest.ligo.org/idp/shibboleth', 'https://google.cirrusidentity.com/gateway'];        // Array of entityIds to always show
+- Clone the GraceDB admin scripts repo into the gracedb user's ``$HOME``::
 
-You may need to increase the width of the ``idpSelectIdpSelector`` element in
-``idpselect.css``. I set this to 512.
+    git clone https://git.ligo.org/gracedb/scripts.git $HOME/bin
 
-As the ``gracedb`` user obtain the random bin scripts used by GraceDB for various purposes::
+  - Note that the server code repo has already been cloned by Puppet, since it's publicly available. We clone this repo by hand since it's private and dealing with deploy keys is too annoying.
+  - You can call the directory whatever you want (instead of ``bin``), but then you should change the corresponding parameter (``gracedb::config::script_dir``) in the server's Puppet configuration file.
 
-    cd
-    git clone git@git.ligo.org:gracedb/scripts.git bin
+- Run the setup script in this repository (``initial_server_setup.py``) to pull user accounts from the LIGO LDAP, set up admin/superuser accounts, add users to the executives group, and add users to the EM advocates group.
 
-If this raises an error regarding access rights, simply copy over your ssh keypair
-that you use to access ``git.ligo.org``, and add the key to your ssh-agent.
+- Collect static files::
 
-Final steps
------------
+    cd $HOME/gracedb
+    python manage.py collectstatic
 
-As the ``gracedb`` user, fill up the database::
+- Use bower to install packages::
 
-    cd 
-    scp gracedb@gracedb.cgca.uwm.edu:/opt/gracedb/sql_backups/gracedb.sql.gz .
-    gunzip gracedb.sql.gz
-    mysql -u gracedb -p gracedb < gracedb.sql
+    cd $HOME
+    bower install dgrid#0.4.0 dijit#1.10.4 dojox#1.10.4 moment#2.11.1 moment-timezone#0.5.0
+    bower install jquery#3.2.1
 
-From your workstation, test the web interface of your new instance to make
-sure it's working, and run the unit tests::
+  - Note that many of these packages may no longer be needed after the upcoming web UI update (expected in 2018).
 
-    cd gracedb-client/ligo/gracedb/test
-    export TEST_SERVICE='https://gracedb-new.cgca.uwm.edu/api/'
-    python test.py
+- Instantiate the database backups (``logrotate`` will fail if there isn't an initial file)::
 
-I found it necessary to do this as the ``gracedb`` user::
+    touch /opt/gracedb/sql_backups/gracedb.sql.gz
 
-    cd 
-    chmod g+w -R logs
+Allowing access
+===============
 
-Also build the docs::
-    
-    cd 
-    cd gracedb/docs
-    mkdir build
-    sphinx-build -b html source build
-    cd ../admin_docs
-    mkdir build
-    sphinx-build -b html source build
+Outside networks
+----------------
+As configured, your new VM is only accessible from the UWM campus network (or from outside if you are on the VPN).
+If you'd like to allow access from the outside world, email ``noc@uwm.edu``, specify the FQDN and IP address of your new server, and ask them to add openings to the entire world for SSH, HTTP, and HTTPS.
 
-Explanation of the hiera files
-==============================
+In either case, you'll need to update the firewall policy document, which is used to track the accessibility of all of the CGCA servers.
+It's hosted in the CGCA Computing SharePoint, accessible through your UWM Microsoft Online account.
+The file is called ``cgca-firewall-policy.xlsx``; add a new entry and follow the syntax of the other GraceDB servers.
 
-The ``hiera`` YAML and EYAML files attempt to describe the GraceDB server
-as it *should* be.  They contain the build of the configuration necessary for
-setting up a GraceDB instance, though there are some stray bits that have
-to be done by hand.
+Non-LVC users
+-------------
+For non-internal users to be able to access this server, you'll need to register the server with InCommon.
+This provides access via federated identity login (through their university or organization).
+Talk to Scott K. about how to set this up.
 
-.. NOTE::
-    You may find yourself in the situation of needing to stand up an instance
-    that is *not* managed by puppet--for example if you are setting up an 
-    instance at a different data center. In that case, you will need to take
-    care of the above tasks by hand. I recommend copying the Apache virtual
-    host configuration and ``shibboleth2.xml`` from a working GraceDB 
-    instance and modifying as needed.
+If you want to allow Google account access, you'll need to set it up through the Cirrus gateway in addition to registering with InCommon.
+Go `here <https://apps.cirrusidentity.com/console/auth/index>`__ to login, look at the other GraceDB servers to see how they are configured, and follow the directions.
+Make sure to set the Google service up with your LIGO.ORG credentials rather than a personal Gmail account.
+Note that you'll need to be an admin in the Cirrus console to make these changes; talk to Warren A. about setting that up.
 
+If you use either of these services, users will need to register through gw-astronomy in order to get the proper attributes added to their session.
+Ask Mike Manske to "add the server to the attribute filter for the attribute authority IdP" (his words).
+This is necessary so that gw-astronomy will send information about LV-EM group memberships.
 
 Why isn't everything managed by Puppet?
 =======================================
 
-Ideally, the entire process of standing up a GraceDB instance should be
-automated.  This would be very useful (perhaps necessary?) for moving GraceDB
-to the cloud, and also for disaster recovery.  There are gaps in the puppet
-config for ``gracedb`` and ``gracedb-test`` however, as I could not find
-suitable existing puppet modules.  For example, there is a `python module
-<https://forge.puppetlabs.com/stankevich/python>`__ in the Puppet forge that
-manages virtul environments, but it does not handle dependencies well. You
-would have to engineer a ``requirements.txt`` file that lists exact packages
-and versions in a strict dependency order in order for that module to work. I
-experimented with creating my own process based on a file resource for the
-``requirements.txt`` and exec resources to create and update the virtual
-environment based on changes to the file. However, this seemed fragile, and I
-decided that it would be better to manage the virtual environment by hand.
-That being said, I would recommend gradually finding ways to Puppet-ize the
-rest of the install process, especially if improved modules become available.
-
+Ideally, the entire process of standing up a GraceDB instance should be automated.
+This would be very useful (perhaps necessary?) for moving GraceDB to the cloud, and also for disaster recovery.
+However, suitable Puppet modules do not exist for certain portions of the configuration (i.e., the parts that you just did manually in `Extra steps`_).
+As new modules become available (or you develop them yourself), it may be possible to Puppetize more (or all) of this process.
diff --git a/docs/admin_docs/source/new_pipeline.rst b/docs/admin_docs/source/new_pipeline.rst
index 74091fe01fb6d37fb708418e505f49799afe6f1e..eabbe23fce5d7ad4de1562e9094162fcddd2521d 100644
--- a/docs/admin_docs/source/new_pipeline.rst
+++ b/docs/admin_docs/source/new_pipeline.rst
@@ -4,6 +4,8 @@
 Adding a new pipeline or search
 ================================
 
+*Last updated 18 Sept 2017*
+
 Sometimes, users will request that a new ``Pipeline`` be added. Creating
 the pipeline object itself is the easy part. The hard part is figuring out
 what kind of data file the group will be uploading, and how to ingest the values.
@@ -16,18 +18,9 @@ Adding a new ``Search`` is simpler, but the steps relating to LVAlert are simila
 
 
 .. NOTE::
-    The following suggests performing the necessary database operations
-    in the django console (i.e., a Python interpreter running with the correct
-    environment). These operations could also be done in the web-based django
-    admin interface. However, I never use it myself, so that's not the method
-    I'll show in this documentation. One could also issue raw SQL commands
-    if preferred.
-
-.. NOTE::
-    The database operations here could also be done with 'data migrations'. 
-    This leaves more of a paper trail, and as such might be considered 
-    'the right thing to do.' However, it seems like overkill for relatively
-    small tasks like this.
+    **PLEASE** use a database migration to perform this work: it leaves a
+    clear paper trail and makes the change easily reproducible on other
+    GraceDB servers (e.g., test and development servers).
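+
+    A minimal sketch of such a data migration is shown below; the file name,
+    migration numbers, and pipeline name are placeholders rather than actual
+    code from the repository::
+
+        # events/migrations/00XX_add_newpipeline.py (hypothetical)
+        from django.db import migrations
+
+        def add_pipeline(apps, schema_editor):
+            # Use the historical model, not a direct import
+            Pipeline = apps.get_model('events', 'Pipeline')
+            Pipeline.objects.get_or_create(name='newpipeline')
+
+        def remove_pipeline(apps, schema_editor):
+            Pipeline = apps.get_model('events', 'Pipeline')
+            Pipeline.objects.filter(name='newpipeline').delete()
+
+        class Migration(migrations.Migration):
+            dependencies = [('events', '00XX_previous_migration')]
+            operations = [
+                migrations.RunPython(add_pipeline, remove_pipeline),
+            ]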
 
 GraceDB server side steps
 =========================
@@ -36,17 +29,17 @@ First, create the new pipeline object. Since the only field in the Pipeline mode
 is the name, it's pretty simple. Suppose we are creating a new pipeline called 
 ``newpipeline``. We fire up the Django console:: 
 
-    cd /home/gracedb
+    cd /home/gracedb/gracedb_project
     ./manage.py shell
 
 Now we create the pipeline object itself::
 
-    from gracedb.models import Pipeline
+    from events.models import Pipeline
     newpipeline = Pipeline.objects.create(name='newpipeline')
 
 Now that the pipeline exists, one or more users will need to be given
 permission to *populate* the pipeline (i.e., to create new events for that
-pipeline). For more info on permissions, see :ref:`managing_user_permissions`.
+pipeline). For more info on permissions, see :ref:`user_permissions`.
 By default, all internal users will have permission to create ``Test``
 events for our new pipeline, but only specific users will be allowed to create
 non-``Test`` events. Let's suppose we want to give access to a human user
@@ -58,7 +51,7 @@ non-``Test`` events. Let's suppose we want to give access to a human user
 
     # Retrieve the objects we will need
     p = Permission.objects.get(codename='populate_pipeline')
-    ctype = ContentType.objects.get(app_label='gracedb', model='pipeline')
+    ctype = ContentType.objects.get(app_label='events', model='pipeline')
     einstein = User.objects.get(username='albert.einstein@LIGO.ORG')
     robot = User.objects.get(username='newpipeline_robot')
 
@@ -78,13 +71,13 @@ adequately represent it. If the latter, see :ref:`new_event_subclass`.
 For now, let's assume that the attributes of the new pipeline match up
 exactly with those of an existing pipeline, and that the data file can be
 parsed in the same way. Then all we need to do is to edit the utility function
-``_createEventFromForm`` in ``gracedb/view_logic.py`` so that our 
+``_createEventFromForm`` in ``events/view_logic.py`` so that our 
 new pipeline's name appears in the correct list, resulting in the correct
 event class being created. For example, if the events
 of the new pipeline match up with those from Fermi, then we can add it to
 the same list as Fermi, Swift, and SNEWS. 
 
-Next, edit the function ``handle_uploaded_data`` in ``gracedb/translator.py``
+Next, edit the function ``handle_uploaded_data`` in ``events/translator.py``
 so that, when an event is created for our new pipeline, the data file is
 parsed in the correct way. This function is basically just a huge ``if``
 statement on the pipeline name. So if we want the data file to be parsed
@@ -110,50 +103,10 @@ created as well::
     test_newpipeline_search2
     burst_newpipeline_search2
 
-where the names of the searches are ``search1`` and ``search2``. I typically
-use a script such as the one below to create the nodes and add the ``gracedb``
-user as a publisher::
-
-    #!/usr/bin/env python
-
-    import subprocess
-    import time
-
-    nodes = [
-        'test_newpipeline',
-        'burst_newpipeline',
-        'test_newpipeline_search1',
-        'burst_newpipeline_search1',
-        'test_newpipeline_search2',
-        'burst_newpipeline_search2',
-    ]
-
-    servers = [
-        'lvalert.cgca.uwm.edu',
-        'lvalert-test.cgca.uwm.edu',
-    ]
-
-    for server in servers:
-        for node in nodes:
-            print "creating node %s for server %s ..." % (node, server)
-            cmd = 'lvalert_admin -c {0} -d -q {1}'.format(server, node)
-            p = subprocess.Popen(cmd, shell=True)
-            out, err = p.communicate()
-
-            if err:
-                print "Error for node %s: %s" % (node, error)
-
-            # add gracedb as publisher
-            # Also serves as a check to whether the node exists if not creating
-            time.sleep(2)
-
-            print "adding gracedb as publisher to node %s for server %s ..." % (node, server)
-
-            cmd = 'lvalert_admin -c {0} -j gracedb -q {1}'.format(server, node)
-            p = subprocess.Popen(cmd, shell=True)
-            out, err = p.communicate()
-
-            if err:
-                print "Error for node %s: %s" % (node, error)
+where the names of the searches are ``search1`` and ``search2``.
 
-Note that you must have your ``.netrc`` file set up as described `here <https://gracedb.ligo.org/documentation/responding_to_lvalert.html#subscribing-to-pubsub-nodes>`__ for this to work automatically.
+There is a script (`add_lvalert_nodes.py <https://git.ligo.org/gracedb/scripts/blob/master/add_lvalert_nodes.py>`__) in the "GraceDB scripts" repository
+which can be used to create LVAlert nodes and manage publishers. In general,
+we create the nodes on both the lvalert and lvalert-test servers from the
+"gracedb" LVAlert account, and add other test servers (gracedb-test,
+gracedb-dev1, etc.) as publishers to the new nodes on lvalert-test.
diff --git a/docs/admin_docs/source/new_server_feature.rst b/docs/admin_docs/source/new_server_feature.rst
index 445837f93189558312535379c7be90936bcc4706..665699ef2d1edd3c8908bfb7a7db03c5a18eab5d 100644
--- a/docs/admin_docs/source/new_server_feature.rst
+++ b/docs/admin_docs/source/new_server_feature.rst
@@ -69,3 +69,34 @@ changes to the GraceDB server codebase. Here's how I like to go about it.
 And now your new feature or bugfix should be live on the production machine.
 The scenario I've outlined above is more-or-less the simplest way things can 
 go. Things are more complicated if you need to do a database migration...
+
+To add:
+
+* Information about doing migrations
+* Information about django-debug-toolbar
+* Tips for checking things in the logs
+* Information about what each of the servers is used for and has (incommon, cirrus, etc.)
+
+Using Django Debug Toolbar
+==========================
+Django Debug Toolbar (DJDT) is a useful tool for inspecting request objects, headers, SQL calls, etc. made by your web views.
+One limitation is that it doesn't track Javascript or AJAX requests.
+It's enabled by adding the relevant middleware to the ``MIDDLEWARE`` setting and the app to ``INSTALLED_APPS``.
+The toolbar is shown to users whose IP address is included in the ``INTERNAL_IPS`` setting.
+However, DJDT reveals certain headers that should be kept secret, so to keep things extra-safe, we only allow the GraceDB server's own IP address in ``INTERNAL_IPS``.
+As a result, you'll have to set up a SOCKS proxy to use DJDT.
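+
+The relevant settings look roughly like the following sketch; the middleware path is DJDT's documented one, but the values (and the assumption that ``INSTALLED_APPS`` is a list) are illustrative rather than copied from our settings modules::
+
+    INSTALLED_APPS += ['debug_toolbar']
+
+    MIDDLEWARE = [
+        'debug_toolbar.middleware.DebugToolbarMiddleware',
+        # ... the rest of the middleware stack ...
+    ]
+
+    # Only the server's own IP address (placeholder value), so the toolbar
+    # (and the headers it reveals) is never shown to ordinary clients
+    INTERNAL_IPS = ['192.0.2.10']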
+
+Set up an SSH tunnel to the GraceDB server in question::
+
+    ssh -D 8080 -f -C -q -N albert.einstein@gracedb-test.ligo.org
+
+This process runs in the background, so once you're done, you'll have to kill it.
+
+Mozilla Firefox
+---------------
+Go to Preferences, then under the "Network Proxy" heading, go to "Settings".
+Use a manual proxy configuration, use SOCKS v5, use port 8080, and set ``127.0.0.1`` as the SOCKS host.
+
+Google Chrome
+-------------
+Start Chrome from the command line with ``google-chrome --proxy-server="socks5://localhost:8080" --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE localhost"``.
diff --git a/docs/admin_docs/source/ops.rst b/docs/admin_docs/source/ops.rst
index b88428031383195d2a1bb5a3ee2a6f7bb0447045..9b60da6033611309858ff75e21826d4dd2920bcc 100644
--- a/docs/admin_docs/source/ops.rst
+++ b/docs/admin_docs/source/ops.rst
@@ -1,4 +1,4 @@
-.. GraceDB operation (admin) tasks
+.. GraceDB operational (admin) tasks
 
 Operational Tasks
 =================
@@ -6,11 +6,14 @@ Operational Tasks
 Contents:
 
 .. toctree::
-   :maxdepth: 2
+    :maxdepth: 2
 
-   new_pipeline
-   user_permissions
-   robot_certificate
-   phone_alerts
-   miscellaneous
+    server_maintenance
+    new_pipeline
+    user_permissions
+    robot_certificate
+    phone_alerts
+    lvalert_management
+    sql_tips
+    miscellaneous
 
diff --git a/docs/admin_docs/source/phone_alerts.rst b/docs/admin_docs/source/phone_alerts.rst
index 6fc9af9d1efa6426d98e64b02d0d3350a5a83676..1f702f704a2e326c8b89d24fe24acb752182d72f 100644
--- a/docs/admin_docs/source/phone_alerts.rst
+++ b/docs/admin_docs/source/phone_alerts.rst
@@ -1,6 +1,8 @@
-================================
+.. _phone_alerts:
+
+========================
 Phone alerts with Twilio
-================================
+========================
 
 *Last updated 1 December 2016*
 
@@ -72,7 +74,7 @@ You'll need the Account SID, Auth Token, and TwiML bin SIDs for the next step.
 
 Configuration on GraceDB server
 ===============================
-Most of the relevant code is in ``gracedb/alerts.py``, including the following functions:
+Most of the relevant code is in ``events/alerts.py``, including the following functions:
 
 - ``get_twilio_from``
 - ``make_twilio_calls``
@@ -83,11 +85,10 @@ There is also some relevant code in ``userprofile/models.py``, which defines a `
 
 The TwiML bin SIDs are used in ``make_twilio_calls`` to generate the URLs and make the POST request.
 These SIDs, along with the Account SID and Auth Token should **NOT** be saved in the git repository.
-As a result, they are saved in ``settings/secret_settings.py``, which is not part of the git repository, but is created by Puppet and encrypted in the GraceDB eyaml `file <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/production/hieradata/gracedb.cgca.uwm.edu.eyaml>`__ in the `cgca-config repository <https://git.ligo.org/cgca-computing-team/cgca-config>`__.
+As a result, they are saved in ``settings/secret.py``, which is not part of the git repository, but is created by Puppet and encrypted in the GraceDB eyaml `file <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/production/hieradata/gracedb.cgca.uwm.edu.eyaml>`__ in the `cgca-config repository <https://git.ligo.org/cgca-computing-team/cgca-config>`__.
 If you need to edit this, you'll have to follow the instructions `here <https://git.ligo.org/cgca-computing-team/cgca-config/blob/production/EncryptedYaml.md>`__ for working with eyaml files.
 
 *Note: currently, the code for determining the phone call recipients is coupled with the code that determines e-mail recipients.
 This may be the most efficient way of doing it, but we may want to consider separating it in the future for ease of understanding and modularity.*
 
-
 Finally, phone calls are only made to LVC members at present.
diff --git a/docs/admin_docs/source/public_gracedb.rst b/docs/admin_docs/source/public_gracedb.rst
index 11fe0616ba80a2d2143d19395ed5152ed1427d41..e368d7f7ab3b97561b5c89ae734d06a3e668f401 100644
--- a/docs/admin_docs/source/public_gracedb.rst
+++ b/docs/admin_docs/source/public_gracedb.rst
@@ -1,3 +1,5 @@
+.. _public_gracedb:
+
 =====================================
 GraceDB and public triggers
 =====================================
diff --git a/docs/admin_docs/source/robot_certificate.rst b/docs/admin_docs/source/robot_certificate.rst
index 9a1217548d8d6c046b8e4e23480c4b785264c8e0..8cc84b3931cf24abc358ad7da2bb8fa5379728d5 100644
--- a/docs/admin_docs/source/robot_certificate.rst
+++ b/docs/admin_docs/source/robot_certificate.rst
@@ -1,17 +1,14 @@
+.. _robot_certificate:
+
 ================================
 Creating a robot account
 ================================
 
-.. NOTE::
-    You could also do the database operations through the Django admin
-    interface. Instead, I show how to do it with a database migration
-    since that seems easier to me and leaves more of a paper trail. 
-
-General information or robot accounts
+General information on robot accounts
 =====================================
 
 The flagship data analysis pipelines are usually operated by groups of 
-users. Thus, it doesn't make much since if the events are submitted to GraceDB
+users. Thus, it doesn't make much sense for the events to be submitted to GraceDB
 via a single user's account. This is also impractical, as an individual
 user's auth tokens expire often, but the pipeline process needs to be running
 all the time. 
@@ -155,8 +152,9 @@ Edit the migration to do what you want it to do. You could use this as a templat
         ]
               
 
-The above could definitely be refactored in some nice way. I'll leave that as
-an exercise for the reader :-) Now apply the migration::
+The above could definitely be refactored in some nice way.
+I'll leave that as an exercise for the reader :-).
+Now apply the migration::
     
     python manage.py migrate ligoauth
     
diff --git a/docs/admin_docs/source/server_maintenance.rst b/docs/admin_docs/source/server_maintenance.rst
new file mode 100644
index 0000000000000000000000000000000000000000..fbc73b4b73a5e1985e8ff818a9212c89b6f0bc0d
--- /dev/null
+++ b/docs/admin_docs/source/server_maintenance.rst
@@ -0,0 +1,173 @@
+.. _server_maintenance:
+
+=============================
+GraceDB server maintenance
+=============================
+
+*Last updated 24 July 2017*
+
+This section documents procedures for performing server maintenance and upgrading the production server code.
+
+Routine maintenance
+===================
+You should upgrade the system packages at least once every two weeks.
+Do the following on a test server first, then if everything seems OK, repeat for the production server:
+
+.. code-block:: bash
+
+    # Get package updates
+    apt-get update
+    # CHECK which packages will be upgraded, removed, etc.
+    apt-get upgrade -s
+    # Install upgrades, if they look OK
+    apt-get upgrade
+    apt-get dist-upgrade
+
+Then, reboot the server, run the server code unit tests (if available), and run the client code unit tests, pointing to the server you just upgraded.
+
+When doing updates on the production server, make sure to take a snapshot of the VM first, in case something goes wrong.
+
+Server code upgrades
+====================
+First, develop and test your new features on one of the test servers.
+
+SCCB approval
+-------------
+Once you are ready to move the new features into production, you'll need to get approval from the SCCB.
+Create a new issue on the `SCCB project page <https://bugs.ligo.org/redmine/projects/sccb>`__.
+The title should be something like "GraceDB server code update (1.0.10), 4 July 2017".
+Note that you should submit these requests by Thursday at the latest if you want to implement the changes during maintenance the next Tuesday.
+
+In the issue, you should describe the features/changes/bugfixes you've implemented, along with why they are necessary and what you've done to test them.
+It's really helpful if you can get someone else with GraceDB experience to look over them in advance and approve them, too; this goes a long way with the SCCB reviewers.
+The description should also include a link to a diff of the code changes from the server code git repository; this can be master vs. the old tag or a new tag vs. the old tag, depending on whether you tag the code in advance.
+I often wait to tag the new code until it's fully in place on the production server, in case any small changes become necessary.
+
+Leave the status as 'New' and set the category to 'Requested'.
+After approval, an SCCB member will change the category to 'Approved' and the status to 'In Progress'.
+
+After you've successfully updated the server code (see next subsection), post a short note, including a permanent diff between the old tag and the new tag (if you didn't already), and change the status to 'Closed' and the category to 'Production'.
+
+Updating the production server
+------------------------------
+First, take a snapshot of the VM in case you somehow catastrophically break something.
+Pull the changes into the local master branch:
+
+.. code-block:: bash
+
+    git checkout master
+    git pull
+
+Run any database migrations (as gracedb user):
+
+.. code-block:: bash
+
+    source ~/djangoenv
+
+    # Show all migrations
+    # Those not marked with an 'X' have not been performed
+    python ~/gracedb/manage.py showmigrations
+
+    # Example migration
+    python ~/gracedb/manage.py migrate gracedb 0021
+
+If you haven't tagged this version of the code already, do that and push the tag:
+
+.. code-block:: bash
+
+    git tag -a gracedb-1.0.11 -m "tag notes"
+    git push --tags
+
+Check out the tag (the production server should **always** be on a tag):
+
+.. code-block:: bash
+
+    git checkout gracedb-1.0.11
+
+Build this documentation:
+
+.. code-block:: bash
+
+    sphinx-build -b html ~/gracedb/admin_doc/source/ ~/gracedb/admin_doc/build/
+
+At this point, you can run the client code unit tests (pointing to the production server), since they only create Test events.
+It always makes me a bit nervous to do this on the production server, so you can do some manual tests instead, like creating Test events, annotating with log messages, etc.
+Basically, do whatever it takes for you to feel confident that the changes are in place and the server is working properly.
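+
+For example, a quick manual check with the Python client might look like the following sketch (``createEvent`` and ``writeLog`` are standard ``ligo-gracedb`` client methods, but the data file name and search used here are placeholders):
+
+.. code-block:: python
+
+    from ligo.gracedb.rest import GraceDb
+
+    # Point the client at the server you just upgraded
+    client = GraceDb('https://gracedb.ligo.org/api/')
+
+    # Create a Test event from a local data file (placeholder file name)
+    response = client.createEvent('Test', 'gstlal', 'coinc.xml',
+                                  search='LowMass')
+    graceid = response.json()['graceid']
+
+    # Annotate the new event with a log message
+    client.writeLog(graceid, 'Post-upgrade sanity check')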
+
+Memory management
+=================
+To get an idea of how much memory is currently being used, you can run the same plugin that nagios uses for the memory check:
+
+.. code-block:: bash
+
+    /usr/lib/nagios/plugins/check_memory -f -C -w 10 -c 5
+
+To combat an issue with ``wsgi_daemon`` threads persisting and consuming memory, we've set up a monit instance on the production server that watches memory usage and, if it exceeds 70%, kills all ``wsgi_daemon`` processes that have been running for longer than 12 hours.
+If this doesn't work for some reason, you can:
+
+* Run the script manually:
+
+  - Log in as root
+  - Run ``/root/kill_wsgi.sh``
+
+* Kill the processes manually (not very nice, may result in dropped connections for users):
+
+  - Run ``sudo kill -s KILL $(pgrep -u wsgi_daemon)``
+
+The aforementioned issue may be resolved in the near future.
+
+Clearing file space
+===================
+Sometimes the GraceDB server's file system gets full for one reason or another.
+A few common culprits are the ``apt`` package cache and the Shibboleth cache.
+
+To clean up the ``apt`` package cache (``/var/cache/apt``):
+
+.. code-block:: bash
+
+    apt-get clean
+
+Shibboleth seems to accumulate many MB of JSON cache files in ``/var/cache/shibboleth`` on a daily basis.
+We have currently set up a cron job under the root user to clear files older than a day from this cache.
+This may be fixed in future versions of Shibboleth.
+To remove these files manually, you can do:: 
+
+    find /var/cache/shibboleth -type f -name '*.json' -mtime +1 -delete
+
+To find directories which are using significant amounts of space, you can do something like:
+
+.. code-block:: bash
+
+    sudo -i
+    cd /
+    du -h --max-depth=1
+
+and iterate from there to identify large subdirectories.
+
+Tips for emergencies
+====================
+
+1. Announce emergency maintenance, especially if downtime is expected.  You'll probably want to send this announcement to DASWG, ldg-announce, and possibly DAC and the search groups.
+2. Take a VM snapshot, if possible.
+
+Stuck/overloaded server
+-----------------------
+If the server is "stuck", you might need to:
+
+* Restart Apache: ``systemctl restart apache2``
+* Restart Shibboleth: ``systemctl restart shibd``
+* Free up some memory: see `Memory management`_.
+* Clear up some space on the file system: see `Clearing file space`_.
+* Reboot the entire server.
+
+Server code bugfixes
+--------------------
+If at all possible, you'll want to do your testing/debugging on a test server.
+To do this, you might need to copy the production database over to a test server.
+See :ref:`sql_tips` for how to do this.
+
+A few things that you may want to do after copying the database, but before beginning your debugging:
+
+* Turn off phone alerts! The ``Contact`` and ``Trigger`` instances are part of the database you just copied, so submitting test events will call and annoy people. The easiest solution is probably to delete all of the ``Contact`` and/or ``Trigger`` objects (in the copied database) through the Django shell, as in the sketch after this list.
+* You may want to turn off XMPP alerts just to be safe.
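+
+A minimal sketch of that cleanup, assuming the ``Contact`` and ``Trigger`` models live in the ``userprofile`` app (check the actual import path before running this)::
+
+    # Django shell on the TEST server, after copying the database
+    from userprofile.models import Contact, Trigger
+
+    # Delete all alert recipients and their trigger conditions
+    # from the copied database
+    Trigger.objects.all().delete()
+    Contact.objects.all().delete()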
+
diff --git a/docs/admin_docs/source/sql_tips.rst b/docs/admin_docs/source/sql_tips.rst
new file mode 100644
index 0000000000000000000000000000000000000000..58e265bb4237811c01064e44b0f808981616b874
--- /dev/null
+++ b/docs/admin_docs/source/sql_tips.rst
@@ -0,0 +1,179 @@
+.. _sql_tips:
+
+==============================
+Database interactions with SQL
+==============================
+
+*Last updated 11 Aug 2017*
+
+This section gives some tips on using the MySQL interface and some descriptions of currently available tables in the GraceDB and LVAlert databases.
+
+The first step is to log in to the MySQL interface::
+
+    sudo -i
+    mysql -u root -p
+
+Then enter the password when prompted.
+
+Collection of tricks
+====================
+
+.. NOTE::
+    It's not safe to update tables or delete rows manually through the MySQL interface, unless you **REALLY** know what you're doing (due to cross-table dependencies). The safest bet is to do it through the Django manager shell.
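+
+    For example, instead of ``DELETE FROM events_event WHERE id<4;``, a
+    cascade-safe Django shell sketch (model import path as used elsewhere
+    in these docs) would be::
+
+        # python manage.py shell
+        from events.models import Event
+
+        # Django cascades the delete across related tables
+        # (event logs, labels, etc.), unlike a raw SQL DELETE
+        Event.objects.filter(id__lt=4).delete()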
+
+Viewing data
+------------
+
+* See all databases: ``SHOW DATABASES;``
+* Select a database to use: ``USE gracedb;``
+* See tables in that database: ``SHOW TABLES;``
+* Get column names of a table: ``SHOW COLUMNS FROM events_event;``
+* Get number of rows in a table: ``SELECT COUNT(*) FROM events_event;``
+* See all data in a table: ``SELECT * FROM events_event;``
+* Get only specific columns from a table: ``SELECT id,created FROM events_event;``
+* Get a specific row: ``SELECT * FROM events_event WHERE id=2;``. Can use ``!=`` as well.
+
+* Get a set of rows matching a pattern: ``SELECT * FROM auth_user WHERE username LIKE '%albert%';``
+
+  * ``%`` is a wildcard.
+  * Can use ``NOT LIKE`` as well.
+
+* Show a few rows: ``SELECT * FROM auth_user LIMIT Ni,N;`` where ``Ni`` is the starting row and ``N`` is the number of rows to display.
+* Get unique entries in a column: ``SELECT DISTINCT(group_id) FROM events_event;``
+
+Modifying data
+--------------
+
+* Delete a row or set of rows: ``DELETE FROM events_event WHERE id<4;``
+* Edit rows matching a regex: ``UPDATE auth_user SET username='new_username' WHERE username LIKE 'albert%';``
+
+
+More complicated stuff
+----------------------
+
+* Set a variable: ``SET @var='created';``
+* Show variable value: ``SELECT @var;``
+* Get table information: ``SHOW TABLE STATUS WHERE Name='events_eventlog';``
+* Change the storage engine for a table: ``ALTER TABLE events_eventlog ENGINE=InnoDB;``
+* Check foreign key relationships::
+
+    USE information_schema;
+    SELECT table_name,column_name,referenced_table_name,referenced_column_name FROM key_column_usage;
+
+* Complex ``SELECT`` query with a join::
+
+    SELECT event.id, user.username
+    FROM events_event AS event
+    LEFT JOIN auth_user AS user ON event.submitter_id=user.id
+    WHERE event.id<4;
+
+* Complex ``UPDATE`` query with a join::
+
+    UPDATE events_event AS event
+    LEFT JOIN events_group AS grp ON event.group_id=grp.id
+    SET event.far=0
+    WHERE (grp.name='Stochastic' AND event.far > 0);
+
+Database copying and checking
+-----------------------------
+
+Make a copy of the database::
+
+    mysqldump -u gracedb -p gracedb > backup.sql
+
+Dump only specific tables::
+
+    mysqldump -u root -p gracedb events_event events_eventlog auth_user > backup.sql
+
+Use this command to import a dump file (overwrites any databases or tables which already exist)::
+
+    mysql -u gracedb -p gracedb < backup.sql
+
+Check the database for errors::
+
+    mysqlcheck -c gracedb -u gracedb -p
+
+.. NOTE::
+
+   This won't resolve foreign key issues; for example, if an event is removed but the corresponding event logs are not.
+
+GraceDB tables
+--------------
+
+The following tables should be the same across all GraceDB instances:
+
+* auth_group
+* auth_group_permissions
+* auth_permission
+* auth_user
+* auth_user_groups
+* auth_user_user_permissions
+* django_content_type
+* events_group
+* events_label
+* events_pipeline
+* events_search
+* guardian_groupobjectpermission
+* guardian_userobjectpermission
+* ligoauth_alternateemail
+* ligoauth_ligoldapuser
+* ligoauth_localuser
+* ligoauth_x509cert
+* ligoauth_x509cert_users
+
+I'm not sure what the following tables do or if they are still in use:
+
+* coinc_definer
+* coinc_event
+* coinc_event_map
+* coinc_inspiral
+* django_session
+* django_site
+* experiment
+* experiment_map
+* experiment_summary
+* events_approval
+* ligolwids
+* multi_burst (somehow related to coinc_event)
+* process
+* process_params
+* search_summary
+* search_summvars
+* sngl_inspiral
+* time_slide
+
+Notes on LVAlert databases
+==========================
+
+* To get the MySQL database password: ask Patrick Brady or the previous GraceDB developer.
+* You'll want to do ``USE openfire;`` to select the database after logging into the MySQL interface.
+
+Summary of tables
+-----------------
+All tables not listed here were found to be empty by Tanner in April 2017.
+
+* Not empty, but probably not useful
+
+  * ofID: not sure
+  * ofPrivacyList: not sure, only contains stuff from Brian Moe
+  * ofRoster: users, JIDs, nicknames, but only 4 users...
+  * ofRosterGroups: shows roster groups (what are those?)
+  * ofUserProp: something to do with the admin user
+  * ofVCard: not sure
+  * ofVersion: shows ``openfire`` version
+
+* Possibly useful
+
+  * ofOffline: shows messages not received or waiting to be received? Seems out of date.
+  * ofPresence: shows offline users (?)
+  * ofPubsubDefaultConf: shows default configuration for users? Only contains info for certain users.
+  * ofPubsubItem: shows all messages? Seems out of date
+
+* Useful
+
+  * ofProperty: properties of ``openfire`` server
+  * ofPubsubAffiliation: shows affiliations to nodes; most useful columns are nodeID, jid (username), and affiliation. Note: affiliation='none' indicates a subscriber to the node.
+  * ofPubsubNode: shows all nodes
+  * ofPubsubSubscription: shows node subscriptions
+  * ofSecurityAuditLog: log of admin actions through the console
+  * ofUser: list of users
diff --git a/docs/admin_docs/source/user_permissions.rst b/docs/admin_docs/source/user_permissions.rst
index e9625d6ee2ba95c8bc270ede896d0fe46d895d18..781969a56f252a34196c6510867ca26030cf6f74 100644
--- a/docs/admin_docs/source/user_permissions.rst
+++ b/docs/admin_docs/source/user_permissions.rst
@@ -1,16 +1,15 @@
-.. _managing_user_permissions: 
+.. _user_permissions: 
 
 ================================
 Managing user permissions
 ================================
 
-.. NOTE::
-    The examples here show how to work with permissions in the Django console.
-    I believe it is also possible to do the same thing through the admin
-    browser interface.  I personally don't like it, though, so I never use it.
+*Last updated 18 Sept 2017*
 
 .. NOTE::
-    This is a sample edit in order to prove editing functionality. 
+    The examples here show how to work with permissions in the Django console.
+    Please adapt these examples and use a database migration to implement
+    any changes to permissions.
 
 Background on the permissions infrastructure
 ============================================
@@ -49,7 +48,7 @@ of the infinitive and object, lower-cased and separated by an underscore,
 such as ``add_event``. This code name makes for a very convenient way of looking up
 permissions in the database. The ``content_type`` specifies
 exactly which model the permission refers to (e.g., the ``Event`` model
-from the ``gracedb`` app).  
+from the ``events`` app).  
 (The content type entry contains both the model name *and* the app to which 
 the model belongs because both are necessary to fully specify the model. It
 is not uncommon to have the same model name in multiple apps.)
@@ -72,7 +71,7 @@ make this easier::
 
     >>> u = User.objects.get(username='albert.einstein@LIGO.ORG')
 
-    >>> if u.has_perm('gracedb.add_event'):
+    >>> if u.has_perm('events.add_event'):
     ...:    print "Albert can add events!"
 
 Again, notice that the ``has_perm`` function needs the codename to be scoped by
@@ -163,7 +162,7 @@ The row-level extension
 The permissions described above apply at the Django model level, or
 equivalently, to entire database tables.  Thus, a user with the permission
 ``change_event`` in his or her permission set is able to change *any* entry in
-the ``gracedb_event`` table. However, GraceDB requires finer grained access
+the ``events_event`` table. However, GraceDB requires finer grained access
 controls: we need to be able to grant individual users or groups permissions
 on *individual objects*, or equivalently, individual rows of the database.
 Thus these are sometimes called *row-level* (or object-level) permissions, as opposed to the 
@@ -220,8 +219,8 @@ group of users, such as the LV-EM observers group
 *revoke* the view permissions on an event. Because releasing event information
 to non-LVC users is a sensitive matter, only the ``executives`` group is
 authorized to do this. Thus, we are using table-level permissions to authorize
-the addition and deletion of row-level permissions. Turtles all the way down
-(well, not *really*). 
+the addition and deletion of row-level permissions.
+Turtles all the way down (well, not *really*). 
 
 On permissions and searching for events
 ---------------------------------------
@@ -237,13 +236,13 @@ either an individual or group permission to view the event. There is a
 provided to do this from the guardian package::
 
     from django.contrib.auth.models import User
-    from gracedb.models import Event, Pipeline
+    from events.models import Event, Pipeline
     from guardian.shortcuts import get_objects_for_user
 
     user = User.objects.get(username='albert.einstein@LIGO.ORG')
     events = Event.objects.filter(pipeline=Pipeline.objects.get(name='gstlal'))
 
-    filtered_events = get_objects_for_user(user, 'gracedb.view_event', events)
+    filtered_events = get_objects_for_user(user, 'events.view_event', events)
 
 However, behind the scenes, this requires creating a complex join query over
 several tables, and the process is rather slow. Thus, I added a field to the
@@ -278,7 +277,7 @@ on some anecdotal testing.
 There is also a method on the ``Event`` object to refresh this permissions
 string::
 
-    from gracedb.models import Event
+    from events.models import Event
     e = Event.getByGraceid('G184098')
     e.refresh_perms()