# guardian issues
https://git.ligo.org/cds/software/guardian/-/issues

## Issue #91: Guardian raises ModuleNotFoundError on Python 3.12
https://git.ligo.org/cds/software/guardian/-/issues/91
Duncan Macleod (duncan.macleod@ligo.org), 2024-03-19

The guardian code is attempting to import the `imp` module, which was removed in Python 3.12, and so raises a `ModuleNotFoundError`:
```console
$ python3 --version
Python 3.12.0
$ git describe HEAD
1.5.1-8-g6d76afa
$ PYTHONPATH=lib python3 -c "import guardian"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/duncan/git/guardian/lib/guardian/__init__.py", line 14, in <module>
from .system import GuardSystem
File "/home/duncan/git/guardian/lib/guardian/system.py", line 5, in <module>
import imp
ModuleNotFoundError: No module named 'imp'
```

## Issue #90: Add pem to the userapps directory search
https://git.ligo.org/cds/software/guardian/-/issues/90
Thomas Shaffer, 2024-03-18

Now that there are pem guardians, it would be good to have guardian search these directories in userapps. Currently, users are creating symlinks to paths that are in the guardian search paths.

## Issue #89: filter setting fails if front end offline
https://git.ligo.org/cds/software/guardian/-/issues/89
Karla Ramirez, 2023-04-27

This was supposed to be using EPICS_NOFAIL, but it still failed with a filter channel that wasn't available:
```
2023-04-27T00:18:20.317Z ISC_LOCK [DOWN.main] ezca: L1:SUS-ETMY_L2_OLDAMP_Y_RSET => 2
2023-04-27T00:18:21.951Z ISC_LOCK [DOWN.main] ezca: L1:SUS-ETMX_L1_DRIVEALIGN_L2P_RSET => 2
2023-04-27T00:18:21.960Z ISC_LOCK [DOWN.main] ezca: L1:SUS-ETMX_L1_DRIVEALIGN_L2Y_RSET => 2
2023-04-27T00:18:22.098Z ISC_LOCK [DOWN.main] ezca: L1:GRD-SQZ_MANAGER_REQUEST => INIT
2023-04-27T00:18:34.579Z ISC_LOCK [DOWN.main] ezca: L1:ASC-DHARD_P => OFF: OUTPUT
2023-04-27T00:18:38.580Z ISC_LOCK W: Traceback (most recent call last):
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/guardian/worker.py", line 494, in run
retval = statefunc()
File "/opt/rtcds/userapps/release/isc/l1/guardian/ISC_LOCK.py", line 558, in main
ezca.switch('ASC-' + stiffness + '_' + rot, 'OUTPUT', 'OFF')
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/ezca/ezca.py", line 521, in switch
return LIGOFilter(sfm_name, self).switch(*args, **kwargs)
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/ezca/ligofilter.py", line 489, in switch
mask |= self.__get_mask('on', switch_actions['ON'])
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/ezca/ligofilter.py", line 600, in __get_mask
for sw_name, current_value in self.get_current_switch_dict().items():
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/ezca/ligofilter.py", line 242, in get_current_switch_dict
return {sw_name: int(getattr(self, '_'+sw_name+'R').get()) for sw_name in const.FILTER_SW_NAMES}
File "/opt/conda/base/envs/cds-py39-2023030301/lib/python3.9/site-packages/ezca/ligofilter.py", line 242, in <dictcomp>
return {sw_name: int(getattr(self, '_'+sw_name+'R').get()) for sw_name in const.FILTER_SW_NAMES}
ValueError: cannot convert float NaN to integer
```

## Issue #88: NodeManager should be able to access node index
https://git.ligo.org/cds/software/guardian/-/issues/88
Thomas Shaffer, 2023-02-17

It would be very helpful for a manager to get the index of a subordinate.

(Assignee: Thomas Shaffer)

## Issue #87: Node needed a restart after the name of the nominal state was changed
https://git.ligo.org/cds/software/guardian/-/issues/87
Camilla Compton, 2023-02-17

![image](/uploads/c0ff344e4bf401a87324b3e886952a33/image.png)
This is the error we saw once we reloaded the VIOLIN_DAMPING guardian after the old nominal state DAMPING_ON_DC was removed and renamed to DAMP_VIOLINS_FULL_POWER. It was solved by running `guardctrl restart VIOLIN_DAMPING`. [Svn commit 24745](https://redoubt.ligo-wa.caltech.edu/viewvc/cds_user_apps?view=revision&revision=24745).
@thomas-shaffer thinks this shouldn't have been needed.

## Issue #86: Guardian returned True in the middle of main state?
https://git.ligo.org/cds/software/guardian/-/issues/86
Jenne Driggers, 2023-01-24

![OMC_guardian_more](/uploads/8d1b314809784dea1e6956a2d6569082/OMC_guardian_more.png)
From LHO control room chat: https://chat.ligo.org/ligo/pl/84zwqfdd83r4dfki4qwtz8bj3w
Sheila: @Jameson Rollins @Thomas Shaffer Here is a screenshot of the OMC guardian when it gets a redirect request while in the OMC_LSC_ON state. It does some but not all of the steps in OMC_LSC_ON, but then returns the next state, OMC_LOCKED, before completing main.
1:09 PM
So the IFO transitioned to DC readout even though the OMC wasn't locked. We only noticed this because, by luck, the OMC was staying close enough to resonance to keep the IFO locked.
Jenne: One thing I wonder (haven't figured out yet) is if ISC_LOCK was requesting the OMC_LOCK to DOWN, but then OMC_LOCK was able to jump from halfway through the main state it was in to the next in sequence, rather than to DOWN.
Sheila: yeah, I think there's no situation where it should have returned true halfway through main. And it has to return true to move on to the next state in the graph. But there might be a bug in the way the redirect is handled that allows this to happen?

## Issue #83: Manager error of "invalid request" for valid subordinate state request
https://git.ligo.org/cds/software/guardian/-/issues/83
Thomas Shaffer, 2023-01-20

July 21 16:29:26.62 UTC -H1- ISC_LOCK requested that the INIT_ALIGN node go to the INIT state. ISC_LOCK threw "guardian.manager.NodeError: INIT_ALIGN: Invalid REQUEST: INIT". INIT_ALIGN still saw the request and transitioned to the INIT state, but it never recorded that it saw the request in the REQUEST_N channel (the only request channel we can trend).
![grd_init_align_error_log](/uploads/e97cbc0d3bc4f8a205d38928b567d27f/grd_init_align_error_log.png)
![grd_init_align_missing_request](/uploads/730b6c8a4be8f22436855d1cff8ca55e/grd_init_align_missing_request.png)

## Issue #85: guardctrl not able to show pre-update log entries
https://git.ligo.org/cds/software/guardian/-/issues/85
Erik von Reis, 2022-12-14

The guardctrl log command cannot show logs from before the update.
At LHO I upgraded Guardian to deb 11. A new log directory was automatically created. The log files have a new (seemingly random) hash embedded in their name. Old log files are still preserved in their original directory.

## Issue #84: guardmedm --status incorrectly displays system
https://git.ligo.org/cds/software/guardian/-/issues/84
Rhys Poulton, 2022-08-03

When using the command `guardmedm --status <system name>`, it creates a "status line" for each character, e.g. using the command:
```
guardmedm --status ITF_LOCK
```
I see the following:
![Capture](/uploads/6e1c1f9919c6ac01ecc5aa7e58fe52ef/Capture.PNG)
where a status line is created for each character.
This is because of the `nargs='?'` [here](https://git.ligo.org/cds/software/guardian/-/blob/master/lib/guardian/cli.py#L8), which means the positional arguments are collected into a single string. This creates a problem when `guardmedm` loops over `args.system` [here](https://git.ligo.org/cds/software/guardian/-/blob/master/lib/guardian/medm/__main__.py#L113). So I can see two ways to solve this:
1. Have `guardmedm` separately parse the `system` argument with `nargs='+'`
2. Allow `guardmedm --status` to show the status of only a single system.
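The character-by-character behaviour is easy to reproduce with a standalone `argparse` sketch (the system names below are illustrative): with `nargs='?'` the positional is stored as one string, so iterating over it yields characters, while `nargs='+'` stores a list and the same loop sees whole names.

```python
import argparse

# nargs='?' stores the positional as a single string; looping over
# args.system then visits one character at a time, which is what
# produces a "status line" per character.
p1 = argparse.ArgumentParser()
p1.add_argument("system", nargs="?")
args1 = p1.parse_args(["ITF_LOCK"])
print([s for s in args1.system])  # ['I', 'T', 'F', '_', 'L', 'O', 'C', 'K']

# nargs='+' stores a list of strings, so the same loop sees system names.
p2 = argparse.ArgumentParser()
p2.add_argument("system", nargs="+")
args2 = p2.parse_args(["ITF_LOCK", "ISC_LOCK"])
print([s for s in args2.system])  # ['ITF_LOCK', 'ISC_LOCK']
```

Option 1 above would amount to switching the `system` argument to `nargs='+'` in the parser that `guardmedm` uses.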
@jameson.rollins what are your thoughts on this?

## Issue #82: Separate guardmedm into a new package
https://git.ligo.org/cds/software/guardian/-/issues/82
Rhys Poulton, 2022-07-20

`MEDM` is only required at workstations where one would interact with guardian processes over EPICS; it is not needed on machines where guardian daemons run in a headless environment. It would therefore make sense to separate `guardmedm` out into another package that is deployed only at workstations, not on the machines where the guardian daemons run.

## Issue #81: guardutil not compatible with latest version of networkx
https://git.ligo.org/cds/software/guardian/-/issues/81
Erik von Reis, 2022-06-29

See the control room ticket here:
https://git.ligo.org/cds/admin/lho/-/issues/7
With NetworkX version 2.8.4,
```guardutil graph ISC_LOCK```
gives the following stack trace:
```
/opt/rtcds/userapps/release/sys/common/guardian/cdslib.py:52: FutureWarning: Possible nested set at position 1
re_fec = re.compile('[[].*:FEC-(.*)_CPU_METER[]]')
Traceback (most recent call last):
File "/ligo/home/erik.vonreis/.conda/envs/networkx/bin/guardutil", line 11, in <module>
sys.exit(main())
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/guardian/guardutil.py", line 505, in main
args.func(args)
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/guardian/guardutil.py", line 148, in draw_graph
dot = graph.sys2dot(system,
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/guardian/graph.py", line 201, in sys2dot
dot = to_pydot(G)
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 279, in to_pydot
any(
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 280, in <genexpr>
(_check_colon_quotes(k) or _check_colon_quotes(v))
File "/ligo/home/erik.vonreis/.conda/envs/networkx/lib/python3.9/site-packages/networkx/drawing/nx_pydot.py", line 199, in _check_colon_quotes
return ":" in s and (s[0] != '"' or s[-1] != '"')
TypeError: argument of type 'bool' is not iterable
```
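For reference, the failing `_check_colon_quotes` call in the traceback reduces to a `":" in s` membership test on each attribute value, which raises exactly this `TypeError` when a graph attribute is a bool instead of a string. A minimal sketch of that failure mode (the boolean value here is illustrative, not guardian's actual graph attribute):

```python
# networkx 2.8's to_pydot decides whether to quote node/edge attributes for
# pydot by testing `":" in s`; a membership test on a bool raises TypeError,
# which is what surfaces when the graph carries non-string attribute values.
value = True  # e.g. a boolean edge attribute
try:
    needs_quotes = ":" in value
except TypeError as exc:
    print(exc)  # argument of type 'bool' is not iterable
```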
but it works with networkx 2.5.

## Issue #79: guardlog should produce informative message if there are no logs available for the given search
https://git.ligo.org/cds/software/guardian/-/issues/79
Jameson Rollins (jameson.rollins@ligo.org), 2022-05-23

This is prompted by an issue that came up at LLO where the guardlog window was closing immediately, which is probably because there are no logs between the specified times:
https://services1.ligo-la.caltech.edu/FRS/show_bug.cgi?id=23791

## Issue #80: guardlog should never limit number of output log lines
https://git.ligo.org/cds/software/guardian/-/issues/80
Jameson Rollins (jameson.rollins@ligo.org), 2022-05-23

This should just be a matter of adjusting the options sent to journalctl so that it does not limit the number of log lines.

## Issue #34: Manager still makes request to node manually unmanaged
https://git.ligo.org/cds/software/guardian/-/issues/34
Thomas Shaffer, 2022-05-19

As of Guardian version 1.2.0 this is a feature of the "loose" management style. The problem comes when a user who is working on a node manually UNmanages it. It may seem obvious to the user that the manager will no longer make requests, but this is not true.
Example:
While working on the SQZ_MANAGER, Sheila had unmonitored this node to prevent ISC_LOCK (the manager) from moving it around. ISC_LOCK went to DOWN and was still able to make a request to SQZ_MANAGER.
ISC_LOCK log:
```
2019-06-10_20:42:18.700866Z ISC_LOCK executing state: DOWN (10)
2019-06-10_20:42:18.709163Z ISC_LOCK [DOWN.main] USERMSG 0: SQZ_MANAGER: REQUEST CHANGED (was: SQZ_READY_IFO, now: LOCKED_SEED_LOW)
2019-06-10_20:42:18.709833Z ISC_LOCK [DOWN.main] USERMSG 1: SQZ_MANAGER: NOT IN MANAGED MODE
2019-06-10_20:42:18.710262Z ISC_LOCK [DOWN.main] USERMSG 2: LASER_PWR: REQUEST CHANGED (was: POWER_35W, now: POWER_37W)
2019-06-10_20:42:18.713993Z ISC_LOCK [DOWN.main] Unstalling VIOLIN_DAMPING
2019-06-10_20:42:18.747731Z ISC_LOCK [DOWN.main] ezca: H1:SUS-ETMY_M0_DARM_DAMP_V_GAIN => 0
2019-06-10_20:42:18.791187Z ISC_LOCK [DOWN.main] ezca: H1:GRD-ALS_COMM_REQUEST => DOWN
2019-06-10_20:42:18.792761Z ISC_LOCK [DOWN.main] ezca: H1:GRD-ALS_DIFF_REQUEST => DOWN
2019-06-10_20:42:18.794219Z ISC_LOCK [DOWN.main] ezca: H1:GRD-ISC_DRMI_REQUEST => DOWN
2019-06-10_20:42:18.795783Z ISC_LOCK [DOWN.main] ezca: H1:GRD-OMC_LOCK_REQUEST => DOWN
2019-06-10_20:42:18.797656Z ISC_LOCK [DOWN.main] ezca: H1:GRD-IMC_LOCK_REQUEST => LOCKED
2019-06-10_20:42:18.799524Z ISC_LOCK [DOWN.main] ezca: H1:GRD-ALS_XARM_REQUEST => UNLOCKED
2019-06-10_20:42:18.801094Z ISC_LOCK [DOWN.main] ezca: H1:GRD-ALS_YARM_REQUEST => UNLOCKED
2019-06-10_20:42:18.802933Z ISC_LOCK [DOWN.main] ezca: H1:GRD-LASER_PWR_REQUEST => POWER_2W
2019-06-10_20:42:18.804507Z ISC_LOCK [DOWN.main] ezca: H1:GRD-PSL_FSS_REQUEST => READY_FOR_MC_LOCK
2019-06-10_20:42:18.807749Z ISC_LOCK [DOWN.main] ezca: H1:GRD-FAST_SHUTTER_REQUEST => DOWN
2019-06-10_20:42:18.809209Z ISC_LOCK [DOWN.main] ezca: H1:GRD-VIOLIN_DAMPING_REQUEST => TURN_OFF_DAMPING_ALL
2019-06-10_20:42:18.810686Z ISC_LOCK [DOWN.main] ezca: H1:GRD-SQZ_MANAGER_REQUEST => SQZ_READY_IFO
```
SQZ_MANAGER log:
```
2019-06-10_20:42:18.846154Z SQZ_MANAGER REQUEST: SQZ_READY_IFO
2019-06-10_20:42:18.846154Z SQZ_MANAGER calculating path: LOCKED_SEED_LOW->SQZ_READY_IFO
2019-06-10_20:42:18.846154Z SQZ_MANAGER new target: DOWN
2019-06-10_20:42:18.846154Z SQZ_MANAGER GOTO REDIRECT
```
H1:GRD-SQZ_MANAGER_MODE was 0 (Auto) at that time.

(Milestone: guardian 1.5; assignee: Jameson Rollins)

## Issue #78: Test suite fails on macOS 'cannot pickle module object'
https://git.ligo.org/cds/software/guardian/-/issues/78
Duncan Macleod (duncan.macleod@ligo.org), 2022-04-22

I am observing test failures on macOS as follows:
```
$ bash -e guardian-test
...
graph: Testing graph state/path execution
PASS non-existant state
PASS non-existant request state
FAIL state A
--- graph.3.expected 2022-04-22 14:34:46.000000000 +0000
+++ graph.3.output 2022-04-22 14:34:46.000000000 +0000
@@ -1,8 +0,0 @@
-T1:TEST-A..0..
-T1:TEST-B..0..
-T1:TEST-C..0..
-T1:TEST-D..0..
-T1:TEST-A..1..
-T1:TEST-A..2..
-T1:TEST-A..3..
-T1:TEST-A..4..
2022-04-22T14:34:46.481Z TEST0 Guardian v1.4.4
2022-04-22T14:34:46.482Z TEST0 single-shot mode
2022-04-22T14:34:46.482Z TEST0 TIME_INIT => 1650638086
2022-04-22T14:34:46.482Z TEST0 OP => EXEC
2022-04-22T14:34:46.482Z TEST0 LOGLEVEL => DEBUG
2022-04-22T14:34:46.482Z TEST0 STATE => A
2022-04-22T14:34:46.482Z TEST0 TARGET => A
2022-04-22T14:34:46.482Z TEST0 REQUEST => A
2022-04-22T14:34:46.482Z TEST0 NOMINAL => NONE
2022-04-22T14:34:46.482Z TEST0 SPM_MONITOR_NOTIFY => True
2022-04-22T14:34:46.482Z TEST0 STATUS => INIT
2022-04-22T14:34:46.482Z TEST0 system name: TEST0
2022-04-22T14:34:46.482Z TEST0 system CA prefix: TEST
2022-04-22T14:34:46.482Z TEST0 module path: /Users/duncan/git/guardian/test/modules/TEST0.py
2022-04-22T14:34:46.483Z TEST0 system states: ['C', 'B', 'A', 'J', 'INIT']
2022-04-22T14:34:46.483Z TEST0 request states: ['C', 'A', 'INIT']
2022-04-22T14:34:46.483Z TEST0 initial state: A
2022-04-22T14:34:46.483Z TEST0 initial request: A
2022-04-22T14:34:46.483Z TEST0 CA setpoint monitor: False
2022-04-22T14:34:46.483Z TEST0 CA setpoint monitor notify: True
2022-04-22T14:34:46.483Z TEST0 daemon initialized
2022-04-22T14:34:46.483Z TEST0 ============= daemon start =============
2022-04-22T14:34:46.483Z TEST0 initializing worker...
2022-04-22T14:34:46.513Z TEST0 W: initialized
2022-04-22T14:34:46.516Z TEST0 stopping daemon...
2022-04-22T14:34:46.516Z TEST0 daemon stopped.
Traceback (most recent call last):
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/duncan/git/guardian/lib/guardian/__main__.py", line 289, in <module>
main()
File "/Users/duncan/git/guardian/lib/guardian/__main__.py", line 280, in main
guard.run()
File "/Users/duncan/git/guardian/lib/guardian/daemon.py", line 384, in run
self.worker_init()
File "/Users/duncan/git/guardian/lib/guardian/daemon.py", line 189, in worker_init
self.worker.start()
File "/Users/duncan/git/guardian/lib/guardian/worker.py", line 173, in start
super(Worker, self).start()
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/duncan/opt/mambaforge/envs/guardian/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'module' object
```
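The `popen_spawn_posix` frames in the traceback indicate the worker is being started with the 'spawn' start method (the macOS default since Python 3.8), which pickles the process object before the child starts; 'fork' (the Linux default) does no such pickling. A minimal sketch of the same failure, using a hypothetical worker that is handed a module object:

```python
import multiprocessing as mp
import sys  # a module object; module objects cannot be pickled

def work(arg):
    """Trivial stand-in for guardian's worker loop (hypothetical)."""
    pass

if __name__ == "__main__":
    # Under 'spawn', the Process object and its arguments are pickled
    # before the child starts, so anything holding a reference to a module
    # object fails to serialize. Under 'fork' no pickling happens, so the
    # same code runs without error.
    ctx = mp.get_context("spawn")
    proc = ctx.Process(target=work, args=(sys,))
    try:
        proc.start()
    except TypeError as exc:
        print(exc)  # cannot pickle 'module' object
```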
This is not observed on Linux.

## Issue #77: Can't display node graphs on my remote connection
https://git.ligo.org/cds/software/guardian/-/issues/77
Georgia Mansell, 2021-05-05

When I'm using x2go I can't get the state graph to show. I tried pushing the button on the medm screen and it never displays; I tried running ```guardutil graph``` but get the error attached. Halp!!
[Screen_Shot_2021-05-03_at_12.25.22_PM](/uploads/7cbdc0b4deedfdc58809d53ab459e77f/Screen_Shot_2021-05-03_at_12.25.22_PM.png)

## Issue #76: ezca write process reports a change in EPICS record in guardlog, but doesn't actually write the value
https://git.ligo.org/cds/software/guardian/-/issues/76
Jeffrey Kissel, 2021-04-21

Exposed this issue using lines 256 - 263 in [^/trunk/isi/h1/guardian/SEI_ENV.py](https://redoubt.ligo-wa.caltech.edu/svn/cds_user_apps/trunk/isi/h1/guardian/SEI_ENV.py) (happens both before and after the sleep(0.1) we installed superstitiously) (see context in [LHO aLOG 58672](https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=58672)):
We found intermittent problems with setting EPICS variables via the guardian's ezca interface. When writing to a set of similar channels in a for loop, the guardian log would report that every channel got written, but for one or two of the channels the value never actually got written (the rest did).
We exercised the CONSTRUCTION state 3 times, and each of the three times we got a different result: (1) ITMY ST1 and ITMX ST2 actuator allowable counts were not set to max, (2) only ITMY ST1 was not set, and (3) it all worked as written. In cases (1) and (2), the guardian log reported a change in the variable each time, but MEDM and ndscope revealed that the value never changed.
Anecdotally, Jenne and Sheila report past experience with this too.
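Until the root cause is understood, one defensive pattern for intermittent dropped writes is to read each channel back and retry until the readback matches. This is only a sketch: `write_with_verify` and the dictionary stand-in below are hypothetical illustrations of the pattern, not ezca's actual API.

```python
import time

def write_with_verify(read, write, channel, value, tries=3, delay=0.05):
    """Write `value` to `channel`, then read it back; retry on mismatch.

    `read` and `write` are callables standing in for an EPICS interface
    (hypothetical, not guardian/ezca API). Returns True once the readback
    matches, False if all tries fail.
    """
    for _ in range(tries):
        write(channel, value)
        time.sleep(delay)  # give the channel-access put time to propagate
        if read(channel) == value:
            return True
    return False

# Toy in-memory stand-in for an EPICS database, to show the call pattern:
db = {}
ok = write_with_verify(db.get, db.__setitem__, "H1:ISI-ITMY_ST1_EXAMPLE", 60000)
print(ok)  # True
```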
@thomas-shaffer can provide a bit more explicit detail.

## Issue #73: reload of node doesn't find itself???
https://git.ligo.org/cds/software/guardian/-/issues/73
Jameson Rollins (jameson.rollins@ligo.org), 2020-09-14

Somehow a reload of the IMC_LOCK node couldn't find its own module:
```
2020-09-14_21:09:13.645662Z IMC_LOCK executing state: OFFLINE (80)
2020-09-14_21:09:13.646238Z IMC_LOCK [OFFLINE.enter]
2020-09-14_21:09:13.899447Z IMC_LOCK [OFFLINE.main] ezca: H1:IMC-DOF_2_P => OFF: OFFSET
2020-09-14_21:09:14.151522Z IMC_LOCK [OFFLINE.main] ezca: H1:IMC-DOF_1_Y => OFF: OFFSET
2020-09-14_21:10:09.444205Z IMC_LOCK LOAD REQUEST
2020-09-14_21:10:09.446029Z IMC_LOCK RELOAD requested. reloading system data...
2020-09-14_21:10:09.458197Z IMC_LOCK Traceback (most recent call last):
2020-09-14_21:10:09.458197Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 566, in run
2020-09-14_21:10:09.458197Z self.reload_system()
2020-09-14_21:10:09.458197Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 327, in reload_system
2020-09-14_21:10:09.458197Z self.system.load()
2020-09-14_21:10:09.458197Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 400, in load
2020-09-14_21:10:09.458197Z module = self._load_module()
2020-09-14_21:10:09.458197Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 287, in _load_module
2020-09-14_21:10:09.458197Z self._module = self._import(self._modname)
2020-09-14_21:10:09.458197Z File "/usr/lib/python3/dist-packages/guardian/system.py", line 159, in _import
2020-09-14_21:10:09.458197Z module = _builtin__import__(name, *args, **kwargs)
2020-09-14_21:10:09.458197Z File "<frozen importlib._bootstrap>", line 1086, in __import__
2020-09-14_21:10:09.458197Z File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
2020-09-14_21:10:09.458197Z File "<frozen importlib._bootstrap>", line 983, in _find_and_load
2020-09-14_21:10:09.458197Z File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
2020-09-14_21:10:09.458197Z ModuleNotFoundError: No module named 'IMC_LOCK'
2020-09-14_21:10:09.460251Z IMC_LOCK LOAD ERROR: see log for more info (LOAD to reset)
```

## Issue #72: node crash when timer object improperly defined
https://git.ligo.org/cds/software/guardian/-/issues/72
Jameson Rollins (jameson.rollins@ligo.org), 2020-09-14

In this case the timer had been defined without specifying a name; we should protect against this somehow:
```
2020-09-14_21:02:04.429224Z IMC_LOCK executing state: INIT (0)
2020-09-14_21:02:04.430158Z Process Worker-1:
2020-09-14_21:02:04.431279Z Traceback (most recent call last):
2020-09-14_21:02:04.431279Z File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
2020-09-14_21:02:04.431279Z self.run()
2020-09-14_21:02:04.431648Z File "/usr/lib/python3/dist-packages/guardian/worker.py", line 302, in run
2020-09-14_21:02:04.431648Z state.destroy()
2020-09-14_21:02:04.431648Z File "/usr/lib/python3/dist-packages/guardian/state.py", line 139, in destroy
2020-09-14_21:02:04.431648Z self.timer.destroy()
2020-09-14_21:02:04.431648Z AttributeError: 'int' object has no attribute 'destroy'
2020-09-14_21:02:04.444455Z IMC_LOCK stopping daemon...
2020-09-14_21:02:04.447260Z IMC_LOCK daemon stopped.
2020-09-14_21:02:04.447451Z Traceback (most recent call last):
2020-09-14_21:02:04.447451Z File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
2020-09-14_21:02:04.448347Z "__main__", mod_spec)
2020-09-14_21:02:04.448347Z File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
2020-09-14_21:02:04.448347Z exec(code, run_globals)
2020-09-14_21:02:04.448347Z File "/usr/lib/python3/dist-packages/guardian/__main__.py", line 289, in <module>
2020-09-14_21:02:04.448347Z main()
2020-09-14_21:02:04.448347Z File "/usr/lib/python3/dist-packages/guardian/__main__.py", line 280, in main
2020-09-14_21:02:04.448989Z guard.run()
2020-09-14_21:02:04.448989Z File "/usr/lib/python3/dist-packages/guardian/daemon.py", line 452, in run
2020-09-14_21:02:04.449237Z raise GuardDaemonError("worker exited unexpectedly, exit code: %d" % self.worker.exitcode)
2020-09-14_21:02:04.449237Z guardian.daemon.GuardDaemonError: worker exited unexpectedly, exit code: 1
2020-09-14_21:02:04.510384Z guardian@IMC_LOCK.service: Main process exited, code=exited, status=1/FAILURE
2020-09-14_21:02:04.510904Z guardian@IMC_LOCK.service: Failed with result 'exit-code'.
```https://git.ligo.org/cds/software/guardian/-/issues/71Using incorrect channel prefix for DIAG_L0 node to channel list file2020-09-03T20:34:56ZKeith ThorneUsing incorrect channel prefix for DIAG_L0 node to channel list fileThe L1 DIAG_L0 Guardian uses the L0 prefix, so channels are L0:GRD-DIAG_L0_*. However, when 'guardctrl enable...' is done for any node, it writes them as L1:GRD-DIAG_L0_* in L1EPICS_GRD.ini. Only work-around currently is to hand-edit t...The L1 DIAG_L0 Guardian uses the L0 prefix, so channels are L0:GRD-DIAG_L0_*. However, when 'guardctrl enable...' is done for any node, it writes them as L1:GRD-DIAG_L0_* in L1EPICS_GRD.ini. Only work-around currently is to hand-edit that file as the 'guardian' user from l1guardian.
This was present in the Python2 version on Debian 9 as well.

(Assignee: Jameson Rollins)