2017 November
Nov 07
12:28 UT There have been some notable issues in the past few days. The first is that, around Nov. 1, I rebooted the ACC, which loaded it from the disk copy of the LabVIEW code. After that time, we noticed some glitches that I recalled from a previous time, but I could not recall the source. Looking at some old emails from Aug. 31 (see the Aug. 31 observing log entry for more information), we realized that the glitches were solved by loading the ACC code from memory, i.e., by downloading the source code from the Win computer. This is done by starting LabVIEW and launching Targets/ACC/Startup/ACC Master from the project page. After the target is running on the ACC, right-click and select Disconnect from the menu. At present, we cannot explain why this should be any different from running it from the disk copy, but when run this way the glitches do disappear. The ACC was reloaded in this way on Nov. 6. We did an experiment by recompiling and deploying the Win1 copy back to the ACC, and that did not change the behavior. So there are a couple of scans on Nov. 6 that have the glitches.
A completely separate issue is that the dynamic spectrum seen in the daily spectrogram overview plots began to show an amplitude pattern starting around Oct 22 that became progressively more pronounced until the ACC was rebooted on Nov. 1. After that, the spectrogram went back to its usual appearance. I am not sure whether calibration of the data from Oct 22 to Nov 1 will be impacted, but this does seem to be an issue with synchronization of the DCM attenuation cycling. The lesson is that we should reboot the ACC from time to time, perhaps once per week.
Nov 27
There was a rather pervasive power outage this morning, which took a lot of effort to recover from. Here is the list of actions:
- Most crios were down. Efforts by Gelu, Natsuha, and me were needed to get them working. The crio on ant11 is still down.
- The network port to ant1 was down. I reset the port, but then had to cycle the power on both ant1 and crio1 to get their network addresses and NTP working.
- The front end on ant1 was also not working (probably the same reason) so I had to cycle its power.
- Tawa did not reboot properly, so I cycled its power. It is now up.
- The DPP was up, but the Myricom driver did not load, so I had to load it manually to recover the interfaces eth2 and eth3 (see the driver-reload sketch at the end of this entry).
- The packets were not all there from the correlator, so I reloaded the correlator design. That means we have to remeasure the delays on a calibrator, but as luck would have it, we have high wind conditions. As soon as a calibrator measurement is possible, we need to set the delays. Delays were set at ~ 15:00 UT, and were refined just for ants 6 and 8 at ~00:50 UT next day.
- The ant14 control system was down. I had to do the following:
restart the amplifier bias program on the lna14 machine.
stop and start the control system on feanta (basically like issuing ctlstop and ctlgo from helios).
cycle the power on the BRICK (outlet 3).
issue the FRM-HOME ant14 command.
issue the RX-SELECT HI ant14 command.
issue the $LNA-INIT command (also had to fudge the HH drain voltage value).
- The schedule was not communicating with the SQL server, so I had to close the program and restart it (see the connectivity-check sketch at the end of this entry).
At the end of the day, all antennas except for 11 & 13 (and 3, which has been down for a while due to a gear box failure) are back.
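For reference, here is a minimal sketch (in Python) of the kind of manual driver recovery described for the DPP above. It assumes the standard Linux myri10ge kernel module and the eth2/eth3 interface names from the log; the exact module name and configuration used on the DPP are not recorded here, so treat this as an illustration rather than the exact commands issued.

 # Sketch only: reload the Myricom 10GbE driver and bring the capture interfaces up.
 # Assumes the stock Linux "myri10ge" module and the eth2/eth3 names from the log;
 # must be run as root on the DPP.
 import subprocess

 def module_loaded(name):
     # True if the named kernel module appears in the lsmod listing
     out = subprocess.check_output(['lsmod']).decode()
     return any(line.split()[:1] == [name] for line in out.splitlines()[1:])

 def recover_interfaces(module='myri10ge', interfaces=('eth2', 'eth3')):
     if not module_loaded(module):
         subprocess.check_call(['modprobe', module])                 # load the driver
     for iface in interfaces:
         subprocess.check_call(['ip', 'link', 'set', iface, 'up'])   # bring each interface up

 if __name__ == '__main__':
     recover_interfaces()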
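Similarly, a minimal sketch of a connectivity check one could run before restarting the schedule program when it stops talking to the SQL server. The driver, server name, database, and credentials below are placeholders (assumptions), not the actual EOVSA configuration; it only illustrates testing the connection with pyodbc.

 # Sketch only: test whether the SQL server can be reached.
 # All connection parameters are placeholders, not the real EOVSA settings.
 import pyodbc

 def sql_server_reachable(server='sqlserver.example.com', database='eOVSA_db', timeout=5):
     # Returns True if a connection can be opened within the login timeout.
     conn_str = ('DRIVER={FreeTDS};SERVER=%s;PORT=1433;DATABASE=%s;'
                 'UID=username;PWD=password' % (server, database))
     try:
         conn = pyodbc.connect(conn_str, timeout=timeout)
         conn.close()
         return True
     except pyodbc.Error:
         return False

 if __name__ == '__main__':
     print('SQL server reachable:', sql_server_reachable())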
Nov 28
Ants 11 and 13 remain down due to a motion fault in one of their axes, according to Gelu.