The code pipeline for light-curve generation and analysis of AGNs. Originally developed by Xiaoyi Ma for use on ACT blazars. Copied to ACTCollaboration on Aug 13, 2025.
To get light curves across all arrays for one AGN, supply the name of the AGN (`name`) and its coordinates (`RA`, `DEC`) and follow these steps (a concrete example is given after the list):
1. `sbatch get_script.slurm name RA DEC`
2. `sh submit.sh name`
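For example, for a hypothetical source (the name and coordinate values below are placeholders, not a real target):

```sh
# Step 1: generate the per-array map-making scripts for this source.
sbatch get_script.slurm J0001+0001 10.5 -2.3   # name RA DEC (placeholder values)
# Step 2: submit the generated map-making jobs.
sh submit.sh J0001+0001
```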
Here is the list of code:
| Name of the code | Purpose | Description | Outputs |
|---|---|---|---|
| maps_13.py | Map Making | Generates, for a given source, frequency, and array, the SLURM script that makes the thumbnail maps and the corresponding light curves, plus text files for one-day map selection and the start times of the maps, covering S13 to S16 (dataset: mr3f). For these seasons the raw maps must be copied from the projects directory to the scratch directory before the thumbnail maps can be made; a text file listing the raw maps to copy is also produced. | time_XXX.txt; selection_XXX.test; test_XXX.sh (script to make an individual thumbnail map); test_XXX.slurm (runs test_XXX.sh for each timestamp in time_XXX.txt); tod_XXX.txt |
| maps_17.py | Map Making | Generates, for a given source and frequency, the SLURM script that makes the thumbnail maps and the corresponding light curves, plus the start times of the maps, covering S17 to S22 (dataset: dr6v4) | time_XXX.txt; selection_XXX.test; test_XXX.sh (script to make an individual thumbnail map); test_XXX.slurm (runs test_XXX.sh for each timestamp in time_XXX.txt) |
| tod2map2.py | Map Making | The code that creates the one-day thumbnail maps (referenced by the scripts above) | thumbnail maps |
| get_script.slurm | Map Making | Takes the name and location (RA, Dec) of the source as input (set in the script), calls the map-making Python code above, and outputs the ten SLURM files that call the one-day map-making code (tod2map2.py) | thumbnail maps for each AGN |
| submit.sh | Map Making | Bash script that submits the SLURM script (get_script.slurm) to the cluster, which calls the one-day map-making code (tod2map2.py) and creates the one-day maps in scratch | thumbnail maps for each AGN |
| get_script.sh | Map Making | Bash script that runs submit.sh for all AGNs | thumbnail maps for all AGNs |
| --- | --- | --- | --- |
| AGN_data_new.py | Light curve | Extracts the brightness of the central source in each map after applying the matched filter, and rejects maps with bad pixels at or near the source | light curves with data flagged due to missing/bad hits |
| --- | --- | Option 1: submit separate SLURM scripts to get the light curves for each AGN after the thumbnail maps are generated | --- |
| data.slurm | Light curve | SLURM script that runs AGN_data_new.py to get the light curve for each source across all frequencies and arrays | |
| get_data.sh | Light curve | Script that submits data.slurm for all sources on the cluster | light curves with data flagged due to missing/bad hits |
| --- | --- | Option 2: submit one SLURM script to get the light curves for all AGNs after the thumbnail maps are generated | --- |
| data.sh | Light curve | Script that gets the light curve of a single AGN across all arrays | light curves with data flagged due to missing/bad hits |
| data.slurm | Light curve | SLURM script that runs data.sh for all AGNs | light curves with data flagged due to missing/bad hits |
| --- | --- | --- | --- |
| Light_curve.ipynb | Light curve | Rejects data points by visual inspection, generates the light-curve figure and data with the rejected points taken into account, and calibrates against Planck data | light curves with data flagged due to missing/bad hits and manual inspection, plus calibration flags |
| --- | --- | --- | --- |
| PCA_simulated.py | PCA | Gets the common mode for simulated light curves | simulated data |
| PCA.ipynb | PCA | Gets the common mode for the light curves of all sources | --- |
| --- | --- | --- | --- |
| stat_prop.ipynb | Stats | Computes the statistical properties of the light curves (structure function, variability index, etc.) | statistical-property summaries |
| Hist_summary.ipynb | Stats | Generates histograms of the statistical properties | histograms |
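As an illustration of the kind of statistic stat_prop.ipynb computes, here is a minimal sketch of a first-order structure function for an irregularly sampled light curve; the function name, binning, and normalization are assumptions for illustration, not the notebook's actual implementation:

```python
import numpy as np

def structure_function(t, f, bins):
    """First-order structure function: mean squared flux difference
    binned by time lag. (Illustrative definition, not the notebook's code.)"""
    t, f = np.asarray(t), np.asarray(f)
    i, j = np.triu_indices(len(t), k=1)   # all pairs with i < j
    lags = np.abs(t[j] - t[i])            # time lag of each pair
    dsq = (f[j] - f[i]) ** 2              # squared flux difference of each pair
    which = np.digitize(lags, bins)       # assign each pair to a lag bin
    return np.array([dsq[which == k].mean() if np.any(which == k) else np.nan
                     for k in range(1, len(bins))])

# Example on synthetic data: 50 irregularly sampled points over 100 days.
t = np.sort(np.random.uniform(0, 100, 50))
f = np.random.normal(10.0, 1.0, 50)       # flux in mJy, say
sf = structure_function(t, f, bins=np.linspace(0, 50, 11))
```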
The final data product will have 6 columns:
- Time
- Flux [mJy]
- Measurement uncertainty from each single map [mJy]
- Hits map rejection indicator (0 means rejected)
- Visual inspection rejection indicator (0 means rejected)
- Calibration indicator (0 means no calibration with Planck)
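A minimal sketch for reading the product, assuming it is delivered as a plain whitespace-separated text file with one row per observation (the filename below is a placeholder):

```python
import numpy as np

# Columns: time, flux [mJy], uncertainty [mJy], hits-map flag,
# visual-inspection flag, calibration flag (0 = rejected / uncalibrated).
time, flux, err, hits_ok, visual_ok, calibrated = np.loadtxt(
    "lightcurve_J0001+0001_f150.txt", unpack=True)   # placeholder filename
```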
Regarding the data without Planck-based calibration, here's the breakdown:
- Two observations in 2015 for dozens of light curves.
- All points in 2014 for 25 sources.
- All points for f150 ar4 light curves.
The f150 ar4 light curves aren't included in our cosmology analysis because of unreliable detectors and uncertainty about the bandpass, which make calibration with Planck difficult.
For calibration, each frequency and array within one season has a constant calibration factor. The final corrections are typically within a few percent, although earlier seasons can show larger variations, up to around 10%.
I recommend removing the data points with a zero in the 4th or 5th column (a minimal filtering sketch follows below). Whether to keep the points without a calibration correction is up to you: given that f150 ar4 comprises almost one-third of the f150 data, it may be worth keeping them if you are comfortable with a few percent of drift.
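Following that recommendation, a minimal filtering sketch (same assumptions and placeholder filename as the loading example above); uncomment the calibration cut only if you prefer to drop the uncalibrated points:

```python
import numpy as np

time, flux, err, hits_ok, visual_ok, calibrated = np.loadtxt(
    "lightcurve_J0001+0001_f150.txt", unpack=True)   # placeholder filename

# Drop points rejected by the hits-map cut (column 4) or by visual
# inspection (column 5); 0 means rejected.
good = (hits_ok != 0) & (visual_ok != 0)

# Optional: also drop points without Planck-based calibration (column 6).
# Note this removes, e.g., all f150 ar4 points.
# good &= (calibrated != 0)

time, flux, err = time[good], flux[good], err[good]
```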