
Data Preparation and Launching FIREHOSE

Assuming you have properly installed FIREHOSE and its dependent packages, you are ready to start.

First, you should prepare your data by copying all raw files into a directory named "Raw" (if you prefer you can change this later, but firehose uses this as a default).  Create a second directory named "redux" in the same parent directory as Raw and "cd" into the redux directory.  From there, start an idl session and type "firehose" to bring up the GUI.

unix> mkdir Raw

unix> cp <path>/fire*.fits Raw

unix> mkdir redux

unix> cd redux

unix> idl

IDL> firehose

Initial Setup

You are now presented with the firehose GUI.  It is organized into a series of tabs which guide you through the linear process of reduction steps.

By default, you start in the "setup" tab, which requires only two simple operations.  First, you can choose the directory where firehose looks for the raw data, if you did not use the default value suggested above.  Firehose will never overwrite files in this space, so your raw data will remain safe.

Next, you can enter your Magellan-formatted observing catalog into the second box.  The software uses this to match exposures of the same object when organizing the reduction process.  It is more reliable than using header information alone, since it groups objects by where the telescope was pointing and matches that to your object list, thereby avoiding operator errors at the telescope.  It also checks against the FIRE list of known telluric standards to determine which exposures are science targets and which are calibrators.  If you choose not to enter a catalog, firehose will attempt to organize based on header information.  All settings can be overridden manually later in the process.
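The pointing-based matching described above amounts to a nearest-neighbor search within an angular tolerance.  A minimal Python sketch of the idea follows; the function names, the 2-arcminute tolerance, and the catalog format are illustrative assumptions, not FIREHOSE internals:

```python
import numpy as np

def ang_sep_deg(ra1, dec1, ra2, dec2):
    # Haversine angular separation between two sky positions, in degrees.
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    a = (np.sin((dec2 - dec1) / 2) ** 2
         + np.cos(dec1) * np.cos(dec2) * np.sin((ra2 - ra1) / 2) ** 2)
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

def match_to_catalog(ra, dec, catalog, tol_deg=2.0 / 60):
    # Return the name of the nearest catalog entry within the tolerance,
    # or None if no entry matches.  catalog: list of (name, ra_deg, dec_deg).
    best, best_sep = None, tol_deg
    for name, cra, cdec in catalog:
        sep = ang_sep_deg(ra, dec, cra, cdec)
        if sep < best_sep:
            best, best_sep = name, sep
    return best
```

An unmatched pointing (no catalog entry within tolerance) would then fall back to the header's OBJECT field, as the pipeline does.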

Locating the Order Boundaries

The first reduction step consists of indexing the echelle order boundaries.  This is done using a flat field exposure.  Using the browser, select a single internal quartz lamp spectrum from your run and then click "Trace Orders."  The software will run for 15-20 seconds and then present an atv window showing the flat field image with order boundaries marked in either black or another color, depending on your display settings. 

Browse around the image for a bit to make sure that the fit to the edges makes sense everywhere.  About 1 out of 10 times, the fit is bad in the upper left corner of the array.  If you find this condition, try another exposure - this usually works.  Or, if you select multiple exposures the software will average them together and fit on the combined image.

You will know that the software worked correctly if: the slit boundaries are located correctly on the image, and a file is produced in the subdirectory redux/Flat with the name "Orders_0123.fits" where 0123 is the frame number of whatever file you used to trace the slits.

Generating Flat Fields

The next step is to generate the pixel flat field and slit illumination images.  The pixel flat corrects gain variations on the detector, and the illumination function corrects for the fact that the internal quartz lamp has a slight gradient in intensity across the slit (~5%).

Firehose needs 4 pieces of information to perform this operation: (1) The flat field file itself (internal quartz), (2) A sky flat for illumination correction, (3) The file indicating slit positions and order numbers generated in the trace step, and (4) a data frame which helps firehose calculate the "tilt" of the slit on the detector, to facilitate its fitting out the wavelength dependence of the lamp and/or twilight sky.

Clicking in the "Flat" tab will allow you to enter these exposures in for the combine process:

In the "Flat Field Files" area, click the browse button to select the files you'd like to combine for the flat.  Typically you should not combine flats taken on either side of a move of the slit wheel.  Instead, generate a new composite flat for each slit setting, and the software will apply the appropriate one to your data.  You may select multiple files in the dialog box; these will be combined using an avsigclip algorithm when generating the composite.
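The avsigclip combination rejects deviant pixels (cosmic rays, transient defects) before averaging the stack.  FIREHOSE's actual implementation is in IDL; the following Python stand-in illustrates the idea, using a MAD-based sigma estimate as a simplifying choice:

```python
import numpy as np

def avsigclip(stack, nsig=3.0):
    # stack: (Nframes, ...) array of frames to combine.
    # Reject pixels deviating from the per-pixel median by more than
    # nsig robust sigmas (1.4826 * MAD), then average the survivors.
    stack = np.asarray(stack, dtype=float)
    med = np.median(stack, axis=0)
    sig = np.maximum(1.4826 * np.median(np.abs(stack - med), axis=0), 1e-10)
    mask = np.abs(stack - med) <= nsig * sig
    return np.sum(stack * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)
```

With four quartz flats and one cosmic-ray hit, the hit pixel is excluded and the remaining three frames are averaged at that position.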

In the Illum flat files area, enter in the names of the sky flat files you'd like to use for illumination correction.

For the Slit Tilt file, enter in the name of a long science exposure (to use OH lines to fit the slit tilt) or an Arc lamp file.  We've found that the OH lines work slightly better if you have a suitably long frame (>300 sec).

Finally, enter in the name of the order mask file (i.e. Orders_*.fits) generated in the previous step. 

Click "Make flat field" and the software will run.  This process takes some time, roughly 10 minutes per flat field generated. At the end of the process, the illumination flat and pixel flat will be displayed for inspection.

You will know that the software worked correctly if: the pixel flat looks like a flat image centered around 1.00 with fairly little structure, and the illumination image shows a smooth gradient with an amplitude of 1.0 +/- ~0.05.  Also, two files should be created in the redux/Flat directory: one named Pixflat_0123to0223.fits, where the numbers again represent the range of files used, and another named Illumflat_0.60_0122.fits, where 0.60 represents the slit used and 0122 is the frame number of the sky flat.
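If you want a quick numerical check to go with the visual inspection, the "centered around 1.00" criterion can be tested directly.  This is a hedged sketch, not part of FIREHOSE; the 0.05 tolerance is an illustrative threshold:

```python
import numpy as np

def check_pixflat(pixflat, tol=0.05):
    # A good pixel flat should have a median very close to 1.00.
    # Returns (passed, median) so you can see how far off it is.
    med = float(np.median(pixflat))
    return abs(med - 1.0) < tol, med
```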

Generating the FIRE Structure

Firehose stores all information about which flat, arc, and telluric files are paired with which science frames in an IDL structure, for portability. The next step of the reduction is to generate this structure and then edit it to ensure it reflects the pairings you want (e.g. eliminating saturated tellurics).  Generation is largely automated, if fairly complicated; it performs the following steps without user intervention:

  1. Read in the headers for all objects, and record telescope RA, DEC
  2. Check telescope pointing against the input Magellan object catalog, and assign each science target a unique object ID and common object name
  3. Cross-check pointing against a list of known A0V telluric calibrators; flag standards accordingly and record B and V mags if available.
  4. Assign ThAr arc frames to science and telluric exposures by checking for matches in telescope pointing, and asserting that lamps are on and mirrors in.
  5. Match telluric frames to science targets by searching for the closest match to each science target in UT obs, airmass, and sky angle
  6. Match flat field frames to each object or standard
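Step 5 above is essentially a weighted nearest-neighbor search over the telluric frames.  A schematic Python sketch follows; the weights, the dictionary fields, and the precomputed sky separation are illustrative assumptions, since the pipeline's actual criteria live in its IDL code:

```python
import datetime as dt

def match_telluric(sci, tellurics, w_time=1.0, w_airmass=10.0, w_angle=0.1):
    # Score each telluric frame by weighted differences in UT obs time
    # (hours), airmass, and sky angle (degrees); return the best match.
    # sci / tellurics carry "ut" (datetime), "airmass", and each telluric
    # a precomputed "sep_deg" separation from the science pointing.
    def score(tel):
        dt_hr = abs((tel["ut"] - sci["ut"]).total_seconds()) / 3600.0
        return (w_time * dt_hr
                + w_airmass * abs(tel["airmass"] - sci["airmass"])
                + w_angle * tel["sep_deg"])
    return min(tellurics, key=score)
```

A standard observed minutes later at nearly the same airmass and sky position beats one taken hours away, which is the behavior you should verify when editing the structure by hand.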

These operations are all performed when the "Generate Structure" button is selected.  You may select the "verbose" or "Loud" (= very verbose) buttons to produce more output, but these are mostly only useful for debugging purposes.

Once you have created the structure, it is advisable to inspect and (if appropriate) edit it by hand to ensure there are no errors.  To do this, click "Edit Structure."  You will be presented with a window that looks like the following:

The first column is an index number in the FIRE structure, used for bookkeeping.  The second and third columns show the fits file and object name, as determined by matching with the catalog or, failing a match, from the OBJECT field in the header.  The next column shows how firehose has classified the frame.  All files classified as SCIENCE frames are tagged with an object ID and grouped with telluric and arc calibrators, shown in the rightmost fields.  Telluric frames are themselves paired with an arc image.

You should check to make sure that the intended tellurics and arcs are matched with the right science frames, and that classifications are correct.  By highlighting a line in the list and clicking a button at the top, you can change the relevant fields for an element of the FIRE structure.  This is useful for example if you wish to delete a file from the reduction list (change exptype to TRASH or object ID to -1).  Or, if name matching failed you can force objects to be grouped together by matching their object IDs and names.  Or, you can split a single object pointing into individual subsets by assigning extra object IDs (e.g. by rotator angle, which is not considered by the automated object matching routines).

When you are happy with the structure, click "Done."  Your changes will be committed, and the entire structure will be written to file for future use.  You can save or reload the structure from disk at any time using the save and load buttons. 

We are working on a utility to dump the contents of the structure to a text file that may be edited and read back in as a scriptable version of the pipeline; this feature will be available soon.

Object Extraction

The heart of the software is contained in the Extraction step, where essentially all of the heavy lifting occurs.  It is a slow process, taking roughly 15 minutes per science frame on a fairly modern MacBook Pro.  During extraction, the following steps are performed:

  1. A 2D wavelength map is determined by matching each order to a template spectrum of the OH sky lines and/or ThAr arc lamp spectra.  This produces an output file containing the log of the wavelength for each pixel on the array (and in an order).  The wavelengths are corrected for heliocentric velocity shifts, and are in vacuum units by default (since the instrument is in vacuum).
  2. The software calculates tilt of the slit in each order for accurate sky subtraction.
  3. A first pass 2D sky subtraction is performed for the purposes of object finding.
  4. Each order is collapsed in the wavelength direction to locate the object trace.  The trace is then fit over the full order.
  5. An iterative procedure is used to perform a simultaneous fit of both the object profile and background sky residuals.
  6. An optimally-weighted extraction is performed using the profile determined in (5).  
  7. Spectra are stored on an order-by-order basis in units of counts vs. wavelength.

For high signal-to-noise ratio data, a non-parametric b-spline is fit to the object profile; at lower SNR a Gaussian model is used. 
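The optimally weighted extraction of step (6) follows the standard profile-weighting scheme: each spatial pixel is weighted by the fitted profile divided by its variance, which maximizes the signal-to-noise of the extracted flux.  A compact Python sketch of that weighting (a Horne-style illustration, not FIREHOSE's IDL code):

```python
import numpy as np

def optimal_extract(img, var, profile):
    # img, var: 2D arrays (nspec, nspat) of sky-subtracted counts and
    # pixel variances.  profile: spatial profile P, normalized so each
    # wavelength row sums to 1.  Returns optimally weighted flux and
    # variance per wavelength row: f = sum(P D / V) / sum(P^2 / V).
    v = np.maximum(var, 1e-10)
    num = np.sum(profile * img / v, axis=1)
    den = np.maximum(np.sum(profile ** 2 / v, axis=1), 1e-10)
    return num / den, 1.0 / den
```

For a noiseless row with profile [0.25, 0.5, 0.25] and total flux 8, the extraction recovers exactly 8 counts, as it should.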

During extraction, you will see plots of the object profile appear to indicate data quality.  The plots show a cross section of the object profile with errors, and the resulting fit.  As the iterative model fit and local sky subtraction improve, you should see the residuals in this plot diminish with each iteration on a given order.

Because this process is slow, it can take a day or more to reduce a full run of data.  In many cases you are interested in a particular object, and you can choose a subset of the full target list to extract to speed the process.  To do this, simply highlight the names of the targets you want to extract in the list at the right of the panel.  If you have already reduced some of your frames, firehose will not re-reduce them unless the "Clobber" button is pressed.

The "Interactive" button gives you an option to check intermediate steps in the process. It is not recommended for novices, as there are many stops in the code that make a pipeline reduction tedious.

Telluric Correction / Flux Calibration


Combining Orders and Exposures to 1D

Once you have completed telluric calibration (not before!), you are ready to combine the frames into a 1D spectrum.  On the "Combine" tab, select the object(s) for which you intend to generate 1D spectra, and click "Combine to 1D."  This procedure will average together the individual orders from all exposures on a given object, weighting optimally by signal to noise ratio.  Then the orders of the combined spectrum are merged, again with inverse variance weighting, onto a single grid.  The final output spectra are stored in the redux/FSpec subdirectory by object name.  The files Objname_F.fits and Objname_E.fits are the flux and 1 sigma error spectra, respectively. 
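For spectra already resampled onto a common wavelength grid, the inverse-variance weighting used in both the exposure averaging and the order merging reduces to the standard weighted mean below.  This is a schematic Python sketch of the statistic, not the pipeline's IDL implementation:

```python
import numpy as np

def ivar_combine(fluxes, errs):
    # fluxes, errs: (Nspec, Npix) arrays of flux and 1-sigma error on a
    # common grid.  Weight each spectrum by 1/sigma^2 and propagate the
    # combined 1-sigma error.
    fluxes = np.asarray(fluxes, dtype=float)
    ivar = 1.0 / np.asarray(errs, dtype=float) ** 2
    wsum = ivar.sum(axis=0)
    return (fluxes * ivar).sum(axis=0) / wsum, 1.0 / np.sqrt(wsum)
```

Two equal-noise exposures average together, and the combined error drops by sqrt(2), which is the behavior to expect from the _F and _E output files.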

The FITS headers are modified to include the standard CRVAL1 and CDELT1/CD1_1 keywords for the wavelength solution at this stage, so any program can be used to inspect the output.  Within IDL, you may wish to use x_specplot, which is a convenient spectral GUI distributed with xidl.
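With those standard keywords in place, rebuilding the wavelength array is straightforward in any language.  A Python sketch (the helper name is ours; note that if a solution is stored in log wavelength, as the 2D arc images discussed below are, you would exponentiate the result, so check the header before assuming a linear scale):

```python
import numpy as np

def wavelength_grid(header, npix):
    # Reconstruct the wavelength array from standard FITS WCS keywords.
    # Either CDELT1 or CD1_1 may carry the dispersion; CRPIX1 defaults
    # to 1 (FITS pixels are 1-indexed) when absent.
    crval = header["CRVAL1"]
    cdelt = header.get("CDELT1", header.get("CD1_1"))
    crpix = header.get("CRPIX1", 1.0)
    return crval + cdelt * (np.arange(npix) + 1 - crpix)
```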

Accessing 2D sky subtracted/Wavelength calibrated images

Firehose is optimized for point source extraction.  However, it is occasionally useful to look at the 2D sky subtracted frame.  This is handy when the 1D spectrum has unusual residuals, or if you would like to write your own customized routines for extracting extended or multiple sources (the pipeline *can* handle multiple point sources, but requires some hand holding).

If you would like to access the 2D data, there is an IDL function named "fire_show2D" which will return a 2D array with the sky subtracted data.  This can only be done AFTER normal extraction is attempted, because local sky subtraction is performed at this time.  For example, to look at the sky subtracted image of fire_0123.fits, you can call as follows from the redux directory:

IDL> skysub = fire_show2d('0123')

IDL> xatv, skysub, /block

If you wish to also look at the wavelength map, you can read this in directly from the arc file.  Assuming that file fire_0123 was a long exposure with OH lines for calibration, we read in the arc solution:

IDL> logwv = xmrdfits("Arcs/ArcImg0123.fits.gz")

IDL> wv = 10^logwv

IDL> xatv, skysub, wvimg=wv

