Phil Cigan

University of Wisconsin - Madison

REU program-Summer 2004
University of Wisconsin-Madison
UW-Madison Astronomy Department
Madison, WI 53706

Advisor: Jay Gallagher

Email me at
pjcigan --a-t--



This page is under construction!


Reducing Multi-fiber Spectra in IRAF with DOHYDRA: A Guide by Phil Cigan

Specifically Tailored for DensePak Data

This guide assumes that you already have the latest necessary version of IRAF installed and configured on your machine. It is specifically tailored to describe, step by step, how to reduce multi-fiber data taken with DensePak (to the point of getting extracted spectra that have been sky-subtracted), but it should be easy to modify and apply to data from other instruments as well, such as SparsePak, Hydra, etc. As far as procedures are concerned, this means that I will not be covering the step of subtracting the background sky continuum via dedicated "sky fibers". Please keep in mind that this is just a rough guide (by no means definitive and all-encompassing!), and that even if you are using DensePak data, you will probably have to make slight alterations depending on the nature of the data itself and what you want to do with it. That being said, let's get rollin'!

More to come... hopefully soon!

There are a few different phases to reducing DensePak data. Here is a synopsis: the scans should first be run through CCDPROC to trim them, combine the appropriate scans, and perform zero-scan corrections. Next they will be flat-fielded and wavelength calibrated with DOHYDRA. Finally, they will have the background sky subtracted with IMARITH. Three useful resources are the Beginner's Guide to IRAF and the CCDPROC and DOHYDRA user guides.

Round One: CCDPROC

1. Place all the images you will be processing in the same directory. Login to IRAF (cl) and cd to that directory. From the login, go to noao -> imred -> ccdred. This is an IRAF package for processing CCD data.

2. Type

     ccdlist  [all scans]

Image: ccdlist

This prints out information such as the image name, size, type, etc. Check to make sure that the image type listed on the right matches what the image is supposed to be (zero, flat, target, or arc scan). Input all scans by using *.fits, or input select files only with an @ file.

3. Run the routine imhead, which prints out the image header (l+ gives the long version).

   imhead  [scan name]  l+
The scan used can be any target object scan. Copy down the values for BIASSEC and TRIMSEC. Other useful things to note are the values for the gain and noise, which can be listed as something like GAIN_12 and NOISE_12.

4. Set the instrument by typing

   setinstrument

In the following parameters, make sure that the pixel type is set to 'real real', then exit (with :q).

Note: if a setup file is not available for your instrument, you must either define one or leave the field blank.

You will then be brought to the ccdproc params. For the first run through, you will only be trimming the scans to get rid of the bias and overscan areas on each image, so the only things in the second block of parameters that should be set to 'yes' are overscan and trim. Set all the rest in this group (such as zerocor, darkcor, etc.) to 'no'. Set the values for biassec and trimsec in one of two ways: either manually enter the values you found with imhead (including the brackets and commas, etc.), or (the easy way) just input the header keywords in lower case with a ! before the word. For example, for the BIASSEC listed in the header you would input !biassec
Set other parameters such as function, low_rej, etc. as needed. Exit with :q

Image: ccdproc1

5. Cosmic ray subtraction can be an issue. If you have many scans, the crreject or avsigclip parameter of imcombine will probably be sufficient, in which case you can combine the appropriate scans before reducing them. If you have few scans, the rejection algorithms from combining images probably won't cut it, and you'll need a program such as L.A.Cosmic. I usually use L.A.Cosmic after I let ccdproc do the trimming and overscan correction so that L.A.Cosmic will take less time to process. And don't run L.A.Cosmic on your arc scans.

If you want to rely on the normal IRAF methods of cosmic ray removal, you will use the parameter 'crreject' in imcombine, flatcombine, zerocombine, etc. Wait to combine your target scans until after they have been fully processed with ccdproc. Also, only combine your target scans if the objects lay in the same place on each scan, and if they are all of the same orientation, etc.

6. Run ccdproc by entering

   ccdproc  [all scans]
This will make all the scans uniform in size.

Note: It would technically be correct to do all the trimming and bias subtracting (= zero correction) of the target scans in one step, but the way I am listing it is to trim all the scans first (including the zero scans) and then do the zero correction on the target and flat scans. In short, I recommend separating these steps because you don't want to zero-correct the zero scans.

You will be prompted to fit a curve to the data from the overscan region. Try to keep the order of the curve as low as possible while still maintaining a good fit. Here are some screenshots of the process:

Overscan Interactive Fit 1 Overscan Interactive Fit 2 Overscan Interactive Fit 3

First you will have the data plotted up as Overscan vs. Pixel (Picture 1). The default order is whatever you had last. In this example, it starts off as order 1. To change order, use the ':order #' command in the irafterm window, with '#' being the desired order. To fit this curve with the new order, hit 'f' (Picture 3. In this example, order=2). Then hit 'q' when you are satisfied, and it will move on to the next scan. You have the option to keep the same fit for each successive scan - simply enter 'no' (which only acts on the next scan) or 'NO' (which will say 'no' to all further scans) at the prompt to interactively fit the next scan.
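For intuition about what this interactive fit is doing, here is a small pure-Python sketch (not IRAF's actual fitting code) of a least-squares straight-line fit, i.e. an order-1 fit like the one in Picture 1: ccdproc fits a low-order function to the overscan level as a function of row and subtracts that model from the data. The row and overscan values below are made up for the example.

```python
# Sketch of an order-1 (linear) least-squares fit to overscan levels,
# illustrating the kind of model ccdproc fits interactively.

def fit_line(x, y):
    """Least-squares straight line y = a + b*x through the points."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Fake overscan levels drifting slowly with row number (made up):
rows = [0, 1, 2, 3, 4]
overscan = [100.0, 100.5, 101.0, 101.5, 102.0]
a, b = fit_line(rows, overscan)
print(a, b)  # → 100.0 0.5
```

If the residuals of a fit like this show structure, that is the cue to raise the order (`:order #` in the irafterm window) and refit with 'f'.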

7. Combine the zero exposure scans with zerocombine (zerocombine and flatcombine are just more specific versions of imcombine):

   zerocombine  [zero*.fits]
You can edit the parameters of zerocombine and flatcombine to change the output file name, cosmic ray rejection, etc.

8. Run ccdproc again on the target and flat field scans, this time changing the parameters to perform the zero correction, and list the combined zero scan at the 'zero' field.

Image: ccdproc2

9. Combine the flat field scans for later use in dohydra with flatcombine by entering

   flatcombine  [dflat*.fits]
For those of you who are used to doing your flat-fielding in ccdproc, don't give in to the temptation. Flat-fielding will be done inside dohydra.

10. Combine your target scans now, if you wish, as long as the data are at the same place on each scan (i.e. the pointing was not dithered or offset between scans) and the scans all have the same orientation. You can always combine them after you run them through dohydra. If you decide to use L.A.Cosmic to remove cosmic rays, feel free to refer to the short guide to the program I have made.

Round Two: DOHYDRA

1. From the cl login, go to noao -> imred -> hydra.

2. Prep work. (Note: this step can be done inside dohydra, but I prefer to do it beforehand) Find out which apertures are being used and which are not. Two very useful tasks for this are apfind and apedit.
Note: use a flat field scan for this step, because fibers in flat scans are all roughly equally illuminated.
Apfind will try to find apertures automatically. Simply enter

   apfind [scan]
then tell it how many fibers you want it to find at the prompt, and you will be able to see where each fiber is listed (using a flat scan might be helpful). It is possible to rearrange and delete fibers with this program - see the help file for a full list of commands (or hit '?' in the irafterm window). One simple command to get you started is the 'w' command. This will allow you to zoom in and move left and right on the scan, among other things. Just hit 'w' in the irafterm window, then hit the key corresponding to your desired action. For a list of the available options, hit 'w' then '?'.

Image: apfind Image: apedit
apfind (left) and apedit (right)

As for actually defining, ordering, and deleting apertures, here is a brief description of the process:

If you run apedit, you will be asked to interactively fit a curve to each aperture of this scan (picture 1 below). Again, try to keep the order low while still maintaining a good fit. You will want to delete any points that are far off the curve. Picture 2 below shows an aperture that both needs some points deleted and also needs the polynomial order increased.

Image 1: apedit - interactive function fitting for each aperture
Image 2: apedit - an aperture that needs a higher-order function and has points needing to be deleted

Check the headers of files in the database directory (which should be inside the directory in which you're currently working) for the aperture (i.e. fiber) numbers and their corresponding beam numbers for the scans. Check to make sure that the beam numbers are correct for the listed apertures.

Note about apertures and fibers: The difference between apertures and fibers is somewhat subtle. Fibers correspond to the data that are being input to dohydra, while apertures are the data that dohydra can see. So, for example, if you have a dead fiber, there will be little or no data from it. You will still count it as a fiber, but dohydra counts apertures in order without skipping any numbers. This means in the beginning of the numbering, your aperture and fiber numbers will be fairly close, if not the same. But at the higher end you will probably have skipped some fibers, so the aperture numbers could be very different. This is also why you need an aperture ID table (below). Another way to describe the difference between apertures and fibers is that fibers are the data that DensePak records while apertures are the recognizable data from fibers that dohydra recognizes.

3. DensePak and Hydra are somewhat similar, but they are ultimately different instruments. The beam numbers in Hydra include dedicated sky and flat fibers, but DensePak is just arranged in a block of fibers, so none can be designated as dedicated sky, arc, or flat fibers. Instead, separate scans are taken with the whole array pointed at:
a. A 'uniformly' illuminated surface (dflat scans)
b. A lamp that has known spectral lines for wavelength calibration (arc scans)
c. Separate sky/background fields for later subtraction (sky scans)
Thus DensePak really only uses two settings for any given fiber: 1=object (used) or -1=none (unused).

4. Make an aperture identification table that lists each aperture and its corresponding beam number. The format is as follows:

   aper1  beamvalue1 
   aper2  beamvalue2
   aper3  beamvalue3
   aper4  beamvalue4
There should be one pair of aper# and beam# per row, with a space separating them. Here is an example to view or download: apid.txt
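To make the format concrete, here is a hypothetical Python sketch that generates a table like apid.txt. The fiber count and the set of dead fibers are made up for the example, and the beam values use the 1 = object (used), -1 = none (unused) convention described above.

```python
# Hypothetical generator for an aperture ID table in the apid.txt format:
# one "aperture beam" pair per row, separated by whitespace. The fiber
# count and dead-fiber set below are invented for the example.

n_fibers = 8                  # assumed array size for this example
dead_fibers = {4, 7}          # assumed dead (unused) fibers

lines = []
for aper in range(1, n_fibers + 1):
    beam = -1 if aper in dead_fibers else 1
    lines.append(f"{aper}  {beam}")

print("\n".join(lines))
# Writing it out would then be:
# with open("apid.txt", "w") as f:
#     f.write("\n".join(lines) + "\n")
```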

5. Make an arc assignment table (and name it something easy, like arcassign1.txt, etc.) that lists each target scan to be used, and the corresponding arc scan that will be used for each one (it will probably be the same for all scans if they were taken around the same time). Check the available arc scans (SAOimage ds9 is good for this) to find the best one(s). The format of the table is similar to the apid table:

   target1  arc1
   target2  arc2
   target3  arc3
Here is a viewable and downloadable example of the arc assignment table: arcassign.txt
If you are using multiple arc scans, you should also make a file that lists all the arc scans to be used, one per line, like so:

   arc1
   arc2
   arc3
And again, name it something easy, like arcs.txt...

Note about arc assignment tables: I have never actually tried to assign more than one arc scan to one target scan. I imagine that it would try to have you do two wavecals for the same object. My experience has been that dohydra really wants you to list one arc scan to be used for each target scan. If you have an arc scan before and after the target scan, I would suggest picking the one that is better or closer in time and just using that one. It may also help to combine (temporally) corresponding individual arcs to make the line identification process easier, assuming that the same lamp was used and the lines are in the same place on the scans, etc. But there may be some issues with combining arc spectra that I'm forgetting, so I (in my relatively inexperienced opinion) would just stick to listing the best single/combined arc scan for each target scan.

6. Edit the parameters of dohydra (with epar) and change the values in the following fields (see the help file for a more complete description of the fields):

apref (aperture reference spectrum) - again, it's helpful to use a flat field scan that has been identified and saved with apfind
flat (the flat field spectrum) - use the combined flat scan
arcs1 (list of arc spectra) - this is the table of arc scans to be used (from step 5, can be an @ file)
arctabl (arc assignment table, from step 5, assumed to be a text file)
gain (note: readnoise and gain can be listed as their name in the file header, this time without the '!' prefix)

Also make sure that objbeams is set to a value of 1. For now just worry about flat fielding - set fitflat and dispcor to 'yes', with the rest of the corrections (scatter through skyedit) set to 'no'. You can set clean to 'yes' if you have already made a bad pixel map (you would only have one if you used a program like L.A.Cosmic; if you used the cosmic ray rejection in imcombine, you won't have one). Even then, you shouldn't really need it, as the final L.A.Cosmic image should be VERY clean.

Image: dohydra1

7. Edit the parameters of params to change many settings for the reduction by entering

   epar params 
These are all important, so check the help file (i.e. 'help dohydra' at the command line) to find out more about what they do. If the data come out of dohydra looking funny, it's probably because of a wrong setting in these parameters.

Image: dohydra1

Those of you who are reducing DensePak data and are confused by my statement that there are many parameters that can be changed, depending on your setup, may be asking "Well, shouldn't there only be one particularly pertinent setup in the params?" The short answer is that it really depends on what you want to do with/to your data. There are important things to look at for a few different aspects of the reduction. I'll try to go into a little more detail here:

Making sure the apertures are lined up correctly, etc. - all the fields through Trace Parameters are important, plus the Aperture Extraction Parameters. You will have to change these depending on how far the apertures are from each other, etc.

Flat Fielding - the Flat Field Function Fitting Parameters. You can select a different function to fit if you want. Also, the order is important. Really ratty data might require a higher order. I've found that I can usually get by with an order of ~5 when the data look good.

Wavelength Calibration - the next block of parameters is the most important. You will probably have to mess around with some of these a little, depending on what lamp was used, what type of function you want to fit, etc. My data used a Copper-Argon lamp, so I used the 'cuar' line list.

As a whole, setting the parameters to what I have listed would probably be a good starting point if you're starting from scratch and want a first guess.

8. Run dohydra and follow the prompts:

   dohydra [target scans] 
If you haven't already defined your apertures and fit curves to the flat field scan as in step 2, you will be prompted to do so at this point. Next, you will be prompted to define a wavelength calibration scale using your arc comparison spectra scans (which may be combined if you have multiple to increase signal). To do this, you need some kind of diagram that lists the wavelengths of each line emission. For the Cu-Ar calibration lamp, there is an absolutely invaluable document put out by Kitt Peak National Observatory under the NOAO called A CCD Atlas of Comparison Spectra: Cu-Ar Hollow Cathode 4000 Angstroms - 9800 Angstroms by Daryl Willmarth, Taft Armandroff, Sam Barden, and Jim De Veny.

Image: the window where you define your spectral lines

The irafterm window will display a spectrum from the arc scan, and you will have to go down the whole thing, labelling as many lines as possible to get the best calibration. A little tedious the first time, but worth it for really well-calibrated data. This irafterm also uses the 'w' (window) commands, and you will use them often in this step. Again, to get a list of commands, simply hit '?' and the commands will be spit out in the xgterm window where you are running dohydra. For the window commands, hit 'w' to enter 'window mode' and then hit '?'. Here is the list of the general commands:


               ?  Help                   k  Next line              u  Enter coordinate
               a  Affect all features    l  Match list (refit)     v  Weight
               b  Auto identification    m  Mark feature           w  Window graph
               c  Center feature(s)      n  Next feature           x  Find shift
               d  Delete feature(s)      o  Go to line             y  Find peaks
               e  Add lines (no refit)   p  Pan graph              z  Zoom graph
               f  Fit positions          q  Quit                   .  Nearest feature
               g  Fit zero point shift   r  Redraw graph           +  Next feature
               i  Initialize             s  Shift feature          -  Previous feature
               j  Preceding line         t  Reset position         I  Interrupt


               :add [image [ap]]         :fwidth [value]           :read [image [ap]]
               :coordlist [file]         :image [image]            :show [file]
               :cradius [value]          :labels [type]            :threshold [value]
               :database [file]          :match [value]            :write [image [ap]]
               :features [file]          :maxfeatures [value]      :zwidth [value]
               :ftype [type]             :minsep [value]           

               3. IDENTIFY CURSOR KEYS

               ?  Clear the screen and print menu of options
               a  Apply next (c)enter or (d)elete operation to (a)ll features
               b  Automatic line identifications: queries for approx. coordinate and dispersion
               c  (C)enter the feature nearest the cursor
               d  (D)elete the  feature nearest the cursor
               e  Add features from coordinate list with no automatic refit
               f  (F)it a function of pixel coordinate to the user coordinates
               g  Fit a zero point shift to the user coordinates
               i  (I)nitialize (delete features and coordinate fit)
               j  Go to the preceding image line or column in a 2D or multispec image
               k  Go to the next image line or column in a 2D or multispec image
               l  Add features from coordinate (l)ist with  automatic refit
               m  (M)ark a new feature near the cursor and enter coordinate and label
               n  Move the cursor or zoom to the (n)ext feature (same as +)
               o  Go to the specified image line or column in a 2D or multispec image
               p  (P)an to user defined window after (z)ooming on a feature
               q  (Q)uit and continue with next image (also carriage return)
               r  (R)edraw the graph
               s  (S)hift the current feature to the position of the cursor
               t  Reset the position of a feature without centering
               u  Enter a new (u)ser coordinate and label for the current feature
               v  Modify weight of line in fitting
               w  (W)indow the graph.  Use '?' to window prompt for more help.
               x  Find zero point shift by matching lines with peaks
               y  Automatically find "maxfeatures" strongest peaks and identify them
               z  (Z)oom on the feature nearest the cursor
               .  Move the cursor or zoom to the feature nearest the cursor
               +  Move the cursor or zoom to the next feature
               -  Move the cursor or zoom to the previous feature
               I  Interrupt task and exit immediately.  Database information is not saved.
And here is the list of commands for 'window mode':
                 SET GRAPH WINDOW

               a  Autoscale x and y axes
               b  Set bottom edge of window
               c  Center window at cursor position
               d  Shift window down
               e  Expand window (mark lower left and upper right of new window)
               f  Flip x axis
               g  Flip y axis
               j  Set left edge of window
               k  Set right edge of window
               l  Shift window left
               m  Autoscale x axis
               n  Autoscale y axis
               p  Pan x and y axes about cursor
               r  Shift window right
               t  Set top edge of window
               u  Shift window up
               x  Zoom x axis about cursor
               y  Zoom y axis about cursor
               z  Zoom x and y axes about cursor
At first, your data may be flipped in the x-axis compared to what your comparison spectrum atlas shows. To flip the x axis, hit 'w' then 'f'. Then zoom in to an area of the spectrum that has easily identifiable lines (see image below). Note that the x-axis units are in pixels; there is not yet a wavelength scale - that's what you're about to define.

To mark a new line, simply place the cursor on the line and hit 'm' for mark. A prompt will come up asking you for the wavelength of that line (in Angstroms). Input the wavelength (you can truncate it to an integer) and it will search the line list that you defined before in params. When it recognizes the line, a yellow flag will be placed above the line (see image below). Continue to do this for a few lines, then have the program fit the marked points by typing 'f'. This will bring you to the fit screen (see image below), which shows the data points for the lines you have defined, fit by a curve. The more data points you have (i.e. the more lines defined), the better the fit will be, which means that the wavelength scale will be better defined.

Notice now that there is an initial wavelength scale on the x-axis, and that when you mark new lines the program will anticipate the wavelength of the line based on that scale, in which case you can just hit enter if it's correct without having to input the numbers again. To get back to the normal window, type 'q'. You will want to re-fit with 'f' every few lines so that this scale stays accurate. When you are finished (image 4 below), fit one last time before moving on by typing 'q' in the irafterm window.

Image 1 (Upper left): wavelength calibration, zooming in
Image 2 (Upper right): wavelength calibration, marking lines
Image 3 (Lower left): wavelength calibration, fitting the points
Image 4 (Lower right): wavelength calibration, all finished

Dohydra will now extract the spectra, assign arc spectra, and dispersion-correct the object scans before giving you the option to splot (i.e. view with the task splot) the newly reduced data.

9. Your newly flat-fielded and wavelength-calibrated scans will be output as [original name].ms.fits. Re-run dohydra as many times as needed for different target scans, wavelength calibrations, etc.

Round Three: Sky Subtraction

This is very simple in theory. And we have an advantage over regular Hydra data: each fiber has a different throughput, so subtracting a separate scan made solely of background sky will work better than applying one fiber's sky spectrum to the others, as is done for normal Hydra data. You will add the sky scans together, scale down the intensity to match the target scans, then simply subtract the sky from the targets.

Note: while Hydra's fibers can be moved to acquire sky spectra during observations, DensePak's "sky fibers" are fixed, so they may actually be pointed at something other than blank sky, particularly when observing an object of large angular size (see the DensePak User Manual for the positional setup of the instrument). In other words, even if the observer relied on the 4 dedicated sky fibers for sky subtraction, using separate sky scans in the reduction (if there are any) is preferable, for the reasons listed above.

If you're still confused about this whole sky-subtracting-outside-of-hydra business, here is another explanation that is (hopefully) more clear:

With the Hydra setup, each fiber can be freely moved around to different targets. A few of these can be placed far away from the rest to capture the background sky; separate scans in which every fiber looks at the background sky simultaneously are not taken. In reduction, the spectra from these few sky fibers are extrapolated to the other fibers to subtract out the background sky spectral lines. This means the sky signal has to be individually scaled to each fiber to correctly account for the different amount of signal each fiber will have.

With the DensePak setup, however, the fibers are fixed in an array and cannot move relative to each other, so none can be placed far away from the rest to measure the background. Instead, separate scans are taken in which all the fibers are pointed at the background sky. You are then left with images of the target spectra through each fiber and separate sky spectra through the same fibers, which preserves the relative throughput of each fiber in both the target and sky scans. This means you won't have to worry about scaling the night sky spectrum for each individual fiber; you can just scale the intensity of the entire image and directly subtract the image of the sky spectra from the image of the target spectra.

Well, on to the actual steps:

1. Combine all the sky scans and remove cosmic rays using imcombine:

   imcombine  [scans to combine, can be an @ file]  [output file name] 
You can use imcombine's cosmic ray rejection or use a separate program such as L.A.Cosmic.

2. If your sky scans and target scans have different exposure lengths (e.g. 300 seconds and 700 seconds), you will need to divide all scans by their integration times to put them in terms of counts per second (the raw images are just a measure of counts). Use

   imarith  [sky scan]  /  [integration time]  [output file name] 
Note: when you do this, you will lose your signal-to-noise information.

3. Using imarith, scale down the night sky emission line intensity of the sky scans to match that of the target scans. Pick one prominent night sky line that you can easily identify in both sets of scans (ds9 can help you find starting intensity values for the lines), figure out how much stronger (or weaker) the line is on the sky scan than on the target scan, then divide the sky scan by that factor and it should be scaled fairly well.

   imarith  [input]  [operator]  [operand]  [output] 
Say that a certain night sky line has a value of 2 on the target scan and 5 on the sky scan. The sky line is then 5/2 = 2.5 times as strong as in the target scan, so divide the sky scan by 2.5 to bring that line down to the target's value (5 / 2.5 = 2):
   imarith  Sky.fits  /  2.5  sky0.4.fits 
This will make an image called sky0.4 that is the same as Sky, but only 0.4 ( = 1 / 2.5 ) times the intensity. You can also do it without creating a new image by giving the same file name for the input and output.

Note: As far as determining the amount you need to scale the night sky lines goes, you kind of have to do some trial and error. Again, ds9 can help you find some intensity values of the lines to work with. From there you can just do some simple math to determine how much stronger the signal in the sky is, then divide the sky scan by that value. This should get you pretty close.
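The scaling arithmetic can be sketched in a few lines of Python (this is just an illustration, not part of IRAF; the line peak values are the made-up 5-vs-2 numbers from the example above):

```python
# Toy illustration of the sky-scaling step: from the measured peak of one
# night-sky line in the combined sky scan and in the target scan, compute
# the factor to divide the sky scan by before subtracting it.

def sky_scale_factor(sky_line_peak, target_line_peak):
    """Factor by which to divide the sky scan so its line matches the target's."""
    return sky_line_peak / target_line_peak

factor = sky_scale_factor(5.0, 2.0)   # the 5-vs-2 example from the text
print(factor)          # → 2.5
print(5.0 / factor)    # → 2.0, i.e. the scaled sky line now matches the target
```

The resulting factor is what you would pass to imarith as the divisor.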

4. Subtract the combined and scaled sky scan from the target scans using imarith.

   imarith  [input target scan]  [operator (in this case a minus sign)]  [input sky scan]  [output file name] 

For example (with placeholder file names):

   imarith  obj0003.clean.fits  -  Sky.fits  [output file name]

Here is another description of this process, in words, for those who want another explanation:
To subtract the background sky continuum, you will combine sky scans from the same general region of the sky your target was in, then manually subtract these scans from the target scans. You will want to divide all of your scans by the integration time if the length of the sky scans and target scans are different. Assuming you have separate scans of background sky spectra, you can manually subtract these from the processed target scans outside of dohydra. But when you combine scans, you increase the signal-to-noise, and make each of the emission lines have a greater intensity. So when you subtract the sky from the target with imarith, the lines on the sky scan may be stronger than those on the target scan. This can be remedied by scaling down the intensity of the sky scan with imarith before subtraction. You want to scale the sky lines in the sky scan to the intensity of the sky lines in the target scan so that they (ideally) exactly cancel out when subtracted. Obviously there will still be some junk left over in practice, but this is what you should aim for, at least.

Measuring Velocities with splot

Under Construction

This will be a very brief guide to obtaining velocity values with the task 'splot' (located in cl.noao.onedspec, it is also available in the hydra package where you ran dohydra). To use splot, type

 splot  [images]  [line [band] ]
... or just do the ol' epar trick. This will bring up an irafterm window if there wasn't one before, which will plot the data from the indicated scans, one spectrum/fiber/aperture at a time. To move to a different aperture, type '(' or ')' to move down or up, respectively. To zoom in, use the handy-dandy window commands from before - with 'w'. This irafterm also has the added benefit of zooming in on the x range by simply hitting 'z'... no 'w' required. As always, a full list of commands can be obtained by typing '?'.

When you find a feature to which you would like to fit a gaussian (e.g. H-alpha), place your cursor at the base of the feature on the left side and hit 'k', then place your cursor at the base of the feature on the right side and hit 'k' again. This will fit a profile to the region you have enclosed. Many useful values will be spit out at the bottom of the irafterm (jot the ones you need down in a spreadsheet). On the far left is the centroid of the gaussian you fit - this is the central wavelength of the line you're after, which translates into a velocity. Next is the flux, in arbitrary units - good for comparing different lines to get ratios such as [NII]/Ha. Refer to the splot help file for a full discussion of the many things it can do for you.

The next step is to determine the velocity from the wavelength of the line.

The redshift z = (λ - λ0)/λ0   =  Δλ / λ0
Then use
velocity = v = c * Δλ / λ0    or    v = c * z
where λ0 is the rest wavelength of the line, λ is the observed wavelength, Δλ = λ - λ0, and c is the speed of light.
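As a quick worked example (the observed wavelength here is made up; 6562.8 Å is the rest wavelength of H-alpha):

```
λ0 = 6562.8 Å (H-alpha),  λ = 6700.0 Å
Δλ = 6700.0 - 6562.8 = 137.2 Å
z  = 137.2 / 6562.8 ≈ 0.0209
v  = c * z ≈ (3.00 × 10^5 km/s)(0.0209) ≈ 6270 km/s
```

(This is the simple non-relativistic formula, which is fine for small z.)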

There is another way to do this, and that is with the task fxcor (located in cl.noao.rv). This is a Fourier cross-correlation process. It basically takes one scan and tries to correlate another scan with it - it figures out the offset at which the prominent line features match up. As an analogy, think of two identical combs that have teeth of different lengths. The only difference between the combs is that the pattern of the teeth is slightly offset for one. If you were to perform a sort of Fourier cross-correlation on these combs, you would effectively be sliding them next to each other until their patterns matched up. See my crude cartoon below:

thing1:                  |   |          | 
                      |  |   ||  |      |
                      |  |   ||  |  |   || |

thing2:                   |         |
                          |       | ||    ||
                      |   || |    | ||||||||

cross-correlated:        |   |          | 
                      |  |   ||  |      |
                      |  |   ||  |  |   || |

                                        |         |
                                        |       | ||    ||
                                    |   || |    | ||||||||
The offset between the two would be measured, so if you know the wavelengths in the first (template) scan, you can apply the offset to determine the wavelengths, and thus the relative velocity, of the second.
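A minimal invocation might look like the following (the file names are hypothetical, and as with the other tasks, 'epar fxcor' will show you the full set of parameters):

```
cl> noao
cl> rv
cl> fxcor objscan tempscan
```

Here objscan is the scan whose velocity you want, and tempscan is the template it is cross-correlated against.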

Plotting the Data with fibers.pro

Under Construction

For those who need to rotate their images, ROTANGLE appears to be the parameter to use to figure out the position angle. See the DensePak user's manual - the 'On-Sky Orientation' section at the very bottom of the page.

Prep Work
Other Display Options
Interpreting Your Plots

So. Let's say that you've just downloaded fibers.pro, are getting ready to use it for the very first time, and are wondering where to start. Let's assume that the instrument from which you got your data has probably changed in some way since the program was written, and that you have IDL installed on your machine. This is a basic rundown of the steps you'll need to take before you can claim your long-awaited velocity plots:

  1. Calculate the size of the array/fibers to reflect the actual setup at the time your data was taken.
  2. Edit fibers.pro to update the sizes of the array and fibers.
  3. Make data tables of the coordinates and velocities to use in the program.
  4. Run the program and watch it spit out the files you've been waiting for!

You really only need to calculate the size of the array/fibers once, assuming that the instrument setup remained constant. If you're just working with relative positions, there is a helpful table on Birgit Otte's page with the relative positions of ALL the fibers made for DensePak. (You still need to figure out which fibers correspond to which apertures in your data, weed out the non-working fibers, etc.) It would take up too much space to list it here, so I'll write up a rough guide on another page, which can be found here ***link*** in the near future. Once all of that is finished, you can follow these relatively easy steps for successive iterations.

Prep Work

* Copy the fibers.pro program into the directory you want to use. (I find that doing your plots in a special 'plots' subdirectory of your data folder keeps things nice 'n tidy.)

* Make a .crd file. --> This is a plain-text data table that tells the program the values and coordinates to plot. Each line describes one fiber with three space-delimited entries: x-coordinate, y-coordinate, and data (velocity) value.
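A few lines of a .crd file would look like this (the third line is hypothetical; the first two are taken from the example table further down this page):

```
 0.92  -14.92  6895.32
-0.92  -14.92  6888.45
-2.76  -14.92  6890.11
```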


Running the Program

* Open IDL: in a terminal window like xterm or xgterm, go to the directory with your .crd file, then simply type 'idl' at the shell prompt.

* If you are only interested in viewing the plot on the screen (and not making an image file), then you want to open up a plot-viewing window with square dimensions before you run the program - otherwise it will stretch the output and look funny. You can adjust the size as needed, but the following command will give you a square window, 600 pixels to a side:

     IDL> window, xsize=600, ysize=600
* If you want to save the output as an image file, you add a special command BEFORE running fibers. There is no way to save the output as an image file after the program has already run, so if you forgot, you must set the plotting device and re-run the program. The following commands will set the output to create a .ps file:

     IDL> set_plot, 'ps'
     IDL> device, file='[DesiredOutputFileName].ps', bits=8, xsize=6, ysize=6, xoffset=1, yoffset=2, /inches

          (You can always change the sizes and offsets as you like by substituting different values above.)
* If you want to change the output back to the screen, use the following commands (and note that if you created a .ps with the previous device, it will be saved to file and be readable once you close the device here):

     IDL> device, /close
     IDL> set_plot, 'x'
* Run the program by entering "fibers" at the IDL prompt, then following the program's directions:
     Enter name of data file: [inputFile].crd 
     Enter number of lines: [# of lines in your .crd file]
     Enter minimum value for color bar: [min data value, rounded down] 
     Enter maximum value for color bar: [max data value, rounded up] 
     Enter plot title: [text you want to appear at the top of the plot]  

Note that if you want to run fibers on your next .crd file and make a new plot for that data, you must enter the "IDL>device,file='[DesiredOutputFileName].ps',bits=8,...." line with a new output file name BEFORE running fibers again. Also note that the data won't be written and your .ps file won't be readable until you either run fibers on another data set, close the IDL device (see above), or exit IDL.


The following is an example using some of my data. Directly below are links to a data table and the .crd file of that data, followed by a list of the IDL commands on the left and the resulting .ps image on the right:

***Table --> IDL commands on left, resulting ps on right***

        Enter name of data file: ############
        Enter number of lines: 83
        Enter minimum value for color bar: #######
        Enter maximum value for color bar: #######
        Enter plot title: #################

- Note that even though my data table listed velocities for fibers up to number 97, I told the program that there were only 83 lines. This is because ten fibers were dead or unused when the data was taken, and I didn't plot the four 'sky' fibers. Without those 14, the total count went down to 83. If you tell fibers that there are more lines in your .crd file than there actually are, you will get an error message that looks like this:

          % READF: End of file encountered. Unit: 1, File: template.crd
          % Execution halted at: FIBERS             13
          %                      $MAIN$          

- It's a good idea to use rounded numbers for your color bar min and max values. If possible, use numbers such that the range between the min and max are nicely divisible by 4. This will give you pretty numbers for the scale that is displayed next to the color bar.
--> Let's take, for example, a data set where your min and max velocity values are 5464 and 5841. There is nothing wrong (from the standpoint of running the program) with giving min and max color bar values of 5463 and 5842. However, the numbers appearing on the scale will be much prettier if you choose something like 5450 and 5850 for your min and max.


Other ways to display your data with fibers.pro

Sometimes you need to make plots that show multiple data values for the same fibers. For example, maybe for one position on some galaxy you want to display the velocity derived from H-alpha AND the velocity derived from another line. Or maybe you want to plot the H-alpha velocities for two different positions but show them on the same plot of relative fiber positions. Whatever the reason, there is a way to plot two data values for each fiber in fibers.pro.

Plotting two sets of values for each fiber - one small circle within a larger circle for every fiber:

This is very similar to the normal use of fibers.pro. You must still create a .crd file, but now with a slightly different format. Every line in the file must still have only three values - the x coordinate, the y coordinate, and the data value (velocity). To plot your second data value for the same fiber, you write the same fiber coordinates with the new data value on a new line DIRECTLY BELOW the line with the first data value. This will result in a smaller circle, representing the second velocity, drawn inside the first data point for that fiber. Here is an example:

***table with 3 columns***

   First velocity 
  (plotting only this one for a given fiber)

   0.92  -14.92  6895.32 
  -0.92  -14.92  6888.45

  Second velocity
 (plotting only this one for a given fiber)

   0.92  -14.92  6801.27
  -0.92  -14.92  6799.19

  Both velocities
 (plotted on the same fiber)

   0.92  -14.92  6895.32 
   0.92  -14.92  6801.27
  -0.92  -14.92  6888.45
  -0.92  -14.92  6799.19

Here is an example of what the differing data tables will look like when plotted out:

***Table w/ 2 columns & 2 rows: top is single plot, bottom is double plot. left is table entry example, right is resulting .ps plot***

*** short example of .crd, then link whole .crd file.*** *** picture of right-pointing arrow *** *** .ps file from the .crd ***
*** short example of double .crd, then link whole .crd file.*** *** picture of right-pointing arrow *** *** double point .ps file from the .crd ***

- Multiple arrays on one plot...

* Do it with fibers.pro
* Do it with Photoshop / The GIMP

Interpreting your plot

     ... more to come ...

I will try to update this guide and elaborate on the points as much as time allows.

You will probably want to plot up this data in some nice format after you complete this... Again, the fibers.pro program by Birgit Otte does the trick nicely. I will try to include a short section on plotting data with it here in the future.

That's it! You're done (as far as my guide is concerned, anyway)! So go reward yourself with something nice... I usually opt for cookies and/or video games.



IRAF Spectroscopy Documents

Birgit Otte's Page Dealing with IDL Plots

In progress: A brief guide for L.A.Cosmic

Original from 2004. Last Updated March 11, 2008