Notes on the Essentials of Astronomy Data

1 Introduction

These notes outline the basic process for reducing images from a CCD on a telescope. The primary purpose of this tutorial is to give a small group of software developers some experience reducing actual data. The examples are constructed to clarify the steps that need to be done, rather than to show the most convenient tools one might actually use in practice.

2 Online Data

2.1 The Sloan Digital Sky Survey

  1. The SDSS Catalog Archive Server

    The SDSS CAS is a front end to a database of object parameters and spectra for objects detected in the SDSS. One of the easiest ways to start exploring is through the Navigator tool. Other tools let users explore the database schema and even make SQL queries of the database.

  2. The SDSS Data Archive Server

    The SDSS DAS serves most of the files read and written by the SDSS data reduction pipeline in the course of reducing the SDSS data. It is from files in the DAS that the CAS database is generated. The DAS documentation provides a high level summary of the SDSS processing pipelines, including files read and written by different programs.

    The most interesting of these files are the corrected frames and the final object catalogs, examples of which can be found off of this page. These files, as with many in astronomy, are in FITS format.

2.2 Others

  • Multimission Archive at STScI: MAST is an archive of data from the Hubble Space Telescope (HST) and a number of other instruments.
  • NASA/IPAC Extragalactic Database: NED is an online database of data and catalogs, mostly collected from papers.
  • The SAO/NASA Astrophysical Data System: The ADS is an on-line, searchable database of papers. It contains abstracts for just about any paper related to astronomy, and links to papers on arXiv, the official journal site, or their own scanned versions. Its metadata on specific papers often includes links to the data used in that paper, if it is available on-line. (For example, a paper that uses HST data will have a link to the relevant data set in MAST.)
  • CDS Simbad: Simbad is an online database of objects, with functionality that overlaps that of NED.
  • CDS Vizier: VizieR is an online database of catalogs.
  • SkyView: SkyView is a tool for exploring on-line data from a wide range of observatories and datasets.

3 Data Formats

3.1 The Flexible Image Transport System (FITS)

The simplest FITS file contains a single image. It consists of a header and a data section.

The header and data section are each an integer number of 2880 byte blocks long, and if the content of either does not naturally fall on a block boundary, it is padded to do so. This block length has no other significance.

The header consists entirely of ASCII text, and is divided into 80 byte header cards. Header cards are not terminated (and are not permitted to contain carriage returns); each one starts immediately after the one that precedes it, such that the first header card occupies bytes 1 through 80, the second 81 through 160, etc. This is an irritation if one wants to view the FITS file on a terminal that is not exactly 80 characters wide. The following was recorded on a 78 character wide xterm:

bash$ head -c 400 mdR-00114173.fits
SIMPLE  =                    T
  BITPIX  =                   16
    NAXIS   =                    2
      NAXIS1  =                 2128
        NAXIS2  =                 2069
          bash$
bash$

Each 80 character header card starts with a keyword of up to 8 upper case letters and numbers. If the keyword has an associated value, the ninth and tenth characters are always "= ", and the remainder of the header card contains the value, represented in ASCII, and an optional comment. The comment is separated from the value with a "/" character.
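Parsing such a card needs only basic string handling. A minimal sketch (it ignores the complication of quoted string values that themselves contain a "/" character):

```python
def parse_card(card):
    """Split one 80-character header card into (keyword, value, comment).

    A simplified sketch: quoted string values containing "/" are not handled.
    """
    keyword = card[:8].strip()
    if card[8:10] != "= ":           # keyword with no associated value
        return keyword, None, None
    rest = card[10:]
    if "/" in rest:                  # a comment follows the value
        value, comment = rest.split("/", 1)
    else:
        value, comment = rest, ""
    return keyword, value.strip(), comment.strip()

card = "NAXIS1  =                 2128 / Columns in the image".ljust(80)
print(parse_card(card))   # ('NAXIS1', '2128', 'Columns in the image')
```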

The first header card always contains the keyword "SIMPLE" with the value "T" (true), indicating that the file is a standard FITS file. All FITS headers contain a BITPIX keyword, giving the number of bits per pixel, the NAXIS keyword, indicating the number of axes in the contained array (usually 2 for CCD images), and a sequence of NAXISx keywords, giving the size in each dimension. The last keyword in every header is "END", which is followed by enough padding to bring the total length of the header to an integer number of 2880 byte blocks.

The number of bytes in the data section can therefore be calculated from the header. For example, for a 2 dimensional image, the size is (BITPIX/8) * NAXIS1 * NAXIS2, plus padding to bring the data section up to an integer number of 2880 byte blocks.
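As a sketch in code, for the 2128 by 2069 pixel, 16 bit image shown above:

```python
BLOCK = 2880  # FITS block size, in bytes

def data_section_bytes(bitpix, naxes):
    """Bytes occupied by the data section, including padding to a whole
    number of 2880 byte blocks. naxes is the tuple (NAXIS1, NAXIS2, ...)."""
    nbytes = abs(bitpix) // 8
    for n in naxes:
        nbytes *= n
    # round up to the next multiple of the block size
    return ((nbytes + BLOCK - 1) // BLOCK) * BLOCK

print(data_section_bytes(16, (2128, 2069)))   # -> 8807040
```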

The data section immediately follows the header, and is a simple sequence of pixel values. The type of the pixel values is determined by the value of BITPIX. Negative values of BITPIX indicate IEEE-754 floating point values. So, a BITPIX of 32 indicates 4 byte signed integers, while a BITPIX of -32 indicates 4 byte floating point numbers. The data in FITS is always big endian.
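In NumPy terms, the mapping from BITPIX to a (big endian) dtype can be sketched as follows; the table is my own rendering of the rules above:

```python
import numpy as np

# Big endian ('>') dtypes for each legal BITPIX value
BITPIX_DTYPE = {
      8: np.dtype('>u1'),   # unsigned 8 bit integers
     16: np.dtype('>i2'),   # 16 bit signed integers
     32: np.dtype('>i4'),   # 32 bit signed integers
     64: np.dtype('>i8'),   # 64 bit signed integers
    -32: np.dtype('>f4'),   # IEEE-754 single precision floats
    -64: np.dtype('>f8'),   # IEEE-754 double precision floats
}

# Interpret a raw data section as an image array; here the "raw" bytes are
# just a stand-in for bytes read from a file.
raw = np.arange(6, dtype='>i2').tobytes()
im = np.frombuffer(raw, dtype=BITPIX_DTYPE[16]).reshape(2, 3)
print(im)
```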

The initial ("primary") HDU may be followed by additional ("extension") HDUs. Extension HDUs follow most of the same rules as the primary HDU, with a few exceptions.

  • While the primary HDU must be an image (or other regular array of bytes), extensions may be of other types, such as tables.
  • An extension HDU begins with the "XTENSION" keyword instead of "SIMPLE". The value of this keyword gives the type of the extension.

The most important extension type is the binary table. A binary table extension still includes the BITPIX, NAXIS, NAXIS1, and NAXIS2 keywords; the BITPIX keyword always has a value of 8, and NAXIS a value of 2. The NAXIS1 and NAXIS2 keywords give the length of each row (in bytes) and the number of rows. This allows FITS readers that cannot read FITS binary tables to calculate the length of the data section and skip to the next HDU using the same calculation used for images.

FITS binary table headers also contain a number of keywords for each column in the table. For example, TTYPE4, TFORM4, TDISP4, and TUNIT4 contain the column name, data type, recommended display formatting, and physical units for column 4 of the table. TFORMn is the only one of these that is actually required by the standard, as TFORM1 is needed to determine the starting byte of column 2, and so on, but at least TTYPEn is also usually present.

Note that cell values in FITS binary tables may be arrays themselves.
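As a cross-check, the row length implied by the TFORMn values must equal NAXIS1. A sketch of that calculation for the common type codes (element sizes are those given by the FITS standard; 'X' bit fields are treated as packed into whole bytes):

```python
import re

# Bytes per element for common binary table TFORM type codes
TFORM_BYTES = {'L': 1, 'B': 1, 'I': 2, 'J': 4, 'K': 8,
               'A': 1, 'E': 4, 'D': 8, 'C': 8, 'M': 16}

def row_bytes(tforms):
    """Total bytes per row implied by a list of TFORMn values, e.g. ['5E', 'J'].
    A TFORM value is an optional repeat count followed by a type code."""
    total = 0
    for tform in tforms:
        m = re.match(r'(\d*)([A-Z])', tform.strip())
        count = int(m.group(1)) if m.group(1) else 1
        code = m.group(2)
        if code == 'X':                      # bit field, packed into bytes
            total += (count + 7) // 8
        else:
            total += count * TFORM_BYTES[code]
    return total

print(row_bytes(['J', '5E', '8A']))   # 4 + 20 + 8 = 32
```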

3.2 ASCII

ASCII tables, particularly white space separated value tables, are pretty common in astronomy. Some older tools use them for everything except images, while newer tools have moved to using FITS binary tables for many things.

ASCII configuration files are still ubiquitous.

3.3 VOTable

VOTable is an XML file format for tables, designed so that they can be directly mapped to and from FITS binary tables. Several newer FITS libraries use a common interface for VOTables and FITS binary tables.

VOTables have the advantage that they can be streamed before completion, while a FITS table must declare its length in its header.

4 Data Analysis and Exploration Software for Astronomy

4.1 Plain old UNIX/linux

One can explore at least the header of the FITS file using standard linux utilities. The header of the FITS file is plain ASCII text, but has a few quirks; each line of the header is exactly 80 characters long, but is not delimited. One way to look at the header in linux that accommodates this is to use hexdump:

bash$ wget -q http://das.sdss.org/imaging/3630/40/dr/3/drObj-003630-3-40-0083.fit
bash$ ls -l drObj-003630-3-40-0083.fit
-rw-rw-r--  1 neilsen sdss 4933440 Aug  7 00:05 drObj-003630-3-40-0083.fit
bash$ hexdump -e '80/1 "%_p" "\n"'  drObj-003630-3-40-0083.fit | head
SIMPLE  =                    T
BITPIX  =                    8
NAXIS   =                    0
EXTEND  =                    T
RUN     =                 3630 / Imaging run number.
CAMCOL  =                    3 / Column in the imaging camera.
RERUN   =                   40 / Rerun number.
FIELD0  =                   83 / First field reduced.
NFIELDS =                    1 / Number of fields reduced.
STRIPE  =                   15 / Stripe number.

This hexdump command has the added advantage of not sending non-ASCII characters to your terminal when it reaches the data section of the file.

4.2 DS9

There are many tools available for exploring FITS files. One of the better viewing tools for FITS images is ds9.

bash$ wget -q http://das.sdss.org/imaging/3630/40/corr/3/fpC-003630-r3-0083.fit.gz
bash$ setup ds9
bash$ ds9 fpC-003630-r3-0083.fit.gz

ds9 can read gzipped FITS files transparently; there is no need to decompress the file explicitly. When initially opened, the scaling of the image pixel values to gray levels is often not useful. You can change the scaling by choosing the "scale" tab. In the example images, the "histogram" and "zscale" (under "more") options are good choices. You can also fiddle with the scaling using the right mouse button on the scale bar at the bottom of the image.

You can center a point on the image by clicking on it with the middle mouse button.

You can also look at the header of a FITS image in ds9 using the "File/Display Fits Header…" menu option.

For the example image, you can see that there is a sharp, vertical discontinuity in the sky level of the image. This is because different amplifiers read out the right and left halves of the CCD.

4.3 ftools and fv

ftools is a collection of executables for manipulating FITS files. These include extraction of data into ASCII tables, printing headers, extracting or editing contents of either the headers or data, plotting and arithmetic, validity checking, and more.

On sdsslnx33, to make ftools easy to set up, you need something like these lines in your .bashrc:

export HEADAS=/opt/headas-6.3/i686-pc-linux-gnu-libc2.3.4
alias heainit=". $HEADAS/headas-init.sh"

Then, before running any ftools commands, you need to trigger the heainit script:

bash$ heainit

fv, technically part of ftools but available and often used as a distinct entity, is an interactive GUI for exploring FITS files. Its real strength is in dealing with tables, and it has hooks for using ds9 for image viewing.

4.4 Python

There are several python libraries that support reduction of astronomical images. Three of the most important are:

  • numPy: a library of tools for manipulating arrays of data, such as images;
  • pyFITS: a library for reading, writing, and otherwise manipulating FITS files; and
  • numdisplay: a tool for interacting with ds9 from Python.

On the EAG cluster at Fermilab, using UPS to set up stsci_python provides python with these libraries:

bash$ PYTHONPATH=""
bash$ setup stsci_python
bash$ python
Python 2.5.1 (r251:54863, Dec 20 2007, 12:03:19)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-59)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

I empty my PYTHONPATH environment variable to make sure I don't have any inappropriate versions of python libraries in my path; stsci_python sets up everything it needs in the FNAL/EAG environment.

To start using pyFITS, you need to load the relevant packages:

>>> import pyfits
>>> from numpy import *
>>> import numdisplay

You can explore a FITS header thus:

>>> hdr = pyfits.getheader('http://das.sdss.org/www/cgi-bin/drC?RUN=3630&RERUN=40&CAMCOL=3&FIELD=83&FILTER=r')
>>> print hdr
SIMPLE  =                    T
BITPIX  =                   16
NAXIS   =                    2
NAXIS1  =                 2048
NAXIS2  =                 1489
...
>>> print hdr['airmass']
1.23555004242

Note that you can give pyfits a URL instead of a file name, and it will work just the same.

You can load image data, manipulate it, and view it with ds9:

>>> im = pyfits.getdata('http://das.sdss.org/www/cgi-bin/drC?RUN=3630&RERUN=40&CAMCOL=3&FIELD=83&FILTER=r')
>>> numdisplay.display(im,z1=1000,z2=1500)
Image displayed with Z1:  1000  Z2: 1500

You can load data from FITS tables into Python arrays:

>>> catalog = pyfits.getdata('http://das.sdss.org/pt/objects/v8_0v3_4-0/GCL/NGC5053short/kaCalObj-00138142.fit')
>>> catalog.names
('id', 'ra', 'raErr', 'dec', 'decErr', 'mag', 'magErr', 'saturated')
>>> mags = catalog.fields('mag')
>>> mags_r = mags[:,2]
>>> max(mags_r)
18.5981006622

The image and table data all get stored in numpy arrays, and can be explored visually and mathematically using any of the many plotting and math tools available for numpy (e.g., matplotlib).

In the example, the mags field returned a two dimensional array; each row of the table was itself an array of numbers, in this case an array of five magnitude values, one for each SDSS filter. The slice (mags[:,2]) grabs just the third element, which corresponds to the r filter.

4.5 R

R is still fairly obscure within the astronomical community, but it is my personal favorite tool for plotting and statistics, and there is a FITS reader in CRAN. R is available as part of most linux distributions, and the FITS reader is easy to install. From within R:

> install.packages("FITSio")

downloads the FITS reader from CRAN and installs it. You need to load the package before using it:

> require("FITSio")
Loading required package: FITSio
[1] TRUE

To look at a header, and read the value for a specific keyword:

> im <- readFITS("http://das.sdss.org/www/cgi-bin/drC?RUN=3630&RERUN=40&CAMCOL=3&FIELD=83&FILTER=r")
> im$hdr[1:10]
 [1] "SIMPLE" "T"      "BITPIX" "16"     "NAXIS"  "2"      "NAXIS1" "2048"
 [9] "NAXIS2" "1489"
> im$hdr[which(im$hdr=="AIRMASS")+1]
[1] "1.23555004241777"

Like pyFITS, R can take a URL in place of a file name.

To read a table and make some plots and histograms:

> require(FITSio)
Loading required package: FITSio
[1] TRUE
> catalog <- readFrameFromFITS("http://das.sdss.org/pt/objects/v8_0v3_4-0/GCL/NGC5053short/kaCalObj-00138142.fit")
> attach(catalog)
> plot(ra,dec)
> hist(mag.2,xlim=c(10,20),ylim=c(0,50),breaks=500)

(The handling of table cells that are themselves arrays is a little awkward in R.)

You can even display an image in R:

> im <- readFITS("http://das.sdss.org/www/cgi-bin/drC?RUN=3630&RERUN=40&CAMCOL=3&FIELD=83&FILTER=r")
> image(im$imDat,zlim=c(1100,1200),col=gray.colors(256))

but images aren't R's real strength.

4.6 IRAF and PyRAF

IRAF is an environment and large collection of tools for reducing astronomical data. The framework itself and many of its tools were developed by NOAO. The NOAO collection has been supplemented significantly by STScI.

IRAF is installed on the EAG cluster as a UPS package. The UPS setup calls an IRAF environment setup script that only works in csh or its descendants, such as tcsh:

bash$ # IRAF startup scripts need to be run from csh
bash$ tcsh
tcsh$ setup iraf

The first time you run it, you may get instructions on preparing the image viewing link used by ds9 (or ximtool); follow these instructions.

Before running IRAF for the first time, you need to prepare a login.cl file. From the directory within which you will be starting IRAF (or PyRAF), run mkiraf:

tcsh$ cd ~
tcsh$ mkiraf

One of the STScI contributions is PyRAF, which allows IRAF utilities to be called from within a python interpreter instead of the standard IRAF "cl" command line.

bash$ setup ds9
bash$ ds9 &
[1] 32199
bash$ # IRAF startup scripts need to be run from csh
bash$ tcsh
tcsh$ set PYTHONPATH=""
tcsh$ setup iraf
tcsh$ setup pyraf
just set PYTHONVERS 2.5
tcsh$ pyraf
    NOAO/IRAFNET PC-IRAF Revision 2.14 Fri Nov 30 15:27:05 MST 2007
    This is the RELEASED version of IRAF V2.14 supporting PC systems.


  Welcome to IRAF.  To list the available commands, type ? or ??.  To get
  detailed information about a command, type `help <command>'.  To run  a
  command  or  load  a  package,  type  its name.   Type  `bye' to exit a
  package, or `logout' to get out  of the CL.    Type `news' to find  out
  what is new in the version of the system you are using.

  Visit http://iraf.net if you have questions or to report problems.

  The following commands or packages are currently defined:



      +------------------------------------------------------------+
      |             Space Telescope Tables Package                 |
      |           TABLES Version 3.8, February 2008                |
      |                                                            |
      |   Space Telescope Science Institute, Baltimore, Maryland   |
      |   Copyright (C) 2003 Association of Universities for       |
      |            Research in Astronomy, Inc.(AURA)               |
      |       See stsdas$copyright.stsdas for terms of use.        |
      |         For help, send e-mail to help@stsci.edu            |
      +------------------------------------------------------------+
clpackage/:
 apropos        fitsutil/       language/       plot/           stsdas/
 clpackage/     galphot/        lists/          proto/          system/
 color/         gemini/         nmisc/          rvsao/          tables/
 dataio/        gmisc/          noao/           softools/       user/
 dbms/          images/         obsolete/       stecf/          utilities/
PyRAF 1.4 (2007Jun15) Copyright (c) 2002 AURA
Python 2.5.1 Copyright (c) 2001-2007 Python Software Foundation.
Python/CL command line wrapper
  .help describes executive commands
-->

One quirk of IRAF is that the setup script only works under csh or one of its descendants (e.g., tcsh), and not sh or bash. Hence the execution of tcsh above.

Early versions of IRAF were designed for interaction with the user on a text terminal, and to show images on an IIS Model 70 display. As monitor and display technology progressed, programs such as ximtool, SAOimage, SAOtng, and now ds9 have replaced this external display, but still interact with IRAF using a (now modified) version of the IIS protocol over fifo pipes, UNIX sockets, or inet sockets. This interaction continues to be a bit brittle, and sometimes IRAF, the viewer, or both need to be restarted.

4.7 Others

Many other tools for display and analysis of FITS images are available. Some examples include:

  • Dervish and astrotools: Dervish and astrotools consist of a collection of C and fortran libraries with tcl wrappers for analysis of astronomy data. These tools were originally developed for the SDSS. Dervish uses a display interface for images similar to IRAF's, allowing the use of ds9 for display of images. See my notes on Dervish for more information on using Dervish and astrotools.
  • Aladin: Aladin is a Java utility that lets one download image and catalog data from a variety of archives and perform some basic image analysis. The functionality overlaps ds9.
  • Tool for OPerations on Catalogues And Tables: TOPCAT is a tool for exploring and analyzing catalogs and tables, with functionality that overlaps fv. (See Astrogrid below.)
  • IDL astronomy users library at Goddard includes IDL FITS and other astronomy tools
  • MissFITS, Terapix tools for manipulating FITS files
  • AIPS++, a tool set written in C++ for analysis of radio data
  • Astrogrid is a collection of tools for doing astronomy with web and grid services. The collection contains TOPCAT. Download the java jar files from the Astrogrid download page, and start them with java:
sdsslnx33$ java -Xmx512M -jar vodesktop-1.2.2.app.jar > /dev/null &
[5] 19192
sdsslnx33$ java -Xmx512M -jar topcat-full.jar &
  • CFITSIO is probably the most popular FITS library, and even includes native remote access to files, not just through http and FTP but even gridftp.

The FITS support office has much more extensive lists of FITS libraries and FITS viewers.

5 The example data for reduction

The SDSS Photometric Telescope (PT) and its CCD camera are a reasonable example of a "plain vanilla" astronomical instrument. As an example exposure, this tutorial will use PT exposure 114174, taken on MJD 52008, the night of April 8, 2001. The raw data for this entire night can be found in the SDSS DAS directory for that night.

Start by downloading the relevant raw data and unpacking it:

bash$ wget --quiet http://home.fnal.gov/~neilsen/tmp/samplePT.tgz
bash$ tar -zxf samplePT.tgz
bash$ cd samplePT

The exposure itself is a FITS file.

6 Reducing a Set of Images: an Outline

  1. Process calibration frames
    • master bias for each night
    • flat field for each filter of each night
  2. Correct each exposure
    • bias subtraction
    • flat field correction
    • bad pixel masking
  3. Generate an initial catalog for each exposure
  4. Calibrate each exposure astrometrically
  5. Calibrate each exposure photometrically
  6. Model the PSF in each exposure
  7. Remap each exposure to a common coordinate system
  8. Coadd exposures
    • combine all exposures with a common filter
    • refine bad pixel mask
  9. Generate a catalog from the coadded images
  10. Export the catalogs and cross match them with other catalogs

7 Image correction

7.1 Introduction

The number of counts recorded in each pixel of the image is a linear function of the flux falling on that pixel of the CCD:

\[ \text{counts} = a \cdot \text{flux} + b \]

Both a and b vary by pixel, and may also vary with time. The time variation is such that it is generally wise to recalculate them on a daily basis, although, on the PT, in practice one can get away with using values from nearby nights.

The value of b, the bias, can be measured using a combination of zero second exposures and overscans. The linear constant, a, can be measured using observations of a uniformly illuminated screen, or perhaps a blank area of the sky.

The process described below is neither the optimal nor the most convenient approach. Rather, it is designed to clarify what needs to be done.

This page only describes the most common image correction operations: bias subtraction and flat fielding. A few others are sometimes needed. Charge Transfer Efficiency (CTE) correction accommodates inefficiency in the transfer of charge between pixels during readout. Dark correction removes the natural accumulation of charge over time not due to hits by photons. The former is unimportant for most uses of PT data, and the camera on the PT has a negligible dark current, which can be ignored.
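In code, the whole correction amounts to inverting counts = a * flux + b pixel by pixel. A schematic sketch (the array names here are illustrative, not from any pipeline):

```python
import numpy as np

def correct(raw, bias, flat):
    """Estimate the flux from raw counts: subtract the bias frame (the b
    term), then divide by the normalized flat field (proportional to a)."""
    return (raw.astype(np.float64) - bias) / flat

# Toy 2x2 example: uniform flux of 100, bias of 10, sensitivity varying by pixel
flat = np.array([[1.0, 1.1],
                 [0.9, 1.0]])
raw = 100.0 * flat + 10.0
print(correct(raw, 10.0, flat))   # every pixel recovers 100.0
```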

7.2 Bias subtraction

  1. The Master Bias

    The sample data directory contains a sequence of bias images, exposures 114031 through 114041. These are all zero second exposures, readouts of the CCD taken without opening the shutter. These images may still, however, contain cosmic rays. A quick-and-dirty way to get a bias frame is to median filter the images. I will use Python and a few Python libraries: pyfits, a Python FITS reader; numpy, which handles arrays such as FITS images; and numdisplay, which provides a Python interface to ds9 for viewing data arrays.

    Start python as in 4.4, then

    Python 2.5.1 (r251:54863, Dec 20 2007, 12:03:19)
    [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-59)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pyfits
    >>> from numpy import *
    >>> import numdisplay
    >>> biasData = array([pyfits.getdata("mdR-%08d.fits" % expId) for expId in range(114031,114042)])
    >>> bias = median(biasData)
    

    If we want, we can save our bias as a FITS file:

    >>> hdu=pyfits.PrimaryHDU(bias)
    >>> hdu.header.add_comment("Median of PT exposures 114031 through 114041")
    >>> hdu.writeto("bias.fits")
    

    We can display the FITS image using ds9. Alternatively, the numdisplay package provides a Python interface to ds9 for viewing data arrays. We do not even have to save the array to use this interface.

    >>> median(bias)
    array([ 1346.,  1346.,  1346., ...,  1272.,  1272.,  1272.], dtype=float32)
    >>> numdisplay.display(bias,z1=1200,z2=1400)
    Image displayed with Z1:  1200  Z2: 1400
    

    The numdisplay display quantizes the image to 256 gray levels before sending it to ds9, and does not always choose useful transformations automatically. It has a number of parameters that let you control this. In the above example, I use the median function to get the medians of the columns in the bias array to get an idea of what the bias level is, and force the min (z1) and max (z2) limits of the transformation accordingly.
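    The mapping numdisplay applies can be sketched as a simple linear quantization; this is my own illustration of the idea, not numdisplay's actual code:

```python
import numpy as np

def quantize(im, z1, z2):
    """Linearly map pixel values in [z1, z2] onto the 256 gray levels,
    clipping values outside the range."""
    scaled = (im - z1) / float(z2 - z1)   # 0.0 at z1, 1.0 at z2
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

im = np.array([1100.0, 1200.0, 1300.0, 1500.0])
print(quantize(im, 1200, 1400))   # [  0   0 127 255]
```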

    Now, load the data image and apply the bias, and take a look:

    >>> rawImage = pyfits.getdata("mdR-00114174.fits")
    >>> dbImage = rawImage - bias
    >>> numdisplay.display(dbImage)
    Image displayed with Z1:  -6.0  Z2: 64182.0
    >>> numdisplay.display(dbImage,z1=0,z2=1000)
    Image displayed with Z1:  0  Z2: 1000
    

    Here, I chose the scaling using an educated guess.

  2. The Overscan Region

    While the master bias frame takes into account pixel to pixel differences in bias, the bias can change significantly with time over the course of the night; the true bias of the data image will be slightly different. These differences can be estimated using overscan regions: pixels in the final image that have been read through the CCD amplifier but that were not part of the exposed image. For the PT, these data values are obtained from the CCD by reading more pixels per row and more rows per readout than are actually present on the CCD. In other instruments, actual obscured pixels are used.

    There are no officially standard FITS keywords for designating what sections of a readout are overscan, but the information is generally present somewhere in the header. The PT data uses the header keywords BIASSEC0, BIASSEC1, DATASEC0, and DATASEC1 to designate the areas of the CCD used for the row bias and data sections for amplifiers 1 and 2, respectively. There is also a column bias section, but it is not explicitly specified in the header.

    Bias variations over the night are often modest. We can check whether use of the overscan region is necessary by looking at the statistics of the overscan regions of the bias image, as debiased by the master bias frame alone. If these are near zero with minimal variance, then the master bias frame alone is doing a good job.

    I start by reading the keywords from the header that tell me where in the image the overscan regions are stored, and then extract the corresponding regions:

    >>> dataHdr=pyfits.getheader("mdR-00114174.fits")
    >>>
    >>> print dataHdr['BIASSEC0']
    [21:40,22:2069]
    >>> biasOvscn0 = dbImage[21:2069,20:40]
    >>>
    >>> print dataHdr['BIASSEC1']
    [2089:2108,22:2069]
    >>> biasOvscn1 = dbImage[21:2069,2088:2108]
    

    Note the difference between the ordering of axes and the pixel indexing and slicing conventions in Python and the FITS header, whose keywords were written following IRAF conventions.

    (Normally, I would extract the numbers from the strings returned from the header, but the example is easier to follow without the string handling.)
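    That string handling might look like the following sketch (my own helper, not part of pyfits), converting an IRAF-style section string (1-based, inclusive, column range first) into Python slices (0-based, exclusive, row first):

```python
import re

def section_to_slices(sec):
    """Turn an IRAF section string like '[21:40,22:2069]' into a
    (row_slice, col_slice) pair usable to index a numpy image array."""
    m = re.match(r'\[(\d+):(\d+),(\d+):(\d+)\]', sec)
    x1, x2, y1, y2 = map(int, m.groups())
    # shift the start down by one (0-based); the inclusive end becomes
    # the exclusive end unchanged; swap to row-first order
    return slice(y1 - 1, y2), slice(x1 - 1, x2)

rows, cols = section_to_slices('[21:40,22:2069]')
print(rows, cols)   # slice(21, 2069, None) slice(20, 40, None)
```

    The result matches the hand-written slice dbImage[21:2069,20:40] used above.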

    Carefully examine the overscan regions, first by looking at the images:

    >>> numdisplay.display(biasOvscn0)
    Image displayed with Z1:  -6.0  Z2: 8.0
    >>> numdisplay.display(biasOvscn1)
    Image displayed with Z1:  -2.0  Z2: 10.0
    

    Notice that there are signs that something funny is going on; when zoomed out, you can see faint striping. Further exploration shows that, although strange, the level of the signal involved is much lower than the noise we expect in our final image, and so we ignore it.

    Now we look at some statistics to see whether we need to change the level based on the overscan region, and whether differences in pixel to pixel variations are significant:

    >>> biasLevel0 = mean(reshape(biasOvscn0,size(biasOvscn0)))
    >>> biasStd0 = std(reshape(biasOvscn0,size(biasOvscn0)))
    >>> expectedStd0 = dataHdr['RDNOISE0']/dataHdr['GAIN0']
    >>> print "Amp 0: mean %f, measured std. %f, expected std. %f" % (biasLevel0, biasStd0, expectedStd0)
    Amp 0: mean 1.954028, measured std. 1.359730, expected std. 1.520000
    >>>
    >>> biasLevel1 = mean(reshape(biasOvscn1,size(biasOvscn1)))
    >>> biasStd1 = std(reshape(biasOvscn1,size(biasOvscn1)))
    >>> expectedStd1 = dataHdr['RDNOISE1']/dataHdr['GAIN1']
    >>> print "Amp 1: mean %f, measured std. %f, expected std. %f" % (biasLevel1, biasStd1, expectedStd1)
    Amp 1: mean 4.498950, measured std. 1.476259, expected std. 1.770000
    >>>
    

    Note that the standard deviation of the bias section after the master bias is applied is less than expected from the read noise alone; the master bias does a good job taking out the pixel to pixel variations in the bias. So, we only need to apply the offset in bias level:

    >>> print dataHdr['DATASEC0']
    [41:1064,22:2069]
    >>> dbImage[:,:1064] = dbImage[:,:1064] - biasLevel0
    >>>
    >>> print dataHdr['DATASEC1']
    [1065:2088,22:2069]
    >>> dbImage[:,1064:] = dbImage[:,1064:] - biasLevel1
    

    We now have our debiased image stored in the dbImage python array.

7.3 Flat fielding

The next step is to "flat field" the image, or rescale each pixel so that the "a" term has a common value for all pixels.

Combine all the flat field exposures, debiasing each one as it is read:

>>> # Create an empty list in which to accumulate flat field exposures
... flatList = []
>>> # Look up the exposure IDs for the r flat fields from the night log
... flatExpIds = (114044,114049,114054,114060)
>>> for expId in flatExpIds:
...   #
...   # Read the raw flat field exposure from disk
...   #
...   flatRaw = pyfits.getdata("mdR-%08d.fits" % expId)
...   #
...   # The flatfield exposures need to be debiased, just like the data
...   # exposure. Here, I assume the overscan regions are the same
...   # as in the data exposure. It would be safer to extract the region from
...   # each header.
...   #
...   dbFlat = flatRaw - bias
...   biasOvscn0 = dbFlat[21:2069,20:40]
...   biasLevel0 = mean(reshape(biasOvscn0,size(biasOvscn0)))
...   dbFlat[:,:1064] = dbFlat[:,:1064] - biasLevel0
...   biasOvscn1 = dbFlat[21:2069,2088:2108]
...   biasLevel1 = mean(reshape(biasOvscn1,size(biasOvscn1)))
...   dbFlat[:,1064:] = dbFlat[:,1064:] - biasLevel1
...   #
...   # Append our debiased flat field to the list
...   #
...   flatList.append(dbFlat)
...
>>>

Create a single array that is the median of the flat field exposures:

>>> flat = median(array(flatList))

The "array" function turns a list of arrays of the same size and type into a single array, in which the first axis corresponds to the index of the list. The median command takes the median along the first axis.
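A toy illustration of this stacking and of how the median rejects cosmic rays (note that current numpy requires the axis to be given explicitly):

```python
import numpy as np

# Three small "exposures", one of which has a cosmic ray hit in one pixel
exposures = [np.array([[10., 10.], [10., 10.]]),
             np.array([[11., 10.], [10., 9.]]),
             np.array([[10., 500.], [10., 10.]])]   # 500: cosmic ray

stack = np.array(exposures)          # shape (3, 2, 2): first axis indexes the list
combined = np.median(stack, axis=0)  # per-pixel median across the exposures
print(combined)                      # the cosmic ray is rejected
```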

Now extract just the data section of the flat field. Again, it would be safer to determine the limits using the DATASEC0 and DATASEC1 keywords from the FITS header:

>>> dataFlat = flat[21:2069,40:2088]

Now normalize the flat field to have a mean of 1. To take the mean over all pixels of the data section, I use the "reshape" function to cast the 2-dimensional flat field image into a one dimensional array.

>>> flat = flat / (mean(reshape(dataFlat,size(dataFlat))))

Finally, apply the flat field to data image, and display the result:

>>> ffImage = dbImage/flat
>>> numdisplay.display(ffImage,z1=0,z2=1000)
Image displayed with Z1:  0  Z2: 1000
>>> numdisplay.display(ffImage,z1=400,z2=500)
Image displayed with Z1:  400  Z2: 500

7.4 Write the data image, with necessary changes to the header

We now have the data in our original image bias subtracted and flat fielded. We now need to extract the part of the array which is true sky data:

>>> ffData = ffImage[21:2069,40:2088]

and write it to a FITS file on disk. Most of the metadata in the FITS header in the original exposure applies to the corrected image, so we can copy the header from the original exposure FITS file and then update or remove the invalidated FITS keywords:

>>> ffData = ffImage[21:2069,40:2088]
>>> hdu=pyfits.PrimaryHDU(ffData,dataHdr)
>>> for keyword in ["DATASEC0","DATASEC1","BIASSEC0","BIASSEC1"]:
...   del hdu.header[keyword]
...
>>> hdu.header["CRPIX1"] = hdu.header["CRPIX1"] - 40
>>> hdu.header["CRPIX2"] = hdu.header["CRPIX2"] - 21
>>> hdu.header.add_comment("Bias subtracted, flat fielded, and trimmed")
>>> outContent = pyfits.HDUList([hdu])
>>> outContent.writeto("im-114174.fits")

8 Looking over the image, and estimating the sky statistics and PSF width

Start python as in 4.6, then:

--> cd ${DATAPATH}/samplePT
--> display im-114174.fits 1
z1=358.1834 z2=475.576
--> imexam frame=1
images to be examined ('im-114174.fits'):

Once you see the circular mouse cursor hovering over your ds9 window, imexam is up and running. Be careful with your window focus when running imexam; focus needs to be on the ds9 window for most of the interesting commands to work, and yet under many window managers it gets sent elsewhere regularly.

There are many useful commands under imexam, including:

command  use
a        aperture photometry around the pointed-at pixel
m        statistics of the pixels in a rectangular region around the pointed-at pixel
z        print the 10x10 grid of pixel values around the pixel
h        plot a histogram of pixel values around the pixel
r        plot a radial profile around the pixel

Go around the image in ds9, and choose a few isolated stars. Hover over each, and hit "r" to plot the radial profile. If it looks reasonable, hit "a" to print aperture photometry (and PSF fit information) to the terminal.

After you have done this, find a few empty regions of the sky, and hit "m" over each to get pixel statistics. In the end, your output might look something like this:

--> imexam frame=1
images to be examined ('im-114174.fits'):
r#   COL    LINE    COORDINATES
#     R    MAG    FLUX     SKY    PEAK    E   PA BETA ENCLOSED   MOFFAT DIRECT
 760.20 1220.11 760.20 1220.11
   5.70  16.95   1660.   417.2   333.7 0.79  -30 3.85     1.77     2.01   1.90
-->
-->
-->
--> imexam frame=1
images to be examined ('im-114174.fits'):
#   COL    LINE    COORDINATES
#     R    MAG    FLUX     SKY    PEAK    E   PA BETA ENCLOSED   MOFFAT DIRECT
 760.20 1220.11 760.20 1220.11
   5.70  16.95   1660.   417.2   333.7 0.79  -30 3.85     1.77     2.01   1.90
1341.58 1138.22 1341.58 1138.22
   6.08  16.69   2103.   413.8   319.9 0.20   42 4.68     1.95     2.16   2.03
 781.59  818.52  781.59 818.52
   6.47  16.71   2069.   417.5   440.7 0.17  -59 3.10     2.08     1.79   2.16
 465.13 1662.21 465.13 1662.21
   6.45  15.04   9627.   417.5   1474. 0.07  -31 10.6     2.04     2.24   2.15
 566.92  382.47  566.92 382.47
   6.11  14.54  15301.   418.6   2869. 0.08    3 1.45     1.97     1.80   2.04
1894.49  319.20 1894.49 319.20
   7.37  16.21   3279.   416.9   427.8 0.46  -46 4.56     2.15     2.58   2.46
1840.48 1660.90 1840.48 1660.90
   7.71  15.64   5531.   412.9   609.1 0.12  -87 10.9     2.45     2.70   2.57
#            SECTION     NPIX     MEAN   MEDIAN   STDDEV      MIN      MAX
[1247:1256,1021:1030]      100    415.1    415.6    9.659    392.5    435.5
 [383:392,1613:1622]      100    416.6     417.    8.678     394.    438.8
   [640:649,324:333]      100    420.3    420.3    10.43    392.8    451.2
 [1341:1350,397:406]      100    415.6     415.    10.57    389.2     446.
[1735:1744,1726:1735]      100    415.4     416.    9.683    391.5    433.8
-->

From this, a rough guess at our sky is 417 with a standard deviation of 10, and a PSF full width at half max (FWHM) of about 2.
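The same kind of sky estimate can be made programmatically with an iterative sigma clip; a minimal sketch on synthetic data (the numbers below stand in for the real frame):

```python
import numpy as np

def clipped_stats(pixels, nsigma=3.0, iters=3):
    """Iteratively reject pixels more than nsigma standard deviations
    from the mean, then return the mean and std of what survives."""
    data = np.asarray(pixels, dtype=float).ravel()
    for _ in range(iters):
        mean, std = data.mean(), data.std()
        data = data[np.abs(data - mean) < nsigma * std]
    return data.mean(), data.std()

# Fake sky: Gaussian background plus a few bright "star" pixels
rng = np.random.default_rng(0)
sky = rng.normal(417.0, 10.0, size=10000)
sky[:20] += 5000.0
mean, std = clipped_stats(sky)
```

The clipping is what keeps the occasional star or cosmic ray in the sampled region from biasing the estimate, much as choosing empty regions by eye did above.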

9 Pixels with bad values

As an examination of the image with ds9 will make clear, there are a number of pixels in the final corrected image with bad values. Sometimes, these bad pixels are due to defects in the CCD or obscuration of the CCD by other elements in the detector. Examples of this in our sample are column 735, and the rectangular regions at the top and bottom of the image.

Defects in the CCD can be picked out by hand, and are relatively stable (although new ones may appear over time), so bad pixel masks for them are often discovered and recorded manually.

The other major cause of bad pixel values is cosmic rays. While bad pixels due to CCD defects persist with time, each image has a separate set of cosmic ray pixels. When confronted with a single exposure, the cosmic rays must be distinguished from astronomical objects morphologically. Images taken through the telescope are blurred slightly by the atmosphere and the optics of the telescope, and therefore do not have sharp edges. Cosmic rays, on the other hand, do have sharp edges.

For example, see this image from our sample exposure:

crexample.jpeg

Compare the two bright pixels on the left (at column 870, row 1854 of the image) to the fuzzy blob on the right (at column 978, row 1894 of the image). The latter is a star, and has the appearance of a point source; any true image of the sky must be at least this blurry. The pixels on the left are clearly much sharper; they were caused by a cosmic ray.

There are several approaches to detecting cosmic rays automatically. The IRAF cosmicrays command compares each pixel to the statistics of nearby pixels. SExtractor can use a neural network approach to distinguish cosmic rays from astronomical sources.

One of the best methods for detecting bad pixels is to use multiple exposures of the same part of the sky. If these exposures are to be used to detect CCD defects as well as cosmic rays, then the exposures should be offset slightly in pointing so that a given point on the sky is seen by different pixels in each exposure. With these multiple exposures, pixels that give wildly different values from other exposures of the same part of the sky can be flagged as bad pixels. Pixels that are bad in all exposures can be flagged as CCD defects, while those that are bad in only one exposure can be considered cosmic rays. This is the approach used by the IRAF crrej task.

Once detected, the locations of bad pixels must be recorded so that further steps in processing know to ignore them. Typically, this is done either with a simple table of pixel or region boundaries, or with a mask image in which a bit marks each pixel as good or bad.
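The multi-exposure idea can be sketched directly in numpy: compare each aligned exposure to the per-pixel median of the stack, and flag large deviations. This is an illustration of the principle only, not the crrej algorithm, and the dithered-pointing bookkeeping needed to separate persistent CCD defects from cosmic rays is omitted:

```python
import numpy as np

def flag_outliers(exposures, noise, nsigma=5.0):
    """Compare each aligned exposure to the per-pixel median of the
    stack, flagging pixels that deviate by more than nsigma * noise.
    Returns a boolean mask per exposure."""
    stack = np.stack([np.asarray(e, dtype=float) for e in exposures])
    med = np.median(stack, axis=0)
    return np.abs(stack - med) > nsigma * noise

# Three fake aligned exposures: flat sky at 400 counts
exposures = [np.full((4, 4), 400.0) for _ in range(3)]
exposures[1][2, 2] = 5000.0   # a cosmic ray hit in one frame only
bad = flag_outliers(exposures, noise=10.0)
```

The hit appears in only one frame's mask, which is exactly the signature that lets it be classified as a cosmic ray rather than a defect.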

10 Astrometric calibration

10.1 The World Coordinate System (WCS)

When examining an image, we measure positions of objects by measuring the coordinates of the pixels on which their light falls. Positions on the sky, however, are directions in space; the coordinates of a sky position are the angular coordinates of a spherical coordinate system. There are several such coordinate systems in general use by astronomers, the most common of which is the equatorial coordinate system. An object's coordinates in this system are its right ascension (abbreviated R.A., RA, and also referred to as α) and declination (abbreviated Dec., and also referred to as δ), which are analogous to longitude and latitude on the Earth.

Standard FITS uses a sequence of operations to transform the x, y coordinates of a pixel to RA and Dec. These operations are outlined by this diagram:

wcsDataFlow.png

The red text shows the corresponding FITS keywords. A typical set of these looks something like this:

CTYPE1  = 'RA---TAN'
CTYPE2  = 'DEC--TAN'
CUNIT1  = 'deg     '
CUNIT2  = 'deg     '
CRPIX1  = 1.02450000000000E+03 / Column Pixel Coordinate of Ref. Pixel
CRPIX2  = 7.44500000000000E+02 / Row Pixel Coordinate of Ref. Pixel
CRVAL1  = 5.86305204900000E+01 / RA at Reference Pixel
CRVAL2  = -3.1363144000000E-01 / DEC at Reference Pixel
CD1_1   = 1.92282275464897E-08 / RA  degrees per column pixel
CD1_2   = 1.09997827820690E-04 / RA  degrees per row pixel
CD2_1   = 1.09973486328125E-04 / DEC degrees per column pixel
CD2_2   = -9.5161290322482E-09 / DEC degrees per row pixel

Note that not all transforms use all keywords; the CTYPEs given in the above example mean that the PV, LATPOLE, and LONPOLE keywords are unnecessary.
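The linear part of the transform can be applied by hand: subtract the reference pixel, apply the CD matrix to get offsets in degrees, and add them to CRVAL (dividing the RA offset by cos Dec). The sketch below uses the header values above; it is a small-angle shortcut valid only near the reference point, where a real pipeline would apply the full TAN projection:

```python
import math

# WCS keywords from the example header above
CRPIX1, CRPIX2 = 1024.50, 744.50
CRVAL1, CRVAL2 = 58.63052049, -0.31363144
CD = [[1.92282275464897e-08, 1.09997827820690e-04],
      [1.09973486328125e-04, -9.5161290322482e-09]]

def pixel_to_sky(x, y):
    """Linear WCS step: offsets from the reference pixel through the
    CD matrix give intermediate world coordinates in degrees; near the
    reference point these approximate RA/Dec offsets from CRVAL."""
    dx, dy = x - CRPIX1, y - CRPIX2
    xi = CD[0][0] * dx + CD[0][1] * dy
    eta = CD[1][0] * dx + CD[1][1] * dy
    ra = CRVAL1 + xi / math.cos(math.radians(CRVAL2))
    dec = CRVAL2 + eta
    return ra, dec

ra, dec = pixel_to_sky(CRPIX1, CRPIX2)   # the reference pixel itself
```

By construction, the reference pixel maps exactly back to (CRVAL1, CRVAL2).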

Several projects have found the simple linear transform insufficient, and have used alternate transformations which use higher order terms. For example, in addition to including the official FITS WCS transform in image headers, the SDSS supplies a FITS binary table with higher order terms. There is a proposed extension to the FITS standard that supports such terms, but it has not been officially approved.

10.2 Determining the Astrometric Transform the Hard Way

  1. The Strategy

    I determine the astrometric transformation by fitting to a table of stars for which we have both the RA, Dec and the x, y position on the image. The process consists of several steps:

    1. Obtain a list of RA and Dec coordinates of stars in our field. The observatory-supplied WCS should be good enough to make this easy.
    2. Detect star like features in our field, and measure good x, y coordinates for their centers.
    3. Match reference catalog RA and Dec. stars to detected stars.
    4. Fit transformation parameters using this data set.
    5. Store the result in the FITS header.
  2. Getting a reference catalog

    To get a catalog of reference stars and some idea of how good the astrometry info in the header is, use ds9 directly on the FITS file (not through PyRAF):

    bash$ ds9 im-114174.fits
    

    Go to the "Analysis" menu (upper right of the ds9 window), choose "catalogs", then "HST Guide Star Catalog". You should then get a table of guide stars, and their approximate coordinates should be marked on the image. The distance of these marks from the actual star images is a measure of how well the true pointing of the telescope matches where the telescope thought it was pointed.

    Save the catalog using File (in the catalog tool window), Save… and choosing a file name (e.g. HSTguidestars.rawcat). Examining this file with emacs, I see that columns 4 and 5 have the data I want, and that the first 32 lines are header and metadata. I use awk to extract just the data I am looking for:

    bash$ awk 'NR>32 {print $4, $5}' HSTguidestars.rawcat > HSTguidestars.cat
    

    To help match up these stars to those we will detect, convert these coordinates into x, y coordinates using the observatory supplied estimate of the astrometric transformation.

    Start PyRAF as shown in PyRAF and IRAF, change to the directory with the data, then:

    --> # Load the IRAF astrometry package
    --> imcoords
    --> wcsctran  HSTguidestars.cat HSTguidestars-114174.xy im-114174.fits world logical verbose-
    
  3. Finding reference stars in the data image

    We want to generate a collection of stars detected in our image, with well-measured centers.

    The starfind command in the imcoords package of IRAF searches an image for good reference stars.

    Start PyRAF as shown in PyRAF and IRAF, change to the directory with the data, then:

    --> imcoords
    -->
    --> # Ask starfind to locate point sources more than 1000 counts above the
    --> # background, assuming a PSF with a half-width half max of 1.0 and that
    --> # pixel values between 350 and 50000 are legitimate.
    --> # Why does starfind use half-width half max when everyone else uses
    --> # full width half max? Got me...
    -->      
    --> starfind im-114174.fits im-114174.starfind 1.0 1000 datamin=350 datamax=50000 roundhi=0.5
    -->
    --> # have a look
    -->
    --> display im-114174.fits 1
    z1=358.1834 z2=475.576
    --> tvmark 1 im-114174.starfind mark="point"
    --> tvmark 1 HSTguidestars-114174.xy mark="circle"
    

    I made the HWHM estimate based on the results of looking over the image with imexamine. The threshold of 1000 counts I arrived at through trial and error, aiming for a level that resulted in detecting stars bright enough to be in our reference catalog. The tvmark commands above were very helpful for this.

    You can make plots using IRAF plotting tools to examine our test stars. Load the IRAF plotting package:

    --> stsdas
    
    
          +------------------------------------------------------------+
          |       Space Telescope Science Data Analysis System         |
          |            STSDAS Version 3.8, Feb 2008                    |
          |                                                            |
          |   Space Telescope Science Institute, Baltimore, Maryland   |
          |   Copyright (C) 2003 Association of Universities for       |
          |            Research in Astronomy, Inc.(AURA)               |
          |       See stsdas$copyright.stsdas for terms of use.        |
          |         For help, send e-mail to help@stsci.edu            |
          |                                                            |
          +------------------------------------------------------------+
    stsdas/:
     analysis/      examples        hst_calib/      sobsolete/
     contrib/       fitsio/         playpen/        toolbox/
     describe       graphics/       problems
    --> graphics
    --> stplot
    

    Now, actually make some plots:

    --> histogram im-114174.starfind colname=5 xlabel=HWHM logplot-
    --> sgraph "im-114174.starfind 1 5" point+ xlabel=x ylabel=HWHM
    --> sgraph "im-114174.starfind 2 5" point+ xlabel=y ylabel=HWHM
    --> sgraph "im-114174.starfind 1 6" point+ xlabel=x ylabel=roundness
    --> sgraph "im-114174.starfind 2 6" point+ xlabel=y ylabel=roundness
    

    Spurious detections from bad pixels are clearly visible in these plots.

    The reference file, im-114174.starfind, is just a text file, so you can use almost anything to plot it. This example uses R:

    > colnames = c("x","y","mag","area","hwhm","roundness","pa","sharpness","label")
    > stars <- read.table("im-114174.starfind",row.names=9,col.names=colnames)
    > attach(stars)
    > r <- sqrt((x-1024)^2+(y-1024)^2)
    > fwhm <- hwhm * 2.0
    > plot(r,fwhm)
    > angle <- atan2(y-1024,x-1024)
    > plot(angle,roundness)
    > plot(angle,pa)
    >
    > # Ignore points at the top and bottom,
    > # most of which are in the bad sections.
    > ok <- which(y<1940 & y > 120)
    > plot(r[ok],fwhm[ok])
    > plot(angle[ok],roundness[ok])
    > plot(angle[ok],pa[ok])
    

    These plots show an oddity in the PT data: the roundness of the PSF depends strongly on the position on the CCD.

  4. Matching the found stars with the reference catalog

    We need to establish a few starting match points by hand. Going back to the ds9 image with the catalog superposed, we can see a number of matches. Hover the mouse over the central pixel of a matching star, and record the x,y. Then, read the ra, dec from the circle near it. These are a few I get:

    ref x ref y ra dec starfind x starfind y  
    878.80963 1133.539 199.07577 +17.65164 866.344 1131.263  
    1320.5986 852.7561 199.16994 +17.79276 1301.906 849.755  
    1258.9945 1207.396 199.05097 +17.77308 1243.331 1201.481  

    Now get the exact centroids of these by finding the corresponding entries in im-114174.starfind and HSTguidestars.cat. We need to enter these into a text file, which I called anchors-114174.obj:

    199.07577 17.65164 199.16994 17.79276 199.05097 17.77308
    866.344 1131.263 1301.906 849.755 1243.331 1201.481
    

    Now, run ccxymatch in IRAF. (If the imcoords package is not loaded already, we need to load it.)

    --> ccxymatch im-114174.starfind HSTguidestars.cat matched-114174.obj 3 3 lngunits="degrees" refpoints="anchors-114174.obj" matching="tolerance"
    
    Input: im-114174.starfind  Reference: HSTguidestars.cat  Number of tie points: 3
        tie point:   1  ref:  -129.689  -111.184  input:   866.344  1131.263
        tie point:   2  ref:   193.210   396.865  input:  1301.906   849.755
        tie point:   3  ref:  -214.621   326.023  input:  1243.331  1201.481
    
    Initial linear transformation
          xi[tie] =   1191.556 + -0.0090379 * x[tie] +  -1.161017 * y[tie]
         eta[tie] =  -1108.135 +   1.161235 * x[tie] + -0.0080239 * y[tie]
        dx: 1191.56 dy: -1108.14 xmag: 1.161 ymag: 1.161 xrot: 90.4 yrot: 90.4
    
    45 reference coordinates matched
    -->
    
  5. Generate the astrometric solution from the matched pairs

    Now we can do the fit for an astrometric solution:

    --> ccmap matched-114174.obj ccmap-114174.db image=im-114174.fits xcolumn=3 ycolumn=4 lngcolumn=1 latcolumn=2 lngunits="degrees" results=STDOUT interactive-
    
  6. Add the solution to the header

    Finally, we add the solution to the header. I want to preserve the original file, so I make a copy of the image and update that:

    --> imcopy im-114174.fits im-114174-ac.fits
    im-114174.fits -> im-114174-ac.fits
    --> ccsetwcs im-114174-ac.fits ccmap-114174.db im-114174-ac.fits
    Image: im-114174-ac.fits  Database: ccmap-114174.db  Solution: im-114174-ac.fits
    Coordinate mapping parameters
        Sky projection geometry: tan
        Reference point: 199:05:13.16  17:40:36.64  (degrees   degrees)
        Ra/Dec logical image axes: 1  2
        Reference point: 943.983  1097.784  (pixels  pixels)
        X and Y scale: 1.159  1.159  (arcsec/pixel  arcsec/pixel)
        X and Y coordinate rotation: 269.551  269.585  (degrees  degrees)
    Updating image header wcs
    -->
    
  7. Spot check the solution

    We can check the image header for the WCS keywords:

    bash$ hexdump -e '80/1 "%_p" "\n"' im-114174-ac.fits | \
    > egrep 'CRPIX|CRVAL|CTYPE|CD[12]_[12]'
    CRPIX1  =     943.982818113944 / Center pixel coord
    CRPIX2  =     1097.78439775546 / Center pxl coord
    CRVAL1  =     199.086990259878 / RA in degrees
    CRVAL2  =     17.6768432926617 / Dec in deg.
    CTYPE1  = 'RA---TAN'           / coord type
    CTYPE2  = 'DEC--TAN'           / coord type
    CD1_1   =  -2.5250312486836E-6
    CD1_2   =  -3.2181097362216E-4
    CD2_1   =  3.21946311440172E-4
    CD2_2   =  -2.3296183854133E-6
    bash$
    

    These keywords provide the coordinates of a reference pixel (CRPIX, CRVAL) and the derivatives of each celestial coordinate with respect to each CCD coordinate, in degrees per pixel.
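    As a sanity check, the 1.159 arcsec/pixel scale reported by ccsetwcs can be recovered from these CD values: each column of the CD matrix gives the sky displacement, in degrees, per pixel step along one CCD axis. A short sketch:

```python
import math

# CD matrix values from the header dump above, in degrees per pixel
CD1_1, CD1_2 = -2.5250312486836e-6, -3.2181097362216e-4
CD2_1, CD2_2 = 3.21946311440172e-4, -2.3296183854133e-6

# Sky displacement per pixel step along each CCD axis, in arcseconds
scale_x = 3600.0 * math.hypot(CD1_1, CD2_1)
scale_y = 3600.0 * math.hypot(CD1_2, CD2_2)
```

    Both scales come out close to the 1.159 arcsec/pixel printed by ccsetwcs.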

    Load the image in ds9 (outside of PyRAF), and load the HST guide star catalog again. The marks should land on the reference stars.

    We can see that the points near the corners do not fit well. This is because the PT is badly distorted, unusually so for a professional instrument, while the standard FITS WCS includes only linear terms.

10.3 Determining the Astrometric Transform the Easy Way

There are several online tools that can accept a FITS image with an approximate WCS coordinate transform (such as that generated during observation) and refine it automatically. Examples include:

All of these seem to fail regularly for PT images, however.

11 Object detection using SExtractor

Create a new SExtractor configuration file with default parameters:

bash$ setup sextractor
bash$ sex -d > config-114174.sex

Edit config-114174.sex. Much of what goes here can be guessed from the FITS header:

bash$ hexdump -e '80/1 "%_p" "\n"' im-114174-ac.fits | egrep '(SATUR|GAIN)'
GAIN0   = 4.89000000000000E+00 / Gain for amp 0
GAIN1   = 4.44000000000000E+00 / Gain for amp 1
SATURAT0= 6.24440000000000E+04 / Full well level for amp 0 (counts)
SATURAT1= 6.41490000000000E+04 / Full well level for amp 1 (counts)

If we were doing this properly, we would split the file into two, one for each amplifier, and run SExtractor twice, once on each half, using the values appropriate for each. The differences are small, so we will push ahead.

After editing the configuration file, check our changes:

bash$ diff <(sex -d) config-114174.sex
7,8c7,8
< CATALOG_NAME     test.cat       # name of the output catalog
< CATALOG_TYPE     ASCII_HEAD     # NONE,ASCII,ASCII_HEAD, ASCII_SKYCAT,
---
> CATALOG_NAME     cat-114174.fits # name of the output catalog
> CATALOG_TYPE     FITS_LDAC      # NONE,ASCII,ASCII_HEAD, ASCII_SKYCAT,
10c10
< PARAMETERS_NAME  default.param  # name of the file containing catalog contents
---
> PARAMETERS_NAME  sex-114174.param # name of the file containing catalog contents
20c20
< FILTER_NAME      default.conv   # name of the file containing the filter
---
> FILTER_NAME      gauss_2.0_5x5.conv   # name of the file containing the filter
38c38
< SATUR_LEVEL      50000.0        # level (in ADUs) at which arises saturation
---
> SATUR_LEVEL      60000.0        # level (in ADUs) at which arises saturation
42,43c42,43
< GAIN             0.0            # detector gain in e-/ADU
< PIXEL_SCALE      1.0            # size of pixel in arcsec (0=use FITS WCS info)
---
> GAIN             4.5            # detector gain in e-/ADU
> PIXEL_SCALE      0                    # size of pixel in arcsec (0=use FITS WCS info)
47c47
< SEEING_FWHM      1.2            # stellar FWHM in arcsec
---
> SEEING_FWHM      2.0            # stellar FWHM in arcsec
bash$

We need to supply a list of parameters to measure for each detected object. Start with the default:

bash$ cp /sdss/ups/prd/sextractor/v2_5_0/Linux/config/default.param sex-114174.param

I then edit sex-114174.param to include the columns we want:

bash$ egrep '^[^#]' sex-114174.param
NUMBER
FLUX_APER(1)
FLUXERR_APER(1)
MAG_APER(1)
MAGERR_APER(1)
FLUX_AUTO
FLUXERR_AUTO
MAG_AUTO
MAGERR_AUTO
FLUX_BEST
FLUXERR_BEST
MAG_BEST
MAGERR_BEST
BACKGROUND
X_IMAGE
Y_IMAGE
ALPHA_J2000
DELTA_J2000
ELONGATION
ELLIPTICITY
FWHM_IMAGE
FWHM_WORLD
FLAGS
CLASS_STAR

We also need to supply a filter for matched filter detection. The optimal matched filter is an image of a typical object near the detection limit. After looking at the image in ds9, I decide to use one from the default library: the Gaussian with a FWHM of 2.0 pixels:

bash$ cp /sdss/ups/prd/sextractor/v2_5_0/Linux/config/gauss_2.0_5x5.conv .
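A kernel like gauss_2.0_5x5.conv can also be generated directly: a 5x5 grid sampling a circular Gaussian with a FWHM of 2.0 pixels. (The shipped SExtractor file may be normalized differently; this is an illustrative sketch.)

```python
import math

def gaussian_kernel(fwhm, size):
    """Sample a circular Gaussian with the given FWHM (in pixels) on a
    size x size grid centered on the middle pixel, normalized to sum 1."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    c = (size - 1) / 2.0
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

kernel = gaussian_kernel(2.0, 5)
```

Convolving the image with such a kernel before thresholding is what "matched filter detection" means: it maximizes the signal-to-noise of objects whose profile matches the kernel.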

Obtain the neural net configuration used for star-galaxy separation:

bash$ cp /sdss/ups/prd/sextractor/v2_5_0/Linux/config/default.nnw .

Finally, run SExtractor:

bash$ sex im-114174-ac.fits -c config-114174.sex
----- SExtractor 2.5.0 started on 2009-01-16 at 15:20:55 with 1 thread

Measuring from: "GCS:NGC5053"  / 2048 x 2048 / 32 bits FLOATING POINT data
(M+D) Background: 416.591    RMS: 9.48548    / Threshold: 14.2282
Objects: detected 1149     / sextracted 1057
> All done (in 1 s)

Note that while the list of things SExtractor can measure can seem a bit intimidating, it is in fact fairly limited. It is much smaller, for example, than that provided by SDSS's photo.

12 Examining the object catalog

There are many ways to look at the resulting analysis table. One of the most convenient to use interactively is fv:

bash$ setup fv
bash$ fv cat-114174.fits

This will present you with a GUI for exploring the FITS file. To see the contents of the catalog, choose the "All" button (under "View") on the "LDAC_OBJECTS" row.

This then gives you a window with the values in the table. You can also use the features under the "File" menu to save the contents in a text file (CSV, constant field width, or delimited with some other character). The "Tools" menu lets you plot values, make histograms, and do other simple table operations.

You can plot the objects found by SExtractor over the image itself using R:

> require(FITSio)
Loading required package: FITSio
[1] TRUE
> catalog <- readFrameFromFITS("cat-114174.fits",2)
> attach(catalog)
> im <- readFITS("im-114174.fits")
> image(c(1:2048),c(1:2048),im$imDat,zlim=c(350,500),col=gray.colors(256))
> points(X_IMAGE,Y_IMAGE)

You should also plot some of the measured parameters against each other:

> hist(FWHM_IMAGE, breaks=500)
> plot(X_IMAGE,FWHM_IMAGE)
> plot(Y_IMAGE,FWHM_IMAGE)
> r <- sqrt((X_IMAGE-1024)^2+(Y_IMAGE-1024)^2)
> angle <- atan2(Y_IMAGE-1024,X_IMAGE-1024)
> plot(r,FWHM_IMAGE)
> plot(angle,FWHM_IMAGE)
> hist(FWHM_IMAGE, breaks=500, xlim=c(0,10))
> big <- which(FWHM_IMAGE > 6)
> image(c(1:2048),c(1:2048),im$imDat,zlim=c(350,500),col=gray.colors(256))
> points(X_IMAGE[big],Y_IMAGE[big])

You can also do useful things with ftools, for example, this command extracts the columns with x and y centers of objects into an ASCII text file:

bash$ heainit
bash$ fdump cat-114174.fits[2] cat-114174.xy "X_IMAGE, Y_IMAGE" - prhead- showcol- showrow- showunit-

The "heainit" command runs an ftools script that sets up environment variables for other ftools commands.

The 2 in square brackets tells fdump that you want the table in HDU 2.

You can use this x,y table to plot positions in PyRAF:

--> display im-114174-ac.fits 1
z1=358.1834 z2=475.576
--> tvmark 1 cat-114174.xy

13 Photometric calibration

13.1 Magnitudes and the photometric equation

Astronomers usually measure the brightness of objects using magnitudes. The difference in magnitudes between two objects is defined to be \[ m_1 - m_2 = -2.5 \log \frac{f_1}{f_2} \]

Historically, the star Vega has been defined to have a magnitude of 0, providing an absolute relation between flux in physical units and magnitudes.

This system was designed by Pogson so that magnitudes derived from physical units roughly matched up with the naked-eye brightness classes used by the ancient Greeks.

Note that the higher the magnitude of an object, the fainter it is. Also note that 5 magnitudes corresponds to a factor of 100 in flux.
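Both facts follow directly from the definition; a one-line check:

```python
import math

def delta_mag(f1, f2):
    """Magnitude difference of two objects: m1 - m2 = -2.5 log10(f1/f2)."""
    return -2.5 * math.log10(f1 / f2)

dm = delta_mag(100.0, 1.0)   # a factor of 100 in flux is 5 magnitudes
```

Since the sign is negative, the brighter object (larger flux) has the smaller magnitude.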

If the number of counts measured for an object is proportional to the physical flux from that object (as astronomical CCDs are designed to be), then we can relate the counts detected from an object in an image to its magnitude using this equation: \[ m = m_0 - 2.5 \log(f) \] where m0 is the photometric zero point for the image.

(This equation only holds for the "natural" photometric system of the instrument and filter used. If one is attempting to measure the magnitude in some standard system, one must take into account differences in the wavelength sensitivity between the natural system and the reference system, which introduces color terms into the above equation.)

Photometric calibration consists of measuring m0 for the image in question. This can be done by measuring the flux for several sources with known magnitudes in the reference system, and solving for m0 in each case. Typically, one measures the photometric calibration using careful observations of non-variable sources with well known magnitudes in the reference system, and then derives m0 for the observation of the target using m0 as measured from the reference image and differences in exposure time. In observations from the ground, one must also correct for differences based on the airmass of the observation, or the amount of atmosphere the light from the target passes through before it reaches the telescope.
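The relation between counts, zero point, and magnitude is easy to apply in either direction; a sketch with hypothetical numbers (the counts and magnitudes below are made up, not from our data):

```python
import math

def counts_to_mag(counts, m0):
    """Magnitude from detected counts, given the image zero point m0:
    m = m0 - 2.5 log10(counts)."""
    return m0 - 2.5 * math.log10(counts)

def solve_zero_point(counts, known_mag):
    """Invert the same relation for a source of known magnitude."""
    return known_mag + 2.5 * math.log10(counts)

m0 = solve_zero_point(10000.0, 14.78)   # hypothetical standard star
mag = counts_to_mag(1000.0, m0)         # magnitude of a fainter source
```

In practice one averages the zero points from many standard stars rather than trusting a single one.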

13.2 Photometric calibration of the sample image

In our example, we will use a much simpler approach, and choose a number of sources in our data image as references.

bash$ heainit
bash$ fdump cat-114174.fits[2] cat-114174.dat \
> "ALPHA_J2000, DELTA_J2000, MAG_BEST, MAGERR_BEST" - prhead- showrow- showunit-

Fiddle the table headings and add an index:

bash$ awk 'BEGIN {i=0; print "name","ra","dec"}
> NR > 3 && $4 < 0.1 {i++;print i, $1, $2}' \
> cat-114174.dat > cat-114174-cashead.dat

The NR condition removes the header generated by ftools, and the $4 condition removes objects with high uncertainty in the SExtractor measured magnitude.

Upload cat-114174-cashead.dat into CAS cross-id, and use the following query:

SELECT 
   p.objID, p.ra, p.dec, p.modelMag_r 
FROM #x x, #upload u, PhotoTag p
WHERE u.up_id = x.up_id 
  AND x.objID=p.objID 
  AND p.modelMag_r > 15
ORDER BY x.up_id

Select CSV as the output format.

Note that we have excluded objects brighter than mag 15, as these are too bright for the SDSS to measure reliably.

Submit and save the result in sdssmatch-114174.csv.

Replace the output comma delimiters with tabs:

bash$ sed "s/,/\t/g" sdssmatch-114174.csv > sdssmatch-114174.tsv

Edit sdssmatch-114174.tsv to remove headings.

Get the SExtractor magnitudes:

bash$ awk 'BEGIN {i=0}
> NR > 3 && $4 < 0.1 {i++;printf "%d\t%f\t%f\t%f\n", i, $1, $2, $3}' \
> cat-114174.dat > instmags-114174.dat

Note that we have been sure to use the same criteria as for the upload list, so we have the same objects in the same order.

Combine with the SDSS catalog output:

bash$ join instmags-114174.dat sdssmatch-114174.tsv | \
> awk '{print $8, ($2-$6)*3600.0, ($3-$7)*3600.0, $8-$4}' | \
> awk '$2*$2<1 && $3*$3<1' > sdssmatchjoin-114174.dat

The join matches columns from SDSS to columns from SExtractor; the first awk generates columns with the SDSS magnitude, RA offset in arcseconds, Dec offset in arcseconds, and the resulting zero point for this object; the second awk keeps only matches within an arcsecond in each coordinate.

Plot in R:

> colnames=c("mag","dra","ddec","zp")
> zptable <- read.table("sdssmatchjoin-114174.dat",col.names=colnames)
> attach(zptable)
> plot(mag,zp)
> # there are clearly some outliers at the faint end... remove them
> ok <- which ((zp < 30) & (zp > 20))
> plot(mag[ok],zp[ok])
> # there are still some outliers, but they don't corrupt
> # the scaling too much 
> hist(zp[ok])
> hist(zp[ok],breaks=200,xlim=c(24.5,25.0))

From these plots, it looks like the photometric zero point is about 24.78. This zero point is obviously only good to within maybe 0.05 magnitudes. If we did this more carefully with proper standards, we could do much, much better.
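The zero point estimate made with the R plots above can also be computed non-interactively; a sketch using the median of the per-object zero points, with the same (20, 30) window used above to cut gross outliers, applied here to made-up matched pairs:

```python
import statistics

def zero_point(catalog_mags, instrumental_mags, lo=20.0, hi=30.0):
    """Median of the per-object zero points m_catalog - m_instrumental,
    after cutting gross outliers with the (lo, hi) window. The median
    is robust against the remaining outliers."""
    zps = [c - i for c, i in zip(catalog_mags, instrumental_mags)]
    zps = [z for z in zps if lo < z < hi]
    return statistics.median(zps)

# Hypothetical matched pairs: true zero point 24.78, one bad match
cat = [16.0, 17.5, 18.2, 19.0]
inst = [16.0 - 24.78, 17.5 - 24.78, 18.2 - 24.78, 19.0 - 10.0]
zp = zero_point(cat, inst)
```

The corrupted match falls outside the window and drops out, leaving the true zero point.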

14 Obtaining colors

14.1 Reduce the g exposure

  1. Image correction

    To obtain colors for our objects, we must repeat the above procedure for another exposure in a different filter. The g exposure in our sample sequence has PT exposure id 114173. A sloppy image correction of this exposure can be done using this python script, which I call reduce_g.py:

    import pyfits
    from numpy import *
    
    # Generate the bias.
    # We could have reused the saved bias from the r exposure
    biasData = array([pyfits.getdata("mdR-%08d.fits" % expId) for expId in range(114031,114042)])
    bias = median(biasData)
    
    # Load the data exposure and header
    dataHdr=pyfits.getheader("mdR-00114174.fits")
    rawImage = pyfits.getdata("mdR-00114173.fits")
    
    # Apply the bias
    dbImage = rawImage - bias
    
    # Use the overscan regions to refine the bias
    # It would be better if I reread the FITS header to
    # verify that it hasn't changed, but I know that it's
    # stable
    biasOvscn0 = dbImage[21:2069,20:40]
    biasLevel0 = mean(reshape(biasOvscn0,size(biasOvscn0)))
    biasStd0 = std(reshape(biasOvscn0,size(biasOvscn0)))
    expectedStd0 = dataHdr['RDNOISE0']/dataHdr['GAIN0']
    
    biasOvscn1 = dbImage[21:2069,2088:2108]
    biasLevel1 = mean(reshape(biasOvscn1,size(biasOvscn1)))
    biasStd1 = std(reshape(biasOvscn1,size(biasOvscn1)))
    expectedStd1 = dataHdr['RDNOISE1']/dataHdr['GAIN1']
    
    # Apply the refinement of the bias from the overscan
    dbImage[:,:1064] = dbImage[:,:1064] - biasLevel0
    dbImage[:,1064:] = dbImage[:,1064:] - biasLevel1
    
    # Build a flat field. This will be different than the
    # r flat field
    flatList = []
    for expId in (114043,114048,114054,114059):
      flatRaw = pyfits.getdata("mdR-%08d.fits" % expId)
      dbFlat = flatRaw - bias
      biasOvscn0 = dbFlat[21:2069,20:40]
      biasLevel0 = mean(reshape(biasOvscn0,size(biasOvscn0)))
      dbFlat[:,:1064] = dbFlat[:,:1064] - biasLevel0
      biasOvscn1 = dbFlat[21:2069,2088:2108]
      biasLevel1 = mean(reshape(biasOvscn1,size(biasOvscn1)))
      dbFlat[:,1064:] = dbFlat[:,1064:] - biasLevel1
      flatList.append(dbFlat)
    
    flat = median(array(flatList))
    dataFlat = flat[21:2069,40:2088]
    flat = flat / (mean(reshape(dataFlat,size(dataFlat))))
    
    # Apply the flat field
    ffImage = dbImage/flat
    
    ffData = ffImage[21:2069,40:2088]
    
    # Save the result
    hdu=pyfits.PrimaryHDU(ffData,dataHdr)
    for keyword in ["DATASEC0","DATASEC1","BIASSEC0","BIASSEC1"]:
      del hdu.header[keyword]
    
    hdu.header["CRPIX1"] = hdu.header["CRPIX1"] - 40
    hdu.header["CRPIX2"] = hdu.header["CRPIX2"] - 21
    hdu.header.add_comment("Bias subtracted, flat fielded, and trimmed")
    outContent = pyfits.HDUList([hdu])
    outContent.writeto("im-114173.fits")
    

    Running the script:

    bash$ PYTHONPATH=""
    bash$ setup stsci_python
    just set PYTHONVERS 2.5
    bash$ python reduce_g.py
    

    Note we can load both the previously reduced r image and the new g image into ds9 at once.

    bash$ ds9 im-114173.fits im-114174.fits
    

    Experiment with setting the scale so that both display the data well and with a similar appearance. I find setting both to "log" scaling, the g limits to 480 and 680, and the r limits to 400 and 600 works well. (You can set the scaling limits using "scale parameters" under the scale menu from the menu bar.)

    Because the two exposures were taken in quick succession without slewing or offsetting, they are well aligned. Blinking between the two exposures is particularly useful: choose "Frame" from the ds9 menu bar, then "blink frames". Note that a number of cosmic rays in each image become apparent, while the CCD defects remain the same across images.

    We use imexam to check out the results as before.

  2. Astrometric calibration

    We can reuse the same guide stars from the r image, which we saved in HSTguidestars.cat. We do need to compute their x, y coordinates using the WCS estimate in the header of exposure 114173:

    --> # Load the IRAF astrometry package
    --> imcoords
    --> wcsctran  HSTguidestars.cat HSTguidestars-114173.xy im-114173.fits world logical verbose-
    

    We find our sample stars just as we did for the r exposure:

    --> starfind im-114173.fits im-114173.starfind 1.0 600 datamin=350 datamax=50000 roundhi=0.5
    --> display im-114173.fits 1
    z1=436.1698 z2=562.5272
    --> tvmark 1 im-114173.starfind mark="point"
    --> tvmark 1 HSTguidestars-114173.xy mark="circle"
    

    Check the results using the same plotting tools as before.

    We can attempt to reuse the same anchor stars as before. The true RA and Dec of these stars will not have changed. We just need to update our table with the x, y positions from the g image:

    bash$ cp anchors-114174.obj anchors-114173.obj
    bash$ emacs anchors-114173.obj &
    [2] 16933
    bash$ awk '$1 > 864 && $1 < 868' im-114173.starfind
         866.408  1129.790     -8.99     19   1.00  0.176   90.3   0.997    106
    bash$ awk '$1 > 1300 && $1 < 1304' im-114173.starfind
        1302.885    40.311     -8.87      7   1.24  0.410  164.6   1.240     32
        1302.033   661.814    -10.34     21   1.12  0.349   95.4   1.117     75
        1301.983   848.289     -9.04     21   1.02  0.179  105.0   1.018     79
        1303.809  2033.494     -9.71     11   1.04  0.224   98.3   1.042    195
    bash$ awk '$1 > 1241 && $1 < 1248' im-114173.starfind
        1243.385  1200.049     -9.95     20   1.03  0.249   98.2   1.027    107
    bash$ diff anchors-114174.obj anchors-114173.obj
    2c2
    < 866.344 1131.263 1301.906 849.755 1243.331 1201.481
    ---
    > 866.408  1129.790 1301.983   848.289 1243.385  1200.049
    [2]+  Done                    emacs anchors-114173.obj
    bash$
    
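    Editing the anchor table by hand works for three stars, but the nearest-neighbour lookup is easy to script. A minimal sketch in Python (the helper names and the 3-pixel tolerance are inventions for illustration, not part of the pipeline):

```python
# Update anchor-star pixel positions by matching each old (x, y)
# to the nearest detection in the new starfind list.
import math

def load_xy(path):
    """Read the first two whitespace-separated columns as floats."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cols = line.split()
            rows.append((float(cols[0]), float(cols[1])))
    return rows

def nearest(point, candidates):
    """Return the candidate closest to point (Euclidean distance)."""
    return min(candidates,
               key=lambda c: math.hypot(c[0] - point[0], c[1] - point[1]))

def update_anchors(old_anchors, detections, tol=3.0):
    """Replace each old anchor with its nearest new detection,
    raising if no detection lies within tol pixels."""
    updated = []
    for x, y in old_anchors:
        nx, ny = nearest((x, y), detections)
        if math.hypot(nx - x, ny - y) > tol:
            raise ValueError("no match for anchor (%.1f, %.1f)" % (x, y))
        updated.append((nx, ny))
    return updated
```

    With the coordinates from the transcript above, each anchor moves by about 1.5 pixels, well inside the tolerance.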

    Now match the stars:

    --> ccxymatch im-114173.starfind HSTguidestars.cat matched-114173.obj 3 3 lngunits="degrees" refpoints="anchors-114173.obj" matching="tolerance"
    
    Input: im-114173.starfind  Reference: HSTguidestars.cat  Number of tie points: 3
        tie point:   1  ref:  -129.689  -111.184  input:   866.408  1129.790
        tie point:   2  ref:   193.210   396.865  input:  1301.983   848.289
        tie point:   3  ref:  -214.621   326.023  input:  1243.385  1200.049
    
    Initial linear transformation
          xi[tie] =   1189.613 + -0.0089362 * x[tie] +  -1.160888 * y[tie]
         eta[tie] =  -1108.326 +   1.161252 * x[tie] + -0.0079441 * y[tie]
        dx: 1189.61 dy: -1108.33 xmag: 1.161 ymag: 1.161 xrot: 90.4 yrot: 90.4
    
    45 reference coordinates matched
    -->
    

    Calculate the astrometric solution,

    --> imcopy im-114173.fits im-114173-ac.fits
    im-114173.fits -> im-114173-ac.fits
    --> ccmap matched-114173.obj ccmap-114173.db image=im-114173-ac.fits xcolumn=3 ycolumn=4 lngcolumn=1 latcolumn=2 lngunits="degrees" results=STDOUT interactive-
    

    and create a g data image with the newly calculated astrometric solution:

    --> ccsetwcs im-114173-ac.fits ccmap-114173.db im-114173-ac.fits
    Image: im-114173-ac.fits  Database: ccmap-114173.db  Solution: im-114173-ac.fits
    Coordinate mapping parameters
        Sky projection geometry: tan
        Reference point: 199:05:36.19  17:40:36.72  (degrees   degrees)
        Ra/Dec logical image axes: 1  2
        Reference point: 943.946  1077.352  (pixels  pixels)
        X and Y scale: 1.159  1.158  (arcsec/pixel  arcsec/pixel)
        X and Y coordinate rotation: 269.561  269.573  (degrees  degrees)
    Updating image header wcs
    -->
    
  3. Generate a table of g image coordinates from the r image SExtractor output

    SExtractor needs a list of pixel positions for cross-identification of sources.

    Get a list of ra, dec positions from the catalog generated using the r image:

    --> stsdas
    --> tables
    --> ttools
    --> tprint cat-114174.fits[2] showhdr- showrow- columns="ALPHA_J2000, DELTA_J2000" > r.radec
    

    Convert these positions to x, y coordinates on the g image using the g WCS transform, and use tvmark to check the result:

    --> wcsctran  r.radec r.gxy im-114173-ac.fits world logical
    --> display im-114173.fits 1
    z1=436.1698 z2=562.5272
    --> tvmark 1 r.gxy
    

    Remove the header from r.gxy, and add an index to our x, y table:

    bash$ emacs r.gxy
    bash$ cat -n r.gxy > r.gixy
    
  4. Run SExtractor on the g image

    Create an SExtractor config file appropriate for the g image:

    bash$ diff <(sex -d) config-114173.sex
    7,8c7,8
    < CATALOG_NAME     test.cat       # name of the output catalog
    < CATALOG_TYPE     ASCII_HEAD     # NONE,ASCII,ASCII_HEAD, ASCII_SKYCAT,
    ---
    > CATALOG_NAME     cat-114173.fits # name of the output catalog
    > CATALOG_TYPE     FITS_LDAC       # NONE,ASCII,ASCII_HEAD, ASCII_SKYCAT,
    10c10
    < PARAMETERS_NAME  default.param  # name of the file containing catalog contents
    ---
    > PARAMETERS_NAME  sex-114173.param # name of the file containing catalog contents
    20c20
    < FILTER_NAME      default.conv   # name of the file containing the filter
    ---
    > FILTER_NAME      gauss_2.0_5x5.conv   # name of the file containing the filter
    38c38
    < SATUR_LEVEL      50000.0        # level (in ADUs) at which arises saturation
    ---
    > SATUR_LEVEL      60000.0        # level (in ADUs) at which arises saturation
    42,43c42,43
    < GAIN             0.0            # detector gain in e-/ADU
    < PIXEL_SCALE      1.0            # size of pixel in arcsec (0=use FITS WCS info)
    ---
    > GAIN             4.5            # detector gain in e-/ADU
    > PIXEL_SCALE      0                    # size of pixel in arcsec (0=use FITS WCS info)
    47c47
    < SEEING_FWHM      1.2            # stellar FWHM in arcsec
    ---
    > SEEING_FWHM      2.0            # stellar FWHM in arcsec
    75a76,80
    >
    > ASSOC_NAME       r.gixy
    > ASSOC_PARAMS     2,3
    > ASSOC_TYPE       NEAREST
    > ASSOC_DATA       1
    
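    The ASSOC_* settings above tell SExtractor to attach, to each detection, the data column (here the index) of the nearest entry in r.gixy. The matching amounts to something like this sketch (a toy re-implementation, not SExtractor's code; the 2-pixel radius is a stand-in for the ASSOC_RADIUS parameter):

```python
# Sketch of SExtractor's ASSOC_TYPE NEAREST matching: for each
# detection, find the nearest list entry within assoc_radius pixels
# and carry its ASSOC_DATA column (here the index) into the catalog.
import math

def assoc_nearest(detections, assoc_list, assoc_radius=2.0):
    """detections: [(x, y)]; assoc_list: [(index, x, y)].
    Returns [(x, y, index)] for detections with a match."""
    matched = []
    for dx, dy in detections:
        best = None
        best_dist = assoc_radius
        for idx, ax, ay in assoc_list:
            d = math.hypot(ax - dx, ay - dy)
            if d <= best_dist:
                best, best_dist = idx, d
        if best is not None:
            matched.append((dx, dy, best))
    return matched
```

    Detections with no list entry inside the radius are dropped from the output catalog, which is exactly how the g catalog ends up restricted to objects already measured in r.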

    and also a sex-114173.param:

    bash$ egrep '^[^#]' sex-114173.param
    NUMBER
    FLUX_APER(1)
    FLUXERR_APER(1)
    MAG_APER(1)
    MAGERR_APER(1)
    FLUX_AUTO
    FLUXERR_AUTO
    MAG_AUTO
    MAGERR_AUTO
    FLUX_BEST
    FLUXERR_BEST
    MAG_BEST
    MAGERR_BEST
    BACKGROUND
    X_IMAGE
    Y_IMAGE
    ALPHA_J2000
    DELTA_J2000
    ELONGATION
    ELLIPTICITY
    FWHM_IMAGE
    FWHM_WORLD
    FLAGS
    CLASS_STAR
    VECTOR_ASSOC
    NUMBER_ASSOC
    bash$
    

    Run SExtractor, and look at the results:

    bash$ sex im-114173-ac.fits -c config-114173.sex
    ----- SExtractor 2.5.0 started on 2009-01-16 at 16:23:37 with 1 thread
    
    Measuring from: "GCS:NGC5053"  / 2048 x 2048 / 32 bits FLOATING POINT data
    (M+D) Background: 499.725    RMS: 10.4751    / Threshold: 15.7126
    Objects: detected 1079     / sextracted 504
    > All done (in 1 s)
    bash$ fv cat-114173.fits
    
  5. Photometric calibration of the g image

    We proceed as before, starting by extracting the needed contents of the SExtractor catalog:

    bash$ heainit
    bash$ fdump cat-114173.fits[2] cat-114173.dat \
    > "ALPHA_J2000, DELTA_J2000, MAG_BEST, MAGERR_BEST" - prhead- showrow- showunit-
    

    Fiddle the table headings and add an index:

    bash$ awk 'BEGIN {i=0; print "name","ra","dec"}
    > NR > 3 && $4 < 0.1 {i++;print i, $1, $2}' \
    > cat-114173.dat > cat-114173-cashead.dat
    

    Upload cat-114173-cashead.dat into CAS cross-id, and use the following query:

    SELECT 
       p.objID, p.ra, p.dec, p.modelMag_g 
    FROM #x x, #upload u, PhotoTag p
    WHERE u.up_id = x.up_id 
      AND x.objID=p.objID 
      AND p.modelMag_g > 15
    ORDER BY x.up_id
    

    Select CSV as the output format, and save the result to the file sdssmatch-114173.csv.

    Replace the output comma delimiters with tabs:

    bash$ sed "s/,/\t/g" sdssmatch-114173.csv > sdssmatch-114173.tsv
    

    Edit sdssmatch-114173.tsv to remove headings.

    Get the SExtractor magnitudes:

    bash$ awk 'BEGIN {i=0}
    > NR > 3 && $4 < 0.1 {i++;printf "%d\t%f\t%f\t%f\n", i, $1, $2, $3}' \
    > cat-114173.dat > instmags-114173.dat
    

    Note that we have been sure to use the same criteria as for the upload list, so we have the same objects in the same order.

    Combine with the SDSS catalog output:

    bash$ join instmags-114173.dat sdssmatch-114173.tsv | \
    > awk '{print $8, ($2-$6)*3600.0, ($3-$7)*3600.0, $8-$4}' | \
    > awk '$2*$2<1 && $3*$3<1' > sdssmatchjoin-114173.dat
    

    The join matches rows from the SDSS output to rows from the SExtractor output. The first awk emits the SDSS magnitude, the RA offset in arcseconds, the Dec offset in arcseconds, and the zero point implied by this object; the second awk keeps only pairs whose positions agree to within an arcsecond in each coordinate.
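    The zero point arithmetic here (catalog magnitude minus instrumental magnitude, summarized robustly) can also be sketched directly in Python (hypothetical helper names; the magnitudes in the usage example are made up):

```python
# Zero point estimation: zp = catalog magnitude - instrumental
# magnitude, per matched star; a robust summary is the median.
def zero_points(inst_mags, cat_mags):
    """Per-object zero points for paired instrumental/catalog mags."""
    return [cat - inst for inst, cat in zip(inst_mags, cat_mags)]

def median(values):
    """Median of a non-empty sequence of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])
```

    The median is preferable to the mean here because mismatches and variable stars produce heavy outliers, as the histograms below show.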

    Plot in R

    > colnames=c("mag","dra","ddec","zp")
    > zptable <- read.table("sdssmatchjoin-114173.dat",col.names=colnames)
    > attach(zptable)
    > plot(mag,zp)
    > # again there is an outlier on the faint end. Remove it
    > ok <- which ((zp < 30) & (zp > 20))
    > plot(mag[ok],zp[ok])
    > # there are still some outliers, but they don't corrupt
    > # the scaling too much 
    > hist(zp[ok])
    > hist(zp[ok],breaks=200,xlim=c(24.5,25.5))
    

    From these plots, it looks like the photometric zero point is also about 24.82. It is coincidental (and partially because our estimate is so rough) that it is so close to the r zero point.

14.2 Matching magnitudes from each color

bash$ fdump cat-114174.fits[2] mags-114174-plain.dat "MAG_BEST" - prhead- showrow- showunit-
bash$ fdump cat-114173.fits[2] mags-114173-findex.dat "VECTOR_ASSOC, MAG_BEST" - prhead- showrow- showunit-

Edit these to remove column headings and leading spaces, then make sure each is indexed by integers:

bash$ cat -n mags-114174-plain.dat > mags-114174.dat
bash$ awk '{printf "%d\t%f\n", $1, $2}' mags-114173-findex.dat > mags-114173.dat

Join the two:

bash$ join mags-114173.dat mags-114174.dat > mags_g_r.dat

Examine in R:

> colnames=c("index","g","r")
> mags <- read.table("mags_g_r.dat",col.names=colnames)
> attach(mags)
> gmag <- g+24.82
> rmag <- r+24.78
> hist(rmag,breaks=20)
> hist(gmag,breaks=20)

Try replicating the color-magnitude diagram from this paper: http://www.iop.org/EJ/article/1538-3881/116/4/1775/980164.html, transforming the colors using the B and V magnitude transforms from http://www.sdss.org/dr6/algorithms/sdssUBVRITransform.html

> bmv <- (gmag-rmag+0.23)/1.09
> vmag <- rmag + 0.19*bmv
> plot(bmv,vmag,ylim=c(19,12))

Wow, that looks awful. Not surprising, though.
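The transform is worth wrapping as a helper for reuse; this sketch just mirrors the R expressions above, using the zero points estimated earlier (the function name is an invention):

```python
# g, r instrumental magnitudes -> approximate Johnson B-V and V,
# mirroring the R transform above. Zero points are the rough
# estimates from the photometric calibration sections.
G_ZP = 24.82
R_ZP = 24.78

def to_bv_v(g_inst, r_inst):
    """Return (B-V, V) from instrumental g and r magnitudes."""
    gmag = g_inst + G_ZP
    rmag = r_inst + R_ZP
    bmv = (gmag - rmag + 0.23) / 1.09
    vmag = rmag + 0.19 * bmv
    return bmv, vmag
```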

Let's try limiting our sample to stars near the object.

Again extract ASCII from the FITS files:

bash$ fdump cat-114173.fits[2] mags-114173-findex-xys.dat "VECTOR_ASSOC, MAG_BEST, X_IMAGE, Y_IMAGE, FWHM_IMAGE" - prhead- showrow- showunit-
bash$ emacs  mags-114173-findex-xys.dat
bash$ awk '{printf "%d\t%f\t%f\t%f\t%f\n", $1, $2, $3, $4, $5}' mags-114173-findex-xys.dat > mags-114173-xys.dat
bash$ join mags-114173-xys.dat mags-114174.dat > mags_g_r_restr.dat

Now plot it up in R:

> colnames=c("index","g","x","y","fwhm","r")
> mags <- read.table("mags_g_r_restr.dat",col.names=colnames)
> attach(mags)
> hist(fwhm)
> hist(fwhm,xlim=c(0,5),breaks=500)
> stars <- which((fwhm < 2.5) & (fwhm > 2))
> gmag <- g+24.82
> rmag <- r+24.78
> bmv <- (gmag-rmag+0.23)/1.09
> vmag <- rmag + 0.19*bmv
> r <- sqrt((x-1024)^2+(y-1024)^2)
> cntrstr <- which((r < 400) & (fwhm < 2.5) & (fwhm > 2))
> plot(bmv[cntrstr],vmag[cntrstr],ylim=c(24,12),xlim=c(-0.4,1.6))

This looks more reasonable; the blue horizontal branch is clearly visible. It still doesn't look very impressive.

15 Refining the object catalogs

Once these initial object catalogs have been created, the catalogs of stars used for photometric and astrometric calibration can be filtered and limited to the best sources for the specific images in question. One can also generate models of the sky background and the point spread function, both of which will vary over the field of view. With these refined calibrations and models one can repeat the object detection and measurement for better results.
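A first cut at the spatially varying sky is often an iteratively sigma-clipped statistic computed in cells across the image. A minimal sketch of the clipping step (pure Python; the threshold and iteration count are arbitrary illustrative choices):

```python
# Iterative sigma-clipped mean: reject values more than nsigma
# standard deviations from the mean, then recompute, a few times.
# Pixels belonging to stars are rejected; the sky survives.
def clipped_mean(values, nsigma=3.0, iterations=3):
    vals = list(values)
    for _ in range(iterations):
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        std = var ** 0.5
        kept = [v for v in vals if abs(v - mean) <= nsigma * std]
        if len(kept) == len(vals) or not kept:
            break
        vals = kept
    return sum(vals) / len(vals)
```

Evaluating this in a grid of cells and interpolating between cell centers gives a smooth sky model over the field of view.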

16 Additional steps

16.1 Image registration and coaddition

For a multitude of reasons, it is useful for all exposures of a particular region of the sky to share a coordinate system. In other words, all of the images should have the same transformation to world coordinates.

Accomplishing this requires generating a new image with the desired astrometric transform, and interpolating all of the pixel values from the corresponding input image. There are several utilities for doing this, including drizzle, IRAF's imalign task, and the Terapix SWarp utility.
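At heart each of these tools does the same thing: for every pixel of the output grid, map its coordinates through the two astrometric transforms into the input image and interpolate. A toy bilinear version (pure Python; real tools use better interpolation kernels and conserve flux):

```python
# Toy image resampling: for each output pixel, map its (x, y) into
# the input image via `mapping` and bilinearly interpolate.
def resample(image, out_shape, mapping):
    """image: list of rows; mapping: (x_out, y_out) -> (x_in, y_in)."""
    ny, nx = len(image), len(image[0])
    out = []
    for yo in range(out_shape[0]):
        row = []
        for xo in range(out_shape[1]):
            xi, yi = mapping(xo, yo)
            x0, y0 = int(xi), int(yi)
            if not (0 <= x0 < nx - 1 and 0 <= y0 < ny - 1):
                row.append(0.0)       # outside the input: pad with 0
                continue
            fx, fy = xi - x0, yi - y0
            v = (image[y0][x0] * (1 - fx) * (1 - fy)
                 + image[y0][x0 + 1] * fx * (1 - fy)
                 + image[y0 + 1][x0] * (1 - fx) * fy
                 + image[y0 + 1][x0 + 1] * fx * fy)
            row.append(v)
        out.append(row)
    return out
```

In practice the mapping function is the composition of the output image's WCS with the inverse of the input image's WCS.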

Once different exposures are on the same coordinate system, several operations become possible. One can create color images by assigning images taken through different filters to the r, g, and b channels of the color image, look for variable objects by scaling and subtracting, and obtain a single, higher signal-to-noise image by adding the images together.

16.2 Object detection on multiple images

One of the most common techniques for detecting low signal objects using multiple exposures is simply to detect objects in the weighted addition of the coadded exposures. Improved techniques that can more optimally combine exposures in different filters and with different PSFs are being developed, but are not in widespread use.
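The weighted addition itself is just a per-pixel weighted mean, with weights conventionally taken as inverse variance. A sketch (pure Python lists standing in for registered images):

```python
# Per-pixel inverse-variance weighted coadd of registered exposures.
def coadd(images, variances):
    """images, variances: lists of equal-shape 2-D lists (rows of
    pixels); each pixel is weighted by 1/variance of its exposure."""
    ny, nx = len(images[0]), len(images[0][0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            wsum = sum(1.0 / v[y][x] for v in variances)
            out[y][x] = sum(img[y][x] / v[y][x]
                            for img, v in zip(images, variances)) / wsum
    return out
```

Inverse-variance weighting maximizes the signal-to-noise ratio of the coadd when the exposures measure the same flux; detection then runs on the result as on any single image.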

16.3 Exporting the catalogs

Once the final catalogs have been generated, they need to be provided to users in a useful way.

Author: Eric H. Neilsen, Jr.

Created: 2014-03-31 Mon 12:33

Emacs 24.3.1 (Org mode 8.2.5c)
