

Kanzelhöhe Photosphere Telescope (KPT)

 

Telescope

The main purpose of this instrument is high-precision full disk imaging of the photosphere in order to derive sunspot and faculae areas and positions. Other scientific objectives are studies of the underlying photosphere before, during and after chromospheric flares, hunting for the mysterious white light flares at the beginning of a new solar cycle, and providing a data set for developing an automatic determination of sunspot relative numbers by image segmentation techniques. The obtained time series will also continue the support of the Debrecen Photoheliographic Data by daily synoptic images.

The KPT is based on the Kanzelhöhe Photoheliograph (PhoKa), which went into operation in 1989 and was described in detail in Pettauer, T.: 1990, in L. Deszö (ed.), The Dynamic Sun, Publ. Debrecen Heliophysical Observatory, Debrecen, p. 62.

Unlike the old Kanzelhöhe Photoheliograph (PhoKa), which needed an enlarging eyepiece system to fit the solar disk image appropriately onto a 13x18 cm2 planar film, the chip size (15x15 mm2) of the proposed CCD camera model is even too small for the acquisition of the full solar disk in the primary focus of the superb Jena AS objective lens, which should remain in use. Therefore considerable changes of the telescope design were required in order to adapt to the relatively small detector and image sizes. Downsizing the primary focus image with an eyepiece lens would be an option, but a calculation of the optical parameters showed that this combination would need a field lens and would increase the aberrations. The alternative was to insert another positive lens within the focal distance of the objective lens L1 and reduce the effective focal length (for details see figure 1). The back side of the Jena AS front lens (air gap side) carries a gold coating with a transmission of less than 0.1% to reduce the amount of light entering the telescope and thus avoid image distortion caused by heating of the telescope. The spectral range of the solar continuum is limited by an interference filter with a FWHM of 10 nm at a wavelength of 546 nm.

Figure 1: Layout of the Kanzelhöhe Photosphere Telescope. Choosing achromats for L1 and L2 also reduces non-chromatic aberrations. The effective focal length of the complete system is about 1460 mm, which corresponds to a disk size of about 13.7 mm in diameter. The diaphragms D1...D3 for the reduction of straylight were kept from the old system. An extra neutral filter N with a transmission of 10% became necessary to avoid very short exposure times (< 2 ms) and smearing, which is specific to interline transfer CCD cameras. The lens L2, the interference filter IF, the neutral filter and the CCD camera are fixed in an inner tube which is inserted from the back end into the main telescope tube. The focus is controlled manually by shifting the lens L1 back and forth with a worm gear.

Figure 2a: A 3-D ray trace sketch of the Kanzelhöhe Photosphere Telescope visualizes the physical setup of the instrument.

Figure 2b: The whole photosphere telescope system is mounted piggy-back on the main patrol telescope cabinet.

 
 

Kanzelhöhe Photosphere Digital Camera (KPDC)


Camera System

The experience gained during several years of Hα observations showed that mastering not always perfect observing and seeing conditions might be even more important than maximum resolution. Key features for the selection of the camera model were:

  • presence of an electronic shutter, as mechanical shutters had problems with the huge number of exposures in high cadence time series
  • high frame rate for use of frame selection
  • software controllable exposure times to adapt for changing opacity of the atmosphere
  • reasonable high resolution (spatial and intensity)
  • digital read-out and progressive scanning which simplifies the data acquisition

The selected JAI Pulnix TM-4100CL features a Kodak interline CCD chip with microlenses and a built-in electronic shutter; the maximum of the spectral sensitivity is close to the observed band. The edge length of 7.4 μm of the 2k x 2k square pixels corresponds to 1.04 arcsec/pix, which is almost equal to the diffraction limit of the telescope of 1.06 arcsec at 546 nm. According to Nyquist's sampling theorem the image is undersampled, but this is a good compromise between the mostly prevailing seeing conditions and present camera technology. The shutter can be controlled by the length of an external pulse; typical exposure/integration times are in the range of 5 ms and the maximum frame rate is about 10 frames/s. The output is digitized to 10 bit; the lower level as well as the gain can be set according to the incident light level to exploit the full dynamic range but avoid non-linearity due to saturation. The dual tap CCD has the advantage of higher frame rates, but the drawback that the individual gains and lower levels of the A/D-converter for the two taps need careful tuning to achieve equally dense half-images. Details of the camera setup procedures and parameters can be found in the KPDC - Data Processing Procedures section below.

A CameraLink interface transfers the data to a Silicon Software ME3 frame grabber in an Intel based 3 GHz industrial 19" PC and allows also the control of the camera by the image acquisition software.

Figure 3: Spectral sensitivity of the Kodak KAI4021 CCD, both taken from the manufacturer's product description. The selected band is close to the maximum of the sensitivity.

Figure 4: Output voltage vs. illumination. The maximum light level for which the output is linear depends on the fixed, factory-set voltage Vsub; therefore the gain of the A/D-converter had to be adjusted to 12 dB to cover the full 10 bit range within the linear part, with a noise floor slightly above zero, i.e. of some units per pixel.

Although integration times of less than 1 ms are possible and would be favourable due to the seeing, it turned out that smearing, which is inherent to interline transfer CCDs, does not allow exposure times much shorter than 2...3 ms. It originates from spurious illumination of the interline columns (which are not perfectly masked) during read-out and shows up as bright (vertical) columns in image parts where single bright pixels reside. To attenuate the incident light level and thus achieve longer exposure times, we put a neutral filter with a transmission of 10% in front of the camera window.

Figure 5: The hump parallel to the pixel columns (y-axis) outside the solar disk is due to smearing of the interline CCD, without (left) and with (right) the extra neutral filter T = 10%. The effect is clearly damped at exposure times longer than 2...3 ms, which are typical with the filter and clear sky. However, there is a higher level of spurious light outside the solar disk - probably straylight from the filter. The intensity of the disk center is about 850 units, but the scale is stretched and cut at 200 to make the effect better visible.

The image acquisition application is written in C++ and makes use of the Common Vision Blox library. It runs under Windows XP and continuously grabs frames from the camera. Each frame is evaluated in a user defined rectangle (area-of-interest, AOI) with regard to mean pixel value and standard deviation. The mean is used to control the exposure time, keeping the brightness level of the images fairly constant; the standard deviation is a measure of the blurring, which is the dominant seeing component at exposure times of a few milliseconds that freeze the image motion. The image with the best seeing out of a consecutive number of frames is then written to harddisk; the standard format is FITS, JPEG copies are optional. The whole procedure can be repeated after a user defined interval for automatic acquisition of time series. A block diagram and further details of the software, which is also used for the Hα observations at Kanzelhöhe, can be found in Otruba, W.: 2005, Hvar Observatory Bulletin 29, 279.
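The frame selection and exposure control logic described above can be sketched in Python/NumPy. This is a minimal sketch of the algorithm only, not the actual C++/Common Vision Blox application; the function names, the AOI tuple layout and the target level of 850 counts (taken from the Figure 5 caption) are illustrative assumptions.

```python
import numpy as np

def evaluate_frame(frame, aoi):
    """Mean and standard deviation of the pixel values inside the AOI.
    aoi = (x0, y0, width, height); hypothetical helper, not the real API."""
    x0, y0, w, h = aoi
    roi = frame[y0:y0 + h, x0:x0 + w]
    return roi.mean(), roi.std()

def select_best_frame(frames, aoi):
    """Frame selection: keep the frame with the largest std. dev.
    (i.e. least blurring) within a consecutive batch of frames."""
    stats = [evaluate_frame(f, aoi) for f in frames]
    best = max(range(len(frames)), key=lambda i: stats[i][1])
    return frames[best], stats[best]

def adjust_exposure(current_exposure_ms, measured_mean, target_mean=850.0):
    """Simple proportional exposure control keeping the AOI mean near a
    target level, to adapt to the changing opacity of the atmosphere."""
    if measured_mean <= 0:
        return current_exposure_ms
    return current_exposure_ms * target_mean / measured_mean
```

In the real system this loop runs continuously on the grabbed frames; here it only illustrates the selection criterion (maximum contrast in the AOI) and the brightness feedback.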

Figure 6: A screen-shot of the image acquisition application. In the middle of the window is the live solar disk image from the camera, with the white box in the center indicating the AOI used for the calculation of the image parameters. These are displayed in real-time in the right part of the main window. Left: the control buttons for taking dark frames, snapping images on user request, or activating the recording of time series. Time intervals, the number of frames used for frame selection, and AOI coordinates or a fixed exposure time can be specified via pull-down menus and dialog boxes.

 
 

KPDC - Data Processing Procedures


Data Processing Pipeline

The data processing pipeline describes the work flow from the data acquisition to the final data repository in the archives, including the substantial processing steps. The methods and procedures are explained in the sections below:

At Observation Time

-- expose to constant "density": vary the exposure time to adapt for equal average intensities in the AOI in each frame
-- frame selection on pixel contrast: calculate the standard deviation of the intensities in the AOI and select the frame with the maximum std. dev. within a certain number of frames
--» A. Raw Level Data: FITS, normally not available

After Observation Processing

-- discard corrupted images: check for reasonable disk radii and intensities in all quadrants
-- cadence reduction: select the "best of hour" by the Optimum Window Method, or keep all frames during high activity periods
-- determination of the actual disk center coordinates and radius
-- shift disk to center of FoV: only by an integer number of pixels to avoid any data averaging
-- calculate properties and parameters for archive level data: e.g. angle Θ, image scale, ...
-- mask outside disk = 0: set pixels far outside the disk to zero for better data compression
--» B. Archive Level Data: FITS, optional JPEG available
-- application of a gain table (flatfield): "Burlov" method, which assumes circular symmetry on large scales, reduces the image to it, and also delivers a CLV profile
-- calculate properties and parameters for synoptic level data: e.g. P, B0, L0 and the SOHO standard FITS keywords for synoptic observations
-- contrast enhancement and derotation: unsharp masking, intensity range rescaling, rotation by P and Θ to put solar North up
--» C1. Synoptic Level Data: denominated "low contrast", FITS and JPEG available
-- normalization: divide by the CLV (quiet Sun map)
-- contrast enhancement and derotation: unsharp masking, linear intensity range rescaling, rotation by P and Θ to put solar North up
--» C2. Synoptic Level Data - normalized: denominated "high contrast", FITS and JPEG available
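Two of the archive-level steps, the shift of the disk to the field-of-view center by an integer number of pixels and the zeroing of pixels far outside the disk, can be sketched as follows. This is a hedged NumPy sketch, not the pipeline code; the margin of 20 pixels is an illustrative assumption.

```python
import numpy as np

def center_and_mask(img, x0, y0, r, margin=20):
    """Shift the disk to the FoV center by an integer number of pixels
    (no data averaging) and set pixels far outside the disk to zero for
    better data compression. Disk center (x0, y0) and radius r come from
    the limb fit described in the Positional Information section."""
    ny, nx = img.shape
    dy = int(round(ny / 2 - y0))   # integer shifts only, no interpolation
    dx = int(round(nx / 2 - x0))
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    # pixels wrapped around by np.roll lie far outside the disk for a
    # full-disk image and are removed by the mask below
    yy, xx = np.mgrid[0:ny, 0:nx]
    dist = np.hypot(xx - nx / 2, yy - ny / 2)
    shifted[dist > r + margin] = 0
    return shifted
```

Zeroing the (noisy) sky region makes the subsequent lossless compression of the FITS files considerably more effective.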

Positional Information

Determination of the solar limb is crucial for deriving geometrical information from the solar images. Properties of the disk (center coordinates and radius) are needed for the shifting of the disks towards the center of the field of view and the actual image scale. They are given in the FITS headers of archives and synoptic level data.

The solar limb is commonly defined as the inflexion point of a radial intensity profile. A stable method, which also works on images showing only a part of the disk (cf. fig. 10), is to find limb points on a set of disk profiles parallel to the CCD y-axis. A profile is smoothed, the first derivative is calculated and again smoothed. The maximum and the minimum of the derivative indicate the positions of the inflexion points of the intensity profile. A circle (x0, y0, r) representing the disk is derived by fitting this set of limb points with the least squares method.

Figure 7: Positions of the intensity profiles used for limb detection. From column 200 to column 1800, a profile is selected every ten pixel columns.

Figure 8: Limb profile of a continuum image. The dots represent the individual pixel values along the radial profile (in this plot parallel to the CCD x-axis), the solid line is a running average over 15 pixel values. The dashed line is the smoothed derivative (again running average over 15) of the intensity profile. The vertical line shows the max of the derivative and therefore the position of the solar limb.

In a second step the radial distances of the individual limb points to the circle fitted in the first approximation described above are calculated; limb points with distances above a certain threshold (e.g. 10 pix) are neglected for a new estimation of the limb circle. With this iteration we usually obtain standard errors in radius (from the least squares method) of < 1 pixel.
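The two-step limb fit can be sketched as follows. This is a sketch under stated assumptions, not the operational code: the smoothing window of 15 pixels is taken from the Figure 8 caption, the algebraic (Kåsa-type) least-squares circle fit is a standard choice assumed to be equivalent to the method used, and the outlier threshold is the example value from the text.

```python
import numpy as np

def limb_point(profile, win=15):
    """Inflexion point of one intensity profile: smooth with a running
    average, differentiate, smooth again, and take the extremum of the
    derivative. Edges are excluded to avoid boundary artifacts."""
    kernel = np.ones(win) / win
    smooth = np.convolve(profile, kernel, mode="same")
    deriv = np.convolve(np.gradient(smooth), kernel, mode="same")
    return win + int(np.argmax(np.abs(deriv[win:-win])))

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (x0, y0, r) through limb points,
    linearized as x^2 + y^2 = 2*x0*x + 2*y0*y + (r^2 - x0^2 - y0^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, np.sqrt(c + x0**2 + y0**2)

def fit_circle_robust(xs, ys, threshold=10.0):
    """Second iteration from the text: drop limb points whose radial
    distance from the first-pass circle exceeds the threshold, refit."""
    x0, y0, r = fit_circle(xs, ys)
    dist = np.abs(np.hypot(xs - x0, ys - y0) - r)
    keep = dist < threshold
    return fit_circle(xs[keep], ys[keep])
```

The rejection step is what makes the procedure stable against single bad limb points, e.g. from clouds or sunspots close to the limb.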

In order to derive heliographic coordinates of solar features on solar images one has to know the orientation of a pixel axis of the CCD with respect to celestial coordinates. The equatorial mounting of the telescope has the advantage of no image rotation during the diurnal movement of the Sun across the celestial sphere. Apart from small errors due to a non-perfect telescope setup or bending of the optical axis (caused by the changing direction of the weight load of the telescope during a day), the angle Θ between the pixel axis and e.g. the celestial E-W direction should be constant as long as the system is not disassembled for maintenance. If the tracking system of the telescope is switched off and one neglects the change of the solar declination and of the refraction during a few minutes, the track of a solar feature or of the disk center will represent the celestial E-W direction.

Figure 9: The principle of the determination of the E-W direction in the images. Inclination of the Solar rotation axis (P and B0) is reckoned from celestial North which is perpendicular to the derived E-W track.

Therefore we obtain from time to time a series of about 20 images with no tracking, having the solar disk move across the field of view. Applying an edge detection filter, fitting circles through the solar limb points and calculating the center coordinates finally yields a track of the disk centers and the angle Θ.
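The last step, deriving Θ from the drift track, reduces to the linear fit y = tan(Θ)·x + d through the disk-center coordinates shown in Figure 10. A minimal sketch, assuming the center coordinates have already been determined:

```python
import numpy as np

def camera_angle(x_centers, y_centers):
    """Orientation angle Theta (in degrees) of the CCD x-axis with respect
    to the celestial E-W direction: linear fit y = tan(Theta)*x + d
    through the drifting disk-center coordinates of a no-tracking series."""
    slope, intercept = np.polyfit(x_centers, y_centers, 1)
    return np.degrees(np.arctan(slope))
```

With about 20 center positions per series, the fit averages out the seeing-induced scatter of the individual disk centers.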

Figure 10, top: Drift of the solar disk centers of an image series taken on 2007-08-01 around 6:50 UTC. The + marks the calculated disk center coordinates of the individual images. Θ is derived from these positions by a linear fit y = tan(Θ)·x + d. Bottom: The image series covers a range from about a half disk visible at the eastern limb of the field of view until still a half disk is visible at the opposite end of the CCD field.

The standard error of this procedure was determined to be about 0.02° for Θ, and the daily variation due to the imperfections described above is on the order of 0.05°. Within one image, the standard error of the deviation of individual limb pixels from the calculated circle is less than 1 pixel; the observed variation of typically 2...3 pixels in the disk radii of about 930 pixels in image series is rather an influence of the seeing than an error in the fitting procedure. So the individual calculated disk radii have an error on the order of 0.1%, and we can estimate a reliability of better than 1° in heliographic positions in our images.

Figure 11: The derived radii of the images (again from the series taken on 2007-08-01 around 6:50 UTC) are relatively stable even when only a part of the disk is visible; the variation is mainly due to seeing effects.

Figure 12: The variation of Θ from three image series taken on 2007-08-01. The observation time (middle of each series) is indicated by the hour angle of the telescope. We assume that the variation depends on the geometrical position, i.e. on the hour angle of the parallactic telescope mounting. The dashed line is a second order polynomial fit, which should be symmetric about noon.

Photometric Information

CCDs have a basically linear relation between the produced charge (and the output voltage) and the amount of incident light, but there exists a saturation level which limits the linear range. Therefore the amount of light during exposure has to be kept below that level. The TM-4100CL allows setting the gain and the lower level of the A/D converter, so that the full dynamic range (which corresponds to pixel values 0...1023) can be mapped into the linear part of the CCD's charge vs. incident light relation. The zero level is set properly when some noise floor (dark current) is visible. Signal noise limits the dynamic resolution of the pixels (the number of useful bits). For setting and checking these properties we used a simple procedure proposed by ESO (see Deiries, S.: 1995, ESO Doc. No. VLT-INS-ESO-13670-0001).

Figure 13, left: Single dark current frame of the TM4100CL. The mean of the pixel values is dc = 4.3 counts with σ = 2.8. The read-out noise, defined as the standard deviation of the pixel differences of 2 dark current frames, is rmsnoise = 2.77, which yields a dynamic range of 51 dB or 8.5 significant bits. Right: Smoothed average of 17 dark current frames with integration times from less than 1 ms to about 10 ms. Variations in the dark current of the individual taps of the dual tap CCD array are visible; however, we noticed that a general subtraction of such a smoothed average dark current frame did not improve the images.

In the lab the camera was illuminated with a stable and uniform light source (e.g. an LED with diffusor) and a set of paired frames (with identical exposure times) was taken over a wide range of exposure times. Sums and differences of the individual pixel values of the two frames of each pair were used to calculate statistical properties (mean, standard deviation - for details see the ESO paper and the image captions). These statistics were made on sub-fields (e.g. of size 256x256 pixels) to avoid averaging over potentially non-uniform areas. The means of the pixel values as a function of the amount of incident light (i.e. of the exposure time) showed a non-linearity of 0.2%. The standard deviations of the differences of the equally exposed frame pairs give the noise in the data. The dark current proved to be fairly independent of the exposure time and is noise dominated (see figs. 13 and 14), probably produced by the electronics (read-out noise). The ratio between full scale and noise finally yields the useful dynamic range and can be expressed as an effective number of bits.
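The core of this photon-transfer style evaluation can be sketched as follows. This is a hedged sketch of the statistics, not the exact ESO procedure: the factor 1/sqrt(2) in the noise estimate (two frames contribute to each difference) and the expression of the conversion factor as signal/variance for shot-noise limited data are standard textbook relations and assumed compatible with the cited document.

```python
import numpy as np

def pair_statistics(frame_a, frame_b):
    """Statistics of a pair of identically exposed frames: the mean signal
    from both frames, the noise from the standard deviation of the pixel
    differences, divided by sqrt(2) since two frames contribute."""
    mean_signal = 0.5 * (frame_a.mean() + frame_b.mean())
    noise = (frame_a - frame_b).std() / np.sqrt(2)
    return mean_signal, noise

def conversion_factor(mean_signal, noise, read_noise=0.0):
    """Photon-transfer estimate of the conversion factor (here expressed
    as electrons per data unit): for shot-noise limited data the variance
    in electrons equals the signal in electrons."""
    return mean_signal / (noise**2 - read_noise**2)

def dynamic_range_bits(full_scale, rms_noise):
    """Useful dynamic range in dB and effective bits, as quoted in the
    Figure 13 caption (full scale 1023, noise 2.77 -> about 51 dB)."""
    db = 20 * np.log10(full_scale / rms_noise)
    return db, db / (20 * np.log10(2))
```

Applying these statistics per 256x256 sub-field, as described above, avoids averaging over non-uniform illumination.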

Figure 14: Parameters for the TM4100 camera with the finally selected gain and lower level of the A/D converter, derived from the statistical evaluation of the pairs of uniformly illuminated frames as described in the text. The plots show the numbers for a 256x256 pixels area starting from pixel (1024,768); values for other sub-fields do not differ substantially, even for fields with double linear dimensions. Top left: the variances of the individual pixel differences of the frame pairs, which can be used for an estimation of the conversion factor (data units per absorbed electron/photon). Top right: linearity of the output signal as a function of the incident light, which is proportional to the exposure/integration time if we assume a stable light source. Bottom left: dark current (mean and standard deviation) of the individual non-illuminated frames with varying integration times. Bottom right: the standard deviations of the individual pixel differences of frame pairs with identical integration times. Both plots show that the values are independent of the integration time and of the same order; therefore the dark current is rather a noise than a bias. As noise is unpredictable, it is an uncertainty in the signal and cannot be subtracted. The methods are explained in detail in the mentioned ESO paper.

A flatfield frame can be considered as a multiplicative map of the non-uniform gain of the individual pixels. It may originate from a real non-uniform gain or sensitivity of the pixels, or may be a result of non-uniform illumination of the telescope due to, e.g., tilted lenses. This non-uniformity was investigated and calculated with two methods which deliver such maps as well as an estimate of the center-to-limb variation/darkening (CLV):

  • Method A) - denominated the Kuhn-Lin method - uses a set of 8-10 consecutively taken solar image frames which are assumed to show the same object but at shifted positions (see Kuhn, J. R., Lin, H., & Loranz, D.: 1991, Publ. Astr. Soc. Pacific, 103, 1097); any variation of the solar image within the set must originate from non-uniformity.
  • Method B) - denominated the Burlov method - assumes circular symmetry in a (spotless) solar disk image. The disk is split into concentric rings. The intensity function of each ring, and therefore potential asymmetries, are mapped by fitting polynomials. These polynomials represent the non-uniformity and can be used to calculate a flatfield map and correct the images. Real asymmetries in the images, e.g. caused by active regions, are of smaller scale and should not be corrected; this is handled by limiting the order of the polynomials. This method was originally developed by K. Burlov-Vasiljev and P. N. Brandt at KIS Freiburg in 1996 for the processing of RISE/PSPT images but never published.
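Method B) can be sketched as follows. Since the original Burlov-Vasiljev/Brandt code was never published, this is only an illustrative reconstruction from the description above: the ring count, the polynomial order and the use of plain polynomials in azimuth are assumptions, not the original parameters.

```python
import numpy as np

def burlov_flatfield(img, x0, y0, r, n_rings=50, order=4):
    """Burlov-style flatfield sketch: split the disk into concentric rings
    around (x0, y0), fit a low-order polynomial to each ring's intensity
    as a function of azimuth, and take the fitted large-scale asymmetry
    (normalized by the ring mean) as the gain map. Limiting the order
    keeps real small-scale features (e.g. active regions) uncorrected.
    The ring means form the CLV profile as a by-product."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    rho = np.hypot(xx - x0, yy - y0)
    phi = np.arctan2(yy - y0, xx - x0)
    gain = np.ones_like(img, dtype=float)
    clv = np.zeros(n_rings)          # center-to-limb variation estimate
    edges = np.linspace(0, r, n_rings + 1)
    for i in range(n_rings):
        ring = (rho >= edges[i]) & (rho < edges[i + 1])
        if ring.sum() < order + 1:   # skip rings with too few pixels
            continue
        coeffs = np.polyfit(phi[ring], img[ring], order)
        fitted = np.polyval(coeffs, phi[ring])
        clv[i] = img[ring].mean()
        gain[ring] = fitted / clv[i]
    return gain, clv
```

Dividing an image by the returned gain map removes the large-scale asymmetry; dividing additionally by the CLV profile yields the normalized ("high contrast") synoptic level data of the pipeline above.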

Figure 15: Comparison of the flatfield methods described in the text, left method A) and right method B); the bottom row is a TV representation where higher values are brighter. It is clearly visible that in method A) the noise, which by nature varies from frame to frame, produces more small-scale non-uniformity than any visible large scale variation. Method B) uses only a single frame and no differences, assumes large scale circular symmetry, and stops compensating the variation at a certain scale.

Both methods showed that the effect of noise dominates over the non-uniformity, which is in the single percent range (Fig. 15). Fig. 16 shows for comparison two very high contrast solar images of a spotless Sun, where the limb darkening is compensated by the CLV function which is a by-product of both methods. It is clearly visible that both methods fail at the extreme limb regions; method B) seems to be slightly better (see the left part of the disk), but this has to be further checked on images with sunspots.

Figure 16: Very high contrast images of the spotless Sun from 2007-08-01 to check the quality of the flat fielding methods. Left processed with method A) and right with method B). Limb darkening is compensated by applying the CLV function derived from the flat-fielding procedures. Between the black and white level there is only an intensity variation of 1%.

In practice, method B) has the advantage that it can be applied as post-processing to any archived image without any prerequisite at observation time, but it needs some computing power for the calculation of the rings and the fitting of the polynomials. Method A), however, needs the recording of a set of displaced images under clear sky from time to time, followed by the calculation of the flatfield frame, but then allows applying this frame to a whole series of subsequently taken images.