'Twas a time, possibly before some of you reading this were born, when astroimaging was an ordeal. Imagine a world of manual guiding. You'd spend hours staring at a star centered in an illuminated reticle until your eye teared up or you passed out into the eyepiece from fatigue.
The equipment was also heavy and a pain to transport. If you were using a Newtonian reflector, you would be standing on a ladder for hours watching the guide star. Often, after about 30 minutes, the eyepiece's position would place you uncomfortably between the rungs of the ladder, and you would be stuck in that death crouch, clinging to the ladder in the freezing cold, trying to keep the star centered on the crosshairs.
As if this wasn't bad enough, you also had no idea whether you were making a mistake until you developed the film. More than half the results would show trailed stars or be completely out of focus. Such was astroimaging when I started about 35 years ago. The average “lifespan” of an imager back then was three to five years. After that, most people couldn't take it anymore.
In the years that followed, the Santa Barbara Instrument Group produced the first autoguider that actually worked: the SBIG ST-4. It immediately changed everything. Imagers could digitize film with a scanner and then process the image on a computer using an early (and now considered crude) version of Photoshop. After that, the first CCD (charge-coupled device) cameras became available. They were tiny things, some smaller than a postage stamp.
At first, CCD images were a curiosity, certainly no match for images on hypered (gas-sensitized) large-format film exposed through superb telescopes. But little by little, CCD technology improved. What finally elevated it above film was its quantum efficiency.
With film, you were lucky if it recorded 3 to 5 percent of the light that fell on it. With a CCD, that number ballooned to over 50 percent. Suddenly, images were being made that had not been possible just a year before, and amateur astronomers were recording details in familiar objects for the first time.
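To put rough numbers on that leap, take 4 percent for film and 50 percent for a CCD as illustrative values from the ranges above (not measurements of any particular emulsion or chip):

$$\frac{\mathrm{QE}_{\mathrm{CCD}}}{\mathrm{QE}_{\mathrm{film}}} \approx \frac{0.50}{0.04} \approx 12.$$

All else being equal, a CCD could gather in about 10 minutes the signal that film needed roughly two hours of exposure to accumulate.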
Soon after the CCD appeared, another new detector arrived: the CMOS, short for complementary metal-oxide semiconductor. Unlike a CCD, which shifts its data row by row to be read out at the edge of the chip, a CMOS chip reads every pixel in place, using transistors built around each photosensitive pixel. Initially, this created two problems. First, the transistors took up real estate on each pixel that could otherwise collect light; second, beyond a certain exposure length, amplifier glow fogged the image.
But CMOS chips were easier to make and thus less expensive. They also required considerably less power to run. And because their readout is quicker, CMOS chips deliver significantly higher frame rates, letting users capture images in rapid succession. Manufacturers kept improving CMOS chips until they had lower noise and higher resolution than their CCD counterparts.
With the advent of commercially available back-illuminated chips, sensitivity and quantum efficiency skyrocketed. In such a chip, all the electronics sit behind the pixels, allowing the entire surface facing the sky to be filled with photoreceptors. This means that virtually all incoming photons strike the pixels.
Back-illuminated chips also increased the full-well capacity, which is how many photons a pixel can record before the photoreceptor site “fills up” and can't record any more data. Much of this progress came from the Sony Corporation, and most of today's high-end CMOS astronomy cameras contain Sony chips.
In fact, many manufacturers have ceased to make CCDs.