
Shack-Hartmann sensor resolution - how much is good?

If you are new to adaptive optics (AO) like me, selecting the right hardware can be daunting. Start with the wavefront sensor: they range widely in price, resolution, and many other options that are not obvious. By practical trial and error I learned something about resolution that wasn't obvious to me a year ago.

The Shack-Hartmann wavefront sensor (WFS) is essentially a camera with a lenslet array instead of an objective. There are sensors with 15x15 lenslets, 30x30, and higher. Naively, you might think "the more the better" - we are digital-age kids used to getting high-res for cheap.

However, there is a catch. A high-res sensor with, say, 30x30 lenslets divides your photon count by 900 per spot. Roughly speaking, when you image a fluorescent bead (or another point source) with a camera behind a "normal" lens (not a lenslet array) and your peak intensity is 2000 counts, you get a very nice, high-SNR bead image. However, if you switch to the Fourier (pupil) plane and image the wavefront through 900 lenslets, each lenslet creates a bead image with peak intensity 2000/900 ~ 2.2 counts, which is probably around your sensor noise level (*). So, instead of a nicely resolved wavefront reconstruction, you may get noisy crap out of your wavefront sensor.
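
To make the photon bookkeeping concrete, here is a minimal back-of-the-envelope sketch in Python. The 2000-count peak and the 30x30 array are the numbers from the example above; the 2-count noise floor is an assumed value, purely for illustration.

```python
# Naive photon budget: an N x N lenslet array splits the pupil light
# among N^2 spots, so the peak counts per spot drop by a factor of N^2.
peak_full_aperture = 2000   # bead peak with a normal lens, counts (from the text)
n_side = 30                 # lenslets per side of the array
noise_floor = 2             # assumed sensor noise level, counts (illustrative)

peak_per_spot = peak_full_aperture / n_side**2
print(f"Peak per spot: {peak_per_spot:.1f} counts")              # ~2.2
print(f"Clears the noise floor: {peak_per_spot > noise_floor}")
```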

Of course, you can crank up the laser power - but there is a limit to how many photons you can squeeze from a fluorescent source (even a very bright bead), how fast it bleaches, and how much photo-damage you do to the rest of your sample. The good news is that your fluorescent beads can be relatively large (a few microns, for example), which boosts their photon output.
However, the next time I see a product advertised as a "wavefront sensor with high resolution and speed", I will be very careful about it. Both features require VERY bright point sources, and you just might not have enough photons.

So, the right answer (I think) is: lower sensor resolution is better. By reducing your WFS lenslet resolution 2x, you increase your illumination intensity per spot 4x.
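
A quick numeric check of this inverse-square rule, reusing the illustrative 2000-count peak from above:

```python
# Per-spot peak intensity scales as 1/N^2, where N is the number of
# lenslets per side: halving the WFS resolution quadruples the light per spot.
peak_full_aperture = 2000
for n_side in (30, 15):
    print(f"{n_side}x{n_side}: {peak_full_aperture / n_side**2:.1f} counts per spot")
# 30x30: 2.2 counts per spot
# 15x15: 8.9 counts per spot  (4x brighter)
```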

Another thing to consider is the resolution of your AO corrector. If it is a deformable mirror with 30-100 actuators, a sensor with 10x10 lenslets will probably suffice. Measuring the wavefront at a resolution much higher than you can control it is unlikely to give you an advantage in adaptive control. But I may be wrong on this, being a judgemental noob in the field and jumping to conclusions.

(*) Update. A more experienced colleague rightly pointed out that the above calculation of spot intensity at the WFS is very inaccurate. At least two corrections are needed:
1. The WFS lenslet focal length is typically much shorter (f ~ 5 mm) than that of a tube lens for camera imaging (200 mm). This makes the system magnification and PSF proportionally smaller, and hence the illumination per sensor pixel higher, by a factor of 200/5 = 40x. So the peak intensity of a point source at the WFS sensor should look much brighter than 2.2 counts: 2.2 x 40 = 88 counts, which is quite good.
2. A lenslet has a smaller NA than the tube lens (~3x smaller), which makes the PSF of a point source blurrier at the WFS, and hence the illumination per pixel proportionally lower. So the corrected peak intensity is now 2.2 x 40 / 3 ~ 29 counts.
Again, these calculations are very rough (see the sketch below) and do not diminish the main point: the light intensity that reaches the WFS sensor per lenslet scales inversely with the square of the sensor resolution, and the photon budget matters here, even for samples as bright as fluorescent beads.
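
For completeness, here is the corrected rough estimate as a sketch, with both correction factors folded in. All numbers are the illustrative ones from this post; real systems will differ (pixel size, lenslet fill factor, and quantum efficiency are all ignored here).

```python
# Rough per-spot peak intensity at the WFS, with the two corrections above.
# Illustrative numbers only, following the text.
peak_full_aperture = 2000   # bead peak through a normal tube lens, counts
n_side = 30                 # lenslets per side
f_tube_mm = 200             # tube lens focal length, mm
f_lenslet_mm = 5            # lenslet focal length, mm
na_penalty = 3              # lenslet NA ~3x smaller -> proportionally blurrier spot

naive = peak_full_aperture / n_side**2       # ~2.2 counts
gain = f_tube_mm / f_lenslet_mm              # 40x from the smaller magnification
corrected = naive * gain / na_penalty        # ~29-30 counts
print(f"naive: {naive:.1f} counts, corrected: {corrected:.0f} counts")
```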


