How does Depth of Field affect Telecentric Lenses?

It is a common misconception that telecentric lenses inherently have a greater depth of field than conventional lenses. Depth of field is still ultimately governed by the wavelength and f/# of the lens; however, telecentric lenses can have a more usable depth of field than conventional lenses because they blur symmetrically on either side of best focus. As the part under inspection moves toward or away from the lens, it follows the angular field of view (or the chief ray) associated with it. In a non-telecentric lens, when an object is shifted in and out of focus, the part blurs asymmetrically due to parallax and the magnification change associated with its angular field of view. Telecentric Lenses, however, blur symmetrically, since there is no angular component to the field of view. In practice, this means that features such as edges retain their center-of-mass location; an accurate measurement can still be made when the object is beyond best focus, as long as the contrast remains high enough for the algorithm used by the machine vision system to work properly.

While it may seem counterintuitive, blur can be used advantageously in certain applications with Telecentric Lenses. For example, if a machine vision system needs to find the center location of a pin, as shown in Figure 3a, the transition from white to black is quite sharp when the lens is in focus. In Figure 3b, the same pin is shown slightly defocused.


Figure 3: The Same Pin Imaged both In and Out of Focus. Note that the transition from white to black covers many more pixels when the lens is slightly out of focus (b). This can be advantageous!

In a plot of the image grey levels along a line profile taken across the edge of the part, as in Figure 4, the slope of the line is much shallower for the slightly defocused image, since the pin edge is spread over more pixels. Due to the symmetric blurring of the Telecentric Lens, this blur is still usable: the centroid has not moved, and the amount of sub-pixel interpolation needed is decreased. This reduces sensitivity to grey level fluctuations caused by sensor noise and allows the pin center location to be found more reliably and with higher repeatability.


Figure 4: Plot showing the difference in Slope between a Focused and Defocused Edge. The defocused edge takes up more pixels, and locating it becomes easier without relying on sub-pixel interpolation.
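To make the centroid claim concrete, here is a minimal Python sketch (our own illustration, using synthetic tanh edge profiles rather than real camera data) showing that the sub-pixel edge location, taken as the centroid of the profile's gradient, stays put as the edge blurs symmetrically:

```python
import numpy as np

def edge_centroid(profile):
    """Sub-pixel edge location: centroid of the gradient of a line profile."""
    grad = np.abs(np.diff(profile.astype(float)))
    x = np.arange(len(grad)) + 0.5   # each gradient sample sits between two pixels
    return np.sum(x * grad) / np.sum(grad)

x = np.arange(100)
in_focus  = 0.5 * (1 + np.tanh((x - 50.2) / 0.8))  # steep white-to-black transition
defocused = 0.5 * (1 + np.tanh((x - 50.2) / 6.0))  # same edge, spread over more pixels

print(edge_centroid(in_focus), edge_centroid(defocused))  # both ~50.2
```

Both profiles report the same edge position even though the defocused one spans many more pixels, which is exactly why the symmetric blur of a telecentric lens is still measurable.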


Using Structured Illumination

Illumination is a critical component of any machine vision system, and can often make the difference between a good imaging system and a great one. Not only do the illumination position and wavelength need to be considered individually for each application, but certain systems also require structured illumination to enhance system performance.

Structured illumination takes advantage of defined patterns of light to capture the geometric shape and intensity of objects. A capable 3D system can be built by illuminating objects with different patterns, such as dots, lines, or grids, while reducing cost, part count, and complexity.

While a well-designed system improves measurement accuracy, it is important to understand that structured illumination is not one-size-fits-all, and particular patterns should be used to achieve particular measurements. For example, a dot grid pattern may be good enough to inspect a few spots on a part, but a line or multiple-line pattern is needed to measure a part's 3D contour.

The table below shows some common structured illumination patterns and their typical applications.

Common Structured Illumination Patterns

Method of Determination | Purpose
Triangulation based | Determining the dimensions of most objects while the object is scanned
Shadow and triangulation based | Determining the dimensions of refractive objects while the object is scanned
Distortion based | Determining the depth information at multiple discrete points in a single exposure
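As a rough sketch of the triangulation-based methods in the table, the following Python snippet (the function name and all numbers are illustrative assumptions, not values from the article) converts the sideways shift of a projected laser line in the image into a surface step height:

```python
import math

def height_from_line_shift(pixel_shift, pixel_size_mm, magnification, laser_angle_deg):
    """Laser-line triangulation: a surface step of height h shifts the projected
    line sideways by h * tan(laser angle) on the object; the camera sees that
    shift scaled by the lens magnification."""
    shift_on_object_mm = pixel_shift * pixel_size_mm / magnification
    return shift_on_object_mm / math.tan(math.radians(laser_angle_deg))

# Example: a 12-pixel shift, 5.5 um pixels, 0.5x magnification, laser at 30 degrees
print(height_from_line_shift(12, 0.0055, 0.5, 30))  # ~0.23 mm step height
```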


The Advantages of Telecentricity

The ability to perform fast, repeatable, high-accuracy measurements is critical to maximizing the performance of many automated vision systems. For these kinds of systems, a telecentric lens allows the highest possible accuracy to be obtained. Here we will discuss the unique performance characteristics of Telecentric Lenses and how telecentricity can impact a system's performance.

Zero Angular Field of View and the Elimination of Parallax Error

Conventional lenses have an angular field of view, such that as the distance between the object and the lens increases, the magnification decreases. This is exactly how human vision behaves, and it contributes to our depth perception. This angular field of view results in parallax, also known as perspective error, which reduces accuracy, since the measurement produced by the vision system will change if the object is displaced (even when remaining within the depth of field) due to the change in magnification. Telecentric Lenses eliminate the parallax error characteristic of standard lenses by having a constant, non-angular field of view; at any distance from the lens, a Telecentric Lens will always have the same field of view. See the figure below to understand the difference between a non-telecentric and a telecentric field of view.


Figure 1: Field of View comparison of a Conventional and Telecentric Lens. Note the conventional lens’s angular field of view and the Telecentric Lens’s zero angle field of view.

A Telecentric Lens's constant field of view has both benefits and limitations for gauging applications. The primary advantage of a Telecentric Lens is that its magnification does not change with depth. The figure below shows two identical objects at different working distances, both imaged by a Fixed Focal Length (non-telecentric) Lens (center) and a Telecentric Lens (right). Note that in the image taken with the Telecentric Lens, it is almost impossible to tell which object is in front of the other, while with the Fixed Focal Length Lens, it is obvious that the object that appears smaller is located farther from the lens.


Figure 2: The Angular Field of View of the Fixed Focal Length Lens translates to Parallax Error in the Image and makes the two Cubes appear to be of different sizes.

While the above figure is extreme in terms of the distance shift, it illustrates the importance of minimizing parallax error. Many automated inspection tasks image objects that pass through the field of view of an imaging system, and the positioning of parts is rarely perfectly repeatable. If the working distance is not identical for each object that the lens images, the measurement of each object will differ due to the magnification shift. A machine vision system that yields different results because of a magnification calibration error (which is unavoidable with a Fixed Focal Length Lens) is a fragile solution and cannot be used when high accuracy is required. Telecentric Lenses remove the concern about measurement errors that would otherwise occur due to factors such as a vibrating conveyor or imprecise part positioning.


Understanding Resolution

Understanding a manufacturer’s specifications for a lens can greatly simplify the research and purchasing processes. In order to know how a lens works, it is critical to understand resolution, magnification, contrast, f/#, and how to read common performance curves including Modulation Transfer Function (MTF), Depth of Field (DOF), Relative Illumination, and distortion. Resolution is a measurement of an imaging system’s ability to reproduce object detail, and can be influenced by factors such as the type of lighting used, the pixel size of the sensor, or the capabilities of the optics. The smaller the object detail, the higher the required resolution.

Dividing the size of the object one wishes to observe by the number of horizontal or vertical pixels on the sensor indicates how much space on the object each pixel covers, and can be used to estimate resolution. However, this does not truly determine whether the information on one pixel is distinguishable from the information on any other pixel.
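As a quick illustration of that estimate, here is a one-liner in Python (the 100 mm field of view and 2448-pixel sensor width are made-up example values):

```python
def object_pixel_coverage_mm(field_of_view_mm, pixels_across):
    """Rough resolution estimate: how much of the object a single pixel covers."""
    return field_of_view_mm / pixels_across

# Example: a 100 mm wide field of view imaged onto 2448 horizontal pixels
print(object_pixel_coverage_mm(100, 2448))  # ~0.041 mm of object per pixel
```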

As a starting point, it is important to understand what can actually limit system resolution. Consider the example in Figure 1: a pair of squares on a white background. If the squares are imaged onto neighboring pixels on the camera sensor, they will appear to be one larger rectangle in the image (1a) rather than two separate squares (1b). In order to distinguish the squares, a certain amount of space is needed between them, at least one pixel. This minimum distance is the limiting resolution of the system. The absolute limitation is defined by the size of the pixels on the sensor as well as the number of pixels on the sensor.


Figure 1: Resolving Two Squares. If the space between the squares is too small (a), the camera sensor will be unable to resolve them as separate objects.

The Line Pair and Sensor Limitations

The relationship between alternating black and white squares is often described as a line pair. Typically, resolution is defined by this frequency, measured in line pairs per millimeter (lp/mm). A lens's resolution is unfortunately not an absolute number. At a given resolution, the ability to see the two squares as separate entities depends on the grey scale level. The bigger the separation in grey scale between the squares and the space between them (Figure 1b), the more robust the ability to resolve them. This grey scale separation is known as contrast (at a specified spatial frequency, given in lp/mm). For this reason, specifying resolution in terms of lp/mm is extremely useful when comparing lenses and when determining the best choice for a given sensor and application. Contrast is explained in more detail in this application note.

The sensor is where the system resolution calculation begins. By starting with the sensor, it is easier to determine what lens performance is required to match the sensor or other application requirements. The highest frequency that can be resolved by a sensor, the Nyquist frequency, is effectively two pixels, or one line pair. Table 1 shows the Nyquist limit associated with the pixel sizes found on some commonly used sensors. The resolution of the sensor, also referred to as the image space resolution of the system, can be calculated by multiplying the pixel size in μm by 2 (to create a pair) and dividing that into 1000 to convert to mm:

Image Space Resolution (lp/mm) = 1000 μm/mm ÷ (2 × pixel size in μm)

Sensors with larger pixels will have lower limiting resolutions. Sensors with smaller pixels will have higher limiting resolutions.
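The formula above is trivial to put in code; this Python sketch uses the pixel sizes from Table 1 as a sanity check:

```python
def nyquist_limit_lp_mm(pixel_size_um):
    """Limiting sensor resolution: one line pair spans two pixels."""
    return 1000.0 / (2.0 * pixel_size_um)

for p in (1.67, 2.2, 3.45, 4.54, 5.5):  # pixel sizes from Table 1
    print(p, round(nyquist_limit_lp_mm(p), 1))
# prints 299.4, 227.3, 144.9, 110.1, 90.9 lp/mm, matching Table 1
```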

With this information, the limiting resolution on the object to be viewed can be calculated. In order to do so, the relationships between the sensor size, the field of view, and the number of pixels on the sensor need to be understood.

Sensor size refers to the size of a camera sensor’s active area, typically specified by the sensor format size. However, the exact sensor proportions will vary depending on the aspect ratio, and the nominal sensor formats should be used only as a guideline, especially for telecentric lenses and high magnification objectives. The sensor size can be directly calculated from the pixel size and the number of active pixels on the sensor.

Horizontal Sensor Dimension (mm) = pixel size (μm) × number of active horizontal pixels ÷ 1000

Vertical Sensor Dimension (mm) = pixel size (μm) × number of active vertical pixels ÷ 1000
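In code, the same calculation might look like this (Python; the 2448 × 2048 array with 3.45 μm pixels is our own example of a typical sensor, not a value taken from the article):

```python
def sensor_dimensions_mm(pixel_size_um, h_pixels, v_pixels):
    """Active sensor area from pixel size and active pixel counts."""
    return (pixel_size_um * h_pixels / 1000.0,
            pixel_size_um * v_pixels / 1000.0)

# Example: 3.45 um pixels on a 2448 x 2048 array
print(sensor_dimensions_mm(3.45, 2448, 2048))  # ~(8.45, 7.07) mm
```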

Pixel Size (μm) | Associated Nyquist Limit (lp/mm)
1.67 | 299.4
2.2 | 227.3
3.45 | 144.9
4.54 | 110.1
5.5 | 90.9

Table 1: As pixel sizes get smaller, the associated Nyquist limit in lp/mm rises in inverse proportion.


Choose the Correct Illumination

Often, a customer struggles with contrast and resolution problems in an imaging system while underestimating the power of proper illumination. In fact, the desired image quality can typically be met by improving a system's illumination rather than by investing in higher resolution detectors, imaging lenses, and software. System integrators should remember that the light intensity in the final image is directly dependent upon component selection.

Correct illumination is critical to an imaging system, and improper illumination can cause a variety of image problems. Blooming or hot spots, for example, can hide important image information, as can shadowing. Shadowing can also cause false edge calculations when measuring, resulting in inaccurate measurements. Poor illumination can also result in a low signal-to-noise ratio. Non-uniform lighting, in particular, can harm signal-to-noise ratios and make tasks such as thresholding more difficult. These are only a few of the reasons why correct illumination for your application is so important.

The pitfalls of improper illumination are clear, but how are they avoided? To ensure optimal illumination when integrating a system, it is important to recognize the role that choosing the right components plays. Every component affects the amount of light incident on the sensor and, therefore, the system's image quality. The imaging lens's aperture (f/#) impacts the amount of light incident on the camera; illumination should be increased as the lens aperture is closed (i.e., higher f/#). High power lenses usually require more illumination, as the smaller areas viewed reflect less light back into the lens. The camera's minimum sensitivity is also important in determining the minimum amount of light required in the system. In addition, camera settings such as gain and shutter speed affect the sensor's effective sensitivity. Fiber optic illumination usually involves an illuminator and a light guide, each of which should be integrated to optimize lighting at the object.
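To see how quickly stopping down starves the sensor of light, here is a small Python sketch (the f/2.8 reference point is an arbitrary choice of ours) based on the rule of thumb that image-plane irradiance scales roughly as 1/(f/#)²:

```python
def relative_illumination(f_number, reference_f_number=2.8):
    """Image-plane irradiance scales roughly as 1/(f/#)^2, so every full stop
    closed (f/# times sqrt(2)) halves the light reaching the sensor."""
    return (reference_f_number / f_number) ** 2

for f in (2.8, 4.0, 5.6, 8.0, 11.0):
    print(f, round(relative_illumination(f), 2))
# 1.0, 0.49, 0.25, 0.12, 0.06 -- closing the aperture demands brighter lighting
```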

Table 1: Key Photometric Units
1 footcandle = 1 lumen/ft²
1 footcandle = 10.764 meter candles
1 footcandle = 10.764 lux
1 candle = 1 lumen/steradian
1 candle = 3.142 × 10⁻⁴ Lambert
1 Lambert = 2.054 candle/in²
1 lux = 1 meter candle
1 lux = 0.0929 footcandle
1 meter candle = 1 lumen/m²

The light intensity for our illumination products is typically specified in terms of footcandles (English unit). Lux, the SI unit equivalent, can be related to footcandles as follows: 1 lux = 0.0929 footcandle.
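For convenience, the footcandle/lux conversions from Table 1 can be wrapped in a pair of helper functions (a minimal Python sketch):

```python
LUX_PER_FOOTCANDLE = 10.764  # from Table 1: 1 footcandle = 10.764 lux

def footcandles_to_lux(footcandles):
    return footcandles * LUX_PER_FOOTCANDLE

def lux_to_footcandles(lux):
    return lux / LUX_PER_FOOTCANDLE  # equivalently, lux * 0.0929

print(footcandles_to_lux(100.0))  # 1076.4 lux
print(lux_to_footcandles(500.0))  # ~46.4 footcandles
```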

Table 2: Illumination Comparison
Application Requirement | Object Under Inspection | Suggested Type of Illumination
Reduction of specularity | Shiny object | Diffuse front, diffuse axial, polarizing
Even illumination of object | Any type of object | Diffuse front, diffuse axial, ring light
Highlight surface defects or topology | Nearly flat (2-D) object | Single directional, structured light
Highlight texture of object with shadows | Any type of object | Directional, structured light
Reduce shadows | Object with protrusions, 3-D object | Diffuse front, diffuse axial, ring light
Highlight defects within object | Transparent object | Darkfield
Silhouetting object | Any type of object | Backlighting
3-D shape profiling of object | Object with protrusions, 3-D object | Structured light

Types of Illumination:

Since proper illumination is often the determining factor between a system’s success and failure, many specific products and techniques have been developed to overcome the most common lighting obstacles. The target used throughout this section was developed to demonstrate the strengths and weaknesses of these various lighting schemes for a variety of object features. The grooves, colors, surface deformations, and specular areas on the target represent some of the common trouble areas that may demand special attention in actual applications.


Directional Illumination – Point source illumination from single or multiple sources. Lenses can be used to focus or spread out illumination.

Pros

Bright, flexible, and can be used in various applications. Easily fits into different packaging.

Cons

Shadowing and glare.

Useful Products

Fiber optic light guides, focusing assemblies, LED spot lights, and incandescent light.

Application

Inspection and measurement of matte and flat objects.


Glancing Illumination – Point source illumination similar to directional illumination, except at a sharp angle of incidence.

Pros

Shows surface structure and enhances object topography.

Cons

Hot spots and extreme shadowing.

Useful Products

Fiber optic light guides, focusing assemblies, LED spot lights, incandescent lights, and line light guides.
Application

Identifying defects in an object with depth and examining finish of opaque objects.


Diffuse Illumination – Diffuse, even light from an extended source.

Pros

Reduces glare and provides even illumination.
Cons

Large and difficult to fit in confined spaces.

Useful Products

Fluorescent linear lights.
Application

Best for imaging large, shiny objects with large working distances.


Ring Light – Coaxial illumination that mounts directly on a lens.

Pros

Mounts directly to lens and reduces shadowing. Uniform illumination when used at proper distances.
Cons

Circular glare pattern from reflective surfaces. Works only at relatively short working distances.
Useful Products

Fiber optic ring light guides, fluorescent ring lights, and LED ring lights.

Application

Wide variety of inspection and measurement systems with matte objects.


Diffuse Axial Illumination – Diffuse light in-line with the optics. The lens looks through a beamsplitter that reflects light onto the object, so the illumination is coaxial with the imaging axis.

Pros

Very even and diffuse; greatly reduces shadowing; very little glare.

Cons

Large and difficult to mount; limited working distance; low throughput such that multiple fiber optic sources may be needed to provide sufficient illumination.

Useful Products

Fiber optic diffuse axial attachment. Single or multiple fiber optic illuminators. Single, dual, or quad fiber bundles depending on size of attachment and number of illuminators used. LED diffuse axial illuminator.

Application

Measurements and inspection of shiny objects.


Structured Light (Line Generators) – Patterns that are projected onto the object. Typically laser projected lines, spots, grids, or circles.

Pros

Enhances surface features by providing intense illumination over a small area. Can be used to get depth information from object.

Cons

May cause blooming and is absorbed by some colors.

Useful Products

Lasers with line generating or diffractive pattern generating optics.

Application

Inspection of three-dimensional objects for missing features. Topography measurements.


Polarized Light – A type of directional illumination that makes use of polarized light to remove specularities and hot spots.

Pros

Provides even illumination over the entire surface of the object under polarization. Reduces glare to make surface features discernible.

Cons

Overall intensity of light is reduced after polarization filter is placed in front of light source and/or imaging lens.

Useful Products

Polarization filters and polarizer/analyzer adapters.

Application

Measurements and inspection of shiny objects.


Darkfield – Light enters a transparent or translucent object through the edges perpendicular to the lens.

Pros

High contrast of internal and surface details. Enhances scratches, cracks, and bubbles in clear objects.

Cons

Poor edge contrast. Not useful for opaque objects.

Useful Products

Fiber optic darkfield attachment, line light guides, and laser line generators.

Application

Glass and plastic inspection.


Brightfield/Backlight – Object is lit from behind. Used to silhouette opaque objects or for imaging through transparent objects.

Pros

High contrast for edge detection.

Cons

Eliminates surface detail.

Useful Products

Fiber optic backlights and LED backlights.

Application

Targets and test patterns, edge detection, measurement of opaque objects and sorting of translucent colored objects.


Filtering Provides Various Levels of Contrast

Examples illustrate darkfield and backlight illumination with assorted color filters. Note: Images taken with 10X Close Focus Zoom Lens #54-363: Field of View = 30mm, Working Distance = 200mm.

Darkfield Only: Defects appear white

Darkfield with Blue Filter: Defects appear blue

Darkfield and Backlight: No filter used, but edge contrast improves

Darkfield without Filter and Backlight with Yellow Filter: Enhances overall contrast; defects appear white in contrast to the rest of the field

Image Enhancement using Polarizers

A polarizer is useful for eliminating specular reflections (glare) and bringing out surface defects in an image. A polarizer can be mounted either on the light source, on the video lens, or on both depending upon the object under inspection. When two polarizers are used, one on the illumination source and one on the video lens, their polarization axes must be oriented perpendicular to each other. The following are polarization solutions to glare problems for several material types and circumstances.
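The physics behind the crossed-polarizer trick is Malus's law; the sketch below (Python, with idealized lossless polarizers assumed) shows why glare that stays polarized is extinguished while depolarized diffuse light survives:

```python
import numpy as np

def transmitted_fraction(angle_deg):
    """Malus's law: an ideal analyzer at angle theta to the light's polarization
    axis transmits cos^2(theta) of the intensity."""
    return np.cos(np.radians(angle_deg)) ** 2

# Specular glare keeps the source polarization, so a crossed analyzer blocks it;
# diffusely scattered light is depolarized and loses only about half its intensity.
print(transmitted_fraction(0))    # 1.0 -- parallel axes, glare passes
print(transmitted_fraction(90))   # ~0.0 -- crossed axes, glare extinguished
```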

Problem 1:

The object is non-metallic and illumination strikes it at a sharp angle.

Solution:

A polarizer on the lens is usually sufficient for blocking glare. (Rotate the polarizer until glare is at a minimum.) Add a polarizer in front of the light source if glare is still present.

Without Polarizers

Using Polarizers

Problem 2:

The object has a metallic or shiny surface.

Solution:

Mounting a polarizer on the light source as well as on the lens is recommended for enhancing contrast and bringing out surface details. The polarized light incident on the shiny surface will remain polarized when it’s reflected. Surface defects in the metal will alter the polarization of the reflected light. Turning the polarizer on the lens so its polarization axis is perpendicular to that of the illumination source will reduce the glare and make scratches and digs in the surface visible.

Without Polarizers

Using Polarizers

Problem 3:

The object has both highly reflective and diffuse areas.

Solution:

Using two polarizers with perpendicular orientation will eliminate hot spots in the image caused by the metallic parts. The rest of the field will be evenly illuminated due to the diffuse areas reflecting randomly polarized light to the lens.

Without Polarizers

Using Polarizers


Build Your Own Light Pipe

A Step-by-Step Guide to Build Your Own Light Pipe: Light guides are physical tools that transmit light along a path from an illumination source while maintaining consistent brightness. They're often used in products to create a line of light that is consistently bright and dynamic-looking, even when it comes from a single source. Unfortunately, there is no simple how-to guide available for making them; the process is kept tightly guarded by optical engineering companies. Here is a quick process you can follow to make your own light guide.

We built a light pipe, which, in principle, can be viewed as a fiber optic cable. All the sides are polished to a clear finish so that the light can be reflected internally inside the pipe. If the light hits a section of pipe where the surface is not perfectly reflective, like if it’s gouged or scratched, it will use that section as an “emitter” and attempt to escape the pipe. We take advantage of this by covering the bottom surface of the pipe with dimples. To make the path consistently bright, we arranged the dimples in a gradient with fewer dimples closer to the source, where the light from LEDs is brightest, and more dimples to catch the light farther along the light path (where the light is dimmest).

This step-by-step guide describes our process of quick experimentation and prototyping to achieve good illumination without having to run extensive light bouncing simulations. The design is created in Illustrator using blend modes, and the light pipe is manufactured with a laser cutter on 1/4″ thick clear acrylic. Please note that results will vary depending on many factors. The optical design, LED brightness, and beam angle are important factors to consider.
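For readers who prefer code to Illustrator blends, here is a rough Python sketch of the same dimple-gradient idea (the square-root spacing ramp is our own assumption; the article's blend tool shapes the gradient by hand instead):

```python
import numpy as np

def dimple_positions_in(length=8.0, margin=0.25, n_dots=700, power=2.0):
    """x-positions (inches) of dimples along the pipe: spacing shrinks toward
    the far end, so dot density rises where the LED light is dimmest."""
    t = np.linspace(0.0, 1.0, n_dots)
    span = length - 2.0 * margin
    return margin + span * t ** (1.0 / power)   # power=2 gives a sqrt ramp

xs = dimple_positions_in()
print(xs[1] - xs[0], xs[-1] - xs[-2])  # first gap >> last gap: sparse -> dense
```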

1: Open a new Illustrator document and draw a rectangle. We chose 8″ x .5″.

2: Make a guide and space it roughly 1/4″ away from the edge of the rectangle. This is where the dimple pattern will begin. (Leaving a 1/4” margin allows us to later mask off the area to prevent seeing excessive LED brightness near the source in the light pipe.) Now draw a circle with a .005″ diameter.

3: ALT + Click and drag the circle to duplicate it, and move it to the other side of the rectangle. The two circles mark the two ends of the dimple pattern.

4: Select both circles and create a blend.

5: Open up the blend options to fine-tune the blend.

6: Change the spacing to “Specified Steps” and set the number to 700.

7: You should see 700 evenly spaced dots! In order to get them to start off sparsely spaced and then get closer together, we’ll modify the path of the blend. Select the Anchor Point Tool (Shift + C).

8: Use the white arrow to select the first point and drag it until you see the spacing start to change.

9: Create another guide halfway along the path, and then drag the anchor point (with the white arrow selection tool) just past the guide. Repeat the same process with the last circle, but move its anchor point about half an inch to the right.

10: You should now have a gradient spacing of 700 circles becoming more closely spaced as you move from left…

11: …to right.

12: Once you are satisfied with the blend, expand it.

13: ALT+Click and drag the blend to copy it, and place it at the bottom of the rectangle.

14: Select both blends and then use the instructions from Step 4 to create another blend.

15: Use the instructions from Step 5 to fine-tune the new blend.

16: We chose to give our pattern a little bit of randomness, so we ALT+Clicked and dragged the blend down and to the right ever so slightly.

17: With that, we’re ready for the laser cutter! This process is pretty quick to prototype (roughly 15 minutes from start to finish), so explore different gradient patterns and techniques, and try light pipes that are curved or bent (just keep in mind the critical angle).

18: Next, we raster the dot pattern on 1/4″ thick acrylic. Be sure the acrylic surface is clean and scratch-free.

19: Different laser cutters have different settings for varying materials and thicknesses, so keep that in mind when choosing your settings. We used a speed of 80, power of 50, and maximum PPI for the raster. For the vector cuts, we made 4 passes at a speed of 4, power of 95, and maximum PPI.

20: Now take your light guide and examine the edges. Make sure the material is transparent; otherwise you’ll need to do some hand-polishing.

21: The slightly bumpy yet clear finish the laser leaves after a cut works great for our purposes. You’ll want to surround all clear surfaces with white reflective paper or tape, and mask off the quarter inch where the light first enters the pipe.

22: Here are the results with a super-bright LED! You can see that the light starts to drop off a little toward the end of the pipe.


23: We could come back to tweak the blend pattern (reducing the number of dots at the beginning of the light pipe and increasing it toward the end).


Best Practices for a Better Imaging System

Whether your application is in machine vision, the life sciences, security, or traffic solutions, understanding the fundamentals of imaging technology significantly eases the development and deployment of sophisticated imaging systems. While advancements in sensor and illumination technologies suggest limitless system capabilities, there are physical limitations in the design and manufacture of these technologies. Optical components are not an exception to such limitations, and optics can often be the limiting factor in a system’s performance. The content provided in this guide is designed to help you specify an imaging system, maximize your system’s performance, and minimize cost.

In this blog you will find a compilation of best practices for creating sophisticated, cost-effective imaging systems that are applicable to most applications. While the following list is nearly exhaustive and should be considered when designing any imaging system, every application is unique and additional considerations may be required.

Bigger, in many cases, is better. Allow ample room for the imaging system – Understanding a system's space requirements before building is especially important when high resolution and high magnification are required. While recent advancements in consumer camera technology have yielded strong results in a small package, they still do not approach the capabilities required for even intermediate-level industrial imaging systems, partially because of their size limitations. Many applications can require complex light geometries, large diameter and long length lenses, and large cameras, in addition to the cabling and power sources required to operate some of the equipment. Avoid having to sacrifice system performance just because the system's space requirements were not considered. It is often advantageous to specify the vision portion of a system first, as it is typically easier to arrange the electronics and mechanics around the vision portion than the other way around. It is also important to remember that the illumination scheme is part of the vision system, and the geometry of the object under inspection can often necessitate the use of a large light source such as a diffuse dome.

Don’t believe your eyes – The human eye and brain work together to form an extremely advanced imaging and analysis system that is capable of filling in information that is not necessarily there. Additionally, humans see and process contrast differently than imaging systems. Software analysis should be used to ensure image quality and performance requirements are met. Images that look good to a human viewer may not be usable with an algorithm.

Danger! Don’t get too close – Due to the constraints of physics, attempting to look at fields of view that are too large relative to a lens’s working distance places excessive demands on the design of the optical component and can decrease system performance. It is recommended that a lens be chosen such that the working distance is roughly two to four times as long as the desired field of view is wide, in order to maximize performance while minimizing cost and complexity. Remember Point 1 and consider the imaging system’s space requirement before building the system. This practice also applies to the relationship between sensor size and focal length. It is best to have focal length to sensor diagonal ratios of two to four to maximize performance.
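A trivial sanity check for that rule of thumb might look like this (Python; the numbers are illustrative):

```python
def wd_to_fov_ratio_ok(working_distance_mm, fov_width_mm):
    """Rule of thumb from the text: keep WD at roughly 2-4x the FOV width."""
    ratio = working_distance_mm / fov_width_mm
    return 2.0 <= ratio <= 4.0, round(ratio, 2)

print(wd_to_fov_ratio_ok(300, 100))  # (True, 3.0)  -- comfortable design space
print(wd_to_fov_ratio_ok(100, 100))  # (False, 1.0) -- expect cost and performance pain
```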


Light up your life. It really does matter – While it can seem like an art form, selecting the appropriate lighting geometry is highly scientific. In order for a lens and sensor to work together effectively, strong contrast must be produced by properly lighting the object. The characteristics of the object under inspection and the nature of any defects must be understood so that the proper illumination geometry is used. Keep in mind that sometimes these lights can be very large. Learn more about illumination geometries in our blog on Choosing the Correct Illumination.

Color matters too – The wavelength(s) selected for the illumination can have an enormous impact on improving or reducing system performance. For instance, in an application using both high quality optics and a top-of-the-line sensor, switching from broadband to monochromatic illumination, or between specific wavelengths, can improve performance by a significant amount. As with the previous point, the proper choice of wavelength can make the difference between high contrast and no contrast; depending on whether the wavelength is correctly chosen, the color of illumination can determine the success or failure of a system. Learn how proper filtering techniques can have an impact on system performance in our Filtering in Machine Vision application note.

There can be only one; high resolution and large depths of field struggle to coexist – As shown in f/# (Lens Iris/Aperture Setting), maximizing resolution and depth of field requires the same variable, the lens’s f/#, to move in opposite directions. Essentially, it is impossible to have very high resolution over a large depth of field. Physics dictates that this cannot be done and compromises will need to be made or more elaborate solutions, such as using multiple imaging systems, will need to be employed.
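The tension can be seen from two standard diffraction approximations (a Python sketch; the 2.44·λ·(f/#) Airy-disk diameter and the ~2·λ·(f/#)² depth of focus are textbook rules of thumb, not formulas from this article):

```python
def airy_disk_diameter_um(f_number, wavelength_um=0.55):
    """Diffraction-limited spot size grows linearly with f/#."""
    return 2.44 * wavelength_um * f_number

def depth_of_focus_um(f_number, wavelength_um=0.55):
    """A common diffraction-based approximation: depth of focus grows with f/#^2."""
    return 2.0 * wavelength_um * f_number ** 2

for f in (2, 4, 8, 16):
    print(f, round(airy_disk_diameter_um(f), 1), round(depth_of_focus_um(f), 1))
# low f/# buys resolution but collapses depth of field; high f/# does the reverse
```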

There is no universal solution; a single lens that can do everything does not exist – As resolution requirements increase, the ability to reduce aberrations (attributes of optical design that adversely affect performance) over a wide range of working distances and fields of view becomes increasingly difficult. Even without budget constraints, there are limitations. For this reason, a wide range of lens solutions is required even for similar applications.

Thoroughly understand the object to be inspected – The foundation of imaging is the ability to produce the highest level of contrast possible on the object under inspection, so an understanding of the object’s properties, such as its materials or finishes, is critical to the application’s success. Additionally, it is not enough to just know what parts are considered good or bad. Rather, to guarantee high levels of reliability and repeatability, the range of details that will be inspected and the margins for good and bad must be understood.

Be a control freak – The ability to control the environment into which the imaging system is deployed can significantly affect the reliability and consistency of results. Additionally, it also reduces the likelihood of unintended problems. Whether using filters to increase contrast, baffles to eliminate unwanted light from entering the system, or measurement devices to monitor light sources for spectral stability, controlling the environment will reduce unforeseen difficulties in the future. Some of these techniques are extremely low cost ways to protect and increase the performance of an expensive imaging system.

Be the squeaky wheel – Do not be afraid to ask why something will or will not work. Suppliers should be able to explain why different components in the system are or are not capable of achieving the desired result. The answer will not always be the same; sometimes the issues are laws of physics limitations and sometimes they are deficiencies related to the design or fabrication of the component. Optical manufacturing is a science, and the designers and manufacturers should be capable of explaining why things are happening.

Make a list; understand and define the fundamental parameters of the imaging system – By narrowing down the specific parameters required for the imaging system, the wide range of available lenses and sensors can be reduced to a manageable selection of components. The fundamental parameters of an imaging system are a great place to start and are detailed in the next section.

