Glossary

2-CCD HDR – A method of capturing high dynamic range (HDR) images using a beam-splitter prism to simultaneously send the identical high contrast scene to two precisely-aligned CCDs. By individually adjusting the exposure settings of the two CCDs, one imager can be set to properly expose the darker portions of the scene while the other can properly capture the brighter areas of the scene. An image processing algorithm, either in the camera or on an external computer, can then “fuse” these two images together to extend the dynamic range of the image beyond that of a single imager.
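
To make the fusion step concrete, here is a minimal sketch (not JAI's actual algorithm), assuming two already-aligned 8-bit frames and a hypothetical 4:1 exposure ratio between the two sensors:

```python
import numpy as np

def fuse_hdr(short_exp, long_exp, sat_threshold=200, exposure_ratio=4.0):
    """Toy HDR fusion of two aligned frames from a 2-sensor prism camera.

    Where the long exposure nears saturation (bright areas), substitute the
    short exposure scaled by the exposure ratio; elsewhere keep the long
    exposure, which has better signal-to-noise in the dark areas.
    """
    long_f = long_exp.astype(np.float32)
    short_f = short_exp.astype(np.float32)
    fused = np.where(long_exp >= sat_threshold, short_f * exposure_ratio, long_f)
    return fused  # extended-range values; compress or rescale for display

# Synthetic example: a brightness ramp that saturates the long exposure
scene = np.arange(0, 512).reshape(16, 32)
long_exp = np.clip(scene, 0, 255).astype(np.uint8)
short_exp = (scene // 4).astype(np.uint8)
print(fuse_hdr(short_exp, long_exp).max())  # > 255: beyond a single imager
```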

2-CCD / 2-CMOS multi-imager – A camera containing two CCDs or two CMOS sensors affixed to a beam-splitter prism and precisely-aligned to a common optical plane such that the same image is simultaneously passed to both imagers. By varying the sensors and the filter coatings on the prism, a 2-CCD/2-CMOS camera can be designed for a variety of multi-spectral configurations, such as simultaneous color images and near-infrared imaging of the same scene. Some older 2-CCD models were also designed for monochrome HDR, color HDR, and/or low-noise double-speed operation, though no current models are available with those capabilities.

3-CCD / 3-CMOS – Describes color CCD and CMOS cameras which have separate sensors for the Red, Green and Blue color bands. This is the typical construction of broadcast cameras, and this technology has been adopted for certain industrial and medical applications. In 4-CCD cameras, an extra chip has been added to simultaneously detect the near infrared light spectrum. The major advantage of this architecture is that the camera has full resolution in all 3 color bands.

A

AcquisitionTransferStart (Delayed Readout) – A camera setting used to make a camera output stored image data in response to an external trigger signal (delayed readout). The number of frames that can be acquired and held in the camera for delayed readout depends on the camera's storage capacity and the resolution/bit depth of the images.

Action Commands (and control) – A feature of the GigE Vision standard that enables cameras to execute a pre-configured action when they receive an action command. Action commands can be sent as either unicast or broadcast messages; by broadcasting, instructions can be given to multiple cameras simultaneously. A camera equipped with this function can even give instructions to multiple cameras of different types. Although this method is subject to some jitter and delay, it is useful for controlling multiple cameras simultaneously.

Active Pixels – The pixels on a CCD or CMOS imager which contain image information when read out of the camera. This is typically less than the total number of pixels on an imager, as pixels around the edges of the sensor may be used as optical black (OB) pixels – used to establish black levels or to help with color interpolation – or may not be read out at all. The term “effective pixels” includes the active pixels plus the optical black pixels, i.e., all pixels that can be read out of the sensor, which may still be less than the total number of pixels on the chip. Please note that these terms are not always used consistently, especially in the consumer camera world, where “effective pixels” is often used in place of “active pixels.”

ALC (Automatic Level Control) – A JAI function that combines the automatic gain control (AGC/Auto Gain Control) and automatic exposure control (ASC/Auto Shutter Control) functions to handle changes in scene lighting. The function lets users define various parameters relating to the mix of shutter and gain adjustments that will be used when light levels change and how quickly the camera will react to such changes.

Analog Camera – Provides output in analog form, typically, but not necessarily, according to a video standard (CCIR / PAL for Europe and EIA/NTSC for USA and Japan).

AOI – Most commonly stands for "Automated Optical Inspection," which refers to any system that uses cameras and software programs to automatically search for specific defects in manufactured products or sub-components and generate pass/fail results. AOI systems are designed to replace manual inspections, providing both greater speed and accuracy than human inspectors. Systems can inspect for a single type of defect such as size, position, presence/absence, discoloration, scratches, etc., or can simultaneously inspect for multiple defects. In the past, the same abbreviation was also sometimes used for "Area of Interest" though today that usage has been almost completely replaced by ROI (Region of Interest).

Applications (examples)

Cotton
Industry where JAI targets OEMs or integrators that make equipment to inspect and separate cotton for foreign materials.

Food industry
Target customer group that includes OEMs for inspection and sorting of food products by grade, color, size, or other characteristics, and for removal of foreign material.

Life sciences industry
Industry focused on equipment and processes to study and examine living organisms. Life Sciences encompasses a wide array of fields including, but not limited to, microbiology, biotechnology, medical imaging, pathology, genomics and ophthalmology.

PCB inspection
Automated imaging of printed circuit boards or electronic subsystems to determine proper component placement, identify defects, and evaluate overall quality.

Recycling
Industry where JAI targets OEMs or integrators that make equipment to identify and separate recyclable materials.

Area Scan - Denotes a camera (or imager) architecture in which images are captured in a square or rectangular form in a single cycle (similar to the images created by a DSLR or cell phone camera). This image is then read out in a single frame with its resolution being a function of its height and width in pixels. The alternative to area scan is Line Scan.

Automated Imaging – A term covering all uses of cameras in industrial applications where image processing (using in-camera or external computer algorithms) is involved. Machine Vision and Factory Automation are subcategories of Automated Imaging.

Auto-iris lens – Cameras operating in outdoor environments are faced with varying light conditions. When the light level changes, the images captured by the camera become either too bright or too dark. An auto-iris lens provides a solution to this problem. These lenses have an electric motor-driven iris which is opened or closed according to signals fed to it from the camera. A camera equipped with an auto-iris lens can thus produce a video signal of constant brightness by opening or closing the iris as the light level changes.

B

Binning – A process that combines the signal values from two or more adjacent pixels to create a single “virtual” pixel having a higher signal level. The result is an image with lower pixel resolution (pixel detail) but with higher sensitivity. Common binning schemes include combining every two adjacent pixels in each horizontal line (horizontal binning), combining every two adjacent pixels in each vertical column (vertical binning), or combining each group of four pixels – two horizontal and two vertical (2 x 2 binning) – to create an image with four times greater sensitivity but ¼ the resolution. Some JAI cameras offer the option to have pixel values averaged when they are combined instead of being added together.
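
A minimal numpy sketch of 2 x 2 binning (illustrative only; in a real camera this happens on the sensor or in the FPGA), including the averaging option mentioned above:

```python
import numpy as np

def bin_2x2(img, average=False):
    """2 x 2 binning: combine each 2x2 pixel block into one virtual pixel.

    Summing quadruples the signal (higher sensitivity); averaging, offered
    as an option on some JAI cameras, keeps values in the original range.
    """
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    binned = blocks.astype(np.uint32).sum(axis=(1, 3))
    return binned // 4 if average else binned

img = np.random.randint(0, 256, (480, 640), dtype=np.uint16)
print(bin_2x2(img).shape)  # (240, 320): 1/4 the resolution
```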

Blemish Compensation – Defective pixels can occur on image sensors over time. This camera feature substitutes values for defective pixels by interpolating using the surrounding pixels. The function works on defective (bright) pixels that are not adjacent to each other.

Blooming – The term used to describe when a set of pixels in an image is oversaturated by a bright spot (the sun, a light, a laser) and the charge contained in those pixels spills over into adjacent pixels causing the bright spot to “bloom” in a radiating pattern.

Brightness (Hue and Saturation) – Brightness is one aspect of color in the RGB color model. While hue defines the peak wavelength of the color, and saturation defines how “pure” the color is (how narrow or wide is the waveband), brightness defines the intensity or energy level of the color. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.

Burst Mode (Burst Trigger Mode) – In this mode, a single external trigger signal causes the camera to acquire a "burst" of multiple images at, or close to, the sensor's maximum frame rate – a rate faster than the camera's interface can support. The camera temporarily stores the images in memory so they can then be read out at the slower speed of the interface. On a GigE Vision camera, for example, this enables the capture of image sets with interframe timing that is faster than could normally be handled within the 1 Gbps bandwidth limit of the interface. The number of frames that can be acquired in each burst depends on the camera's storage capacity and the resolution/bit depth of the images.
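
As a back-of-envelope illustration of the storage limit (hypothetical numbers, not a specific camera's spec):

```python
# Hypothetical example: 512 MB in-camera buffer, 8-bit monochrome frames
# at 2048 x 1536 resolution.
buffer_bytes = 512 * 1024**2
frame_bytes = 2048 * 1536 * 1          # width x height x bytes per pixel
frames_per_burst = buffer_bytes // frame_bytes
print(frames_per_burst)                # 170 frames can be held before readout
```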

C

Camera Link – Camera Link is a serial communication protocol designed for computer vision applications, based on the National Semiconductor Channel Link interface. It was designed to standardize the digital interface between industrial video products such as cameras, cables, and frame grabbers. The standard was created and administered by the global machine vision organization Automated Imaging Association (AIA), now part of the Association for Advancing Automation (A3).

Cat5e and Cat6 cables – Standard categories of Ethernet cables. Both use four twisted pairs of copper wires; however, Cat6 features more stringent specifications for crosstalk and system noise, and supports higher signal frequencies – up to 250 MHz compared to 100 MHz for Cat5e. For this reason, Cat6 is strongly recommended for use with GigE cameras, especially if longer cable lengths are to be used.

CCD sensor - CCD stands for Charge Coupled Device. This term is generally used for the imaging sensor used in CCD cameras. A CCD sensor is divided up into discrete picture elements, called pixels, in the form of a matrix. The sensor converts light into electric charge, proportional to the amount of incident light. The charge is then transformed to a voltage as each pixel is read out.

CCIR – Refers to an analog video and television standard published in 1982 by the International Telecommunication Union's radiocommunication sector (formerly known as the CCIR). This became the dominant video standard used for monochrome video in Europe and several other regions around the world. It is characterized by interlaced video running at 25 frames per second (50 fields per second) with a standard screen resolution of 752 pixels by 582 lines. In parts of the world where the standard power frequency is 60 Hz, such as North America, a different standard is used. See EIA for a description.

Chromatic Aberration Correction (lateral) – This camera function corrects for the chromatic aberration (color shifts) in an image caused when a lens produces slightly different magnifications of different color wavelengths. This can cause some of the color components (R,G,B) for the same point on a target to spread onto adjacent pixels on the sensor. Distortion can be amplified in prism cameras due to the additional refractive element in the optical path (the prism). In cases where chromatic aberration exists, if it is not corrected it will result in color "fringes" appearing around objects towards the edges of the image.

Chunk Data – A camera feature that adds camera configuration information to the image data that is output from the camera. Embedding camera configuration information in the image data allows you to use the serial number of the camera as a search key and find specific image data from among large volumes of image data. In addition, when images are acquired with a single camera in sequence under multiple setting conditions, you can search for images by their setting conditions.

Clock Frequency – Refers to the frequency of a sine wave typically generated by an oscillator crystal that sets the pace for how fast operations can take place inside the camera. Most commonly, a “pixel clock” will guide the speed at which the internal electronics can read out pixel information from the imager (CCD or CMOS) and pass it to the camera interface. The higher the clock frequency – typically expressed in MHz (millions of cycles per second) – the faster data can be extracted from the sensor, enabling a faster frame rate. For some interfaces, a second clock frequency governs how fast the data can then be organized and sent out of the camera. This frequency (the Camera Link pixel clock, for example), may be different than the pixel clock used for the imager.

CMOS – Complementary Metal Oxide Semiconductor. Originally used for µ-processor or memory chips. Can also be used to design image sensors. In the past, image sensors using CMOS technology had major drawbacks in the areas of noise and shutter technology, thus making them less interesting to use than CCD sensors. Today, new generations of CMOS imagers have alleviated many of these issues enabling them to overtake CCDs as the dominant type of image sensor used in machine vision cameras, as well as many other types of cameras.

C-Mount – A standard type of lens mount using screw threads to attach the lens securely to the camera, even in high-vibration factory environments. Because of the diameter of the C-mount opening, these lenses typically cannot be used on imagers with a format larger than 4/3” in diameter.

CoaXPress Interface – A point-to-point serial digital interface standard for machine vision cameras. CoaXPress uses traditional coaxial cables, similar to those used for older analog cameras, but adds a high bandwidth chipset capable of operating at up to 12.5 Gbps per cable (more than 12X Gigabit Ethernet speeds). It supports cables in excess of 100 m in length without repeaters or hubs.

Color Enhancer – A function available in some JAI cameras that boosts the intensity of certain colors in the image as specified by the user. Three primary and three complementary colors can be selected for enhancing up to 2X their normal intensity.

Color Space Conversion – A process that changes the standard color space (RGB) that is used to define the colors in an image into other ways of specifying color information. In JAI cameras equipped with color space conversion capabilities, available color spaces include sRGB, AdobeRGB, UserCustom RGB, CIE XYZ, and HSI. (HSI is not supported on some camera models).

Counter Function – A camera function that uses the camera's internal counter to count change points in the camera's internal signals and allows that count to be read from the host side. This function is useful for verifying error conditions by checking the count value of internal camera operations.

CS-Mount – Similar to the screw-in C-mount, CS-mount has been used extensively in the security industry where smaller cameras and imagers are common. Due to focal length differences, adapters are available to enable C-mount lenses to be used on CS-mount cameras, however the reverse is not possible. A CS-mount lens cannot be used on a C-mount camera.

CXP Link Sharing – A feature of the CoaXPress interface standard (v2.0 and later) that allows cameras to be connected to multiple PCs. In Sharing Mode, the captured images can be divided and sent to each PC. In Duplicate Mode, the same captured image can be copied and sent to each PC.

D

Debayering – An interpolation function inside a camera that converts Bayer sensor data into an RGB pixel format for outputting (for example, RGB8, RGB10V1Packed, RGB10p32). In addition to alleviating the need for Debayering on a host computer, in-camera conversion to RGB format enables Color Enhancer and Color Space Conversion to be used on single-sensor (Bayer) cameras, instead of being limited exclusively to multi-sensor (prism) cameras.

Decimation – A camera function that performs downsampling of the image (typically by omitting every other pixel) in both the horizontal and vertical direction. This reduces the file size for processing or storage while maintaining the full field of view of the image.
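
In numpy terms, 2:1 decimation in both directions is equivalent to the slicing below (a sketch of the concept, not the in-camera implementation):

```python
import numpy as np

img = np.random.randint(0, 4096, (1200, 1600), dtype=np.uint16)
decimated = img[::2, ::2]    # keep every other pixel in both directions
print(decimated.shape)       # (600, 800): 1/4 the data, same field of view
```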

Dichroic Coating – A coating placed on the face of a prism or other piece of optical glass that allows specific wavelengths of light to pass through while reflecting the remaining wavelengths. Dichroic coatings are used in JAI’s multi-imager prism cameras to split light into red, green, and blue wavelengths for color imaging, and can be used to separate near-infrared light for multi-spectral imaging. They can also be customized for specific spectral analysis tasks.

Digital Camera – All camera sensors are based on analog technology, i.e., pixel wells collect electrons generated by incident photons, producing an analog signal value. In a digital camera this electrical charge is converted to a digital signal using an A/D converter before it is transferred out of the camera, typically as an 8-bit, 10-bit, or 12-bit value. Modern CMOS image sensors perform the A/D conversion on the sensor itself, enabling data to be output from the camera in formats already suitable for computer processing. Older analog cameras sent analog signals out of the camera, which were convenient for connecting directly to old analog TV monitors, but required analog-to-digital conversion inside the computer before image data could be analyzed by computer algorithms.

DSNU – Dark Signal Non-Uniformity. This refers to variations in individual pixel behavior that can be seen or measured even in the absence of any illumination. In simple terms, it refers to how different pixels perceive “black” or the absence of light. Most of these “dark signal” variations are affected by temperature and integration time. Other variances are driven more by electronic issues (on-chip amplifiers and converters) and remain fairly constant under different thermal conditions. These “fixed” non-uniformities are typically considered part of an imager’s “Fixed Pattern Noise” (see Fixed Pattern Noise/FPN). Compensation for DSNU issues is typically made at the factory as part of the camera testing process.

DSP – Digital Signal Processor. Some color cameras incorporate a DSP for the enhancement and correction of images in real time. Typically controlled parameters include gain, shutter, white balance, gamma, and aperture. DSPs can also be used for edge detection/enhancement, defective pixel correction, color interpolation, and other tasks. The output of a DSP camera is typically analog video.

Dual tap – This typically refers to a divide-and-conquer method used for reading information from a CCD image sensor whereby the sensor is divided into two regions – either left/right or top/bottom – and the pixels are read from both regions simultaneously. The frame rate of the sensor is effectively doubled, minus a little overhead, without resorting to overclocking, which increases noise. CMOS imagers, which have a more flexible readout architecture, may utilize many different taps to read out sections of the chip, producing high frame rates but also a phenomenon known as “fixed pattern noise.”

E

Edge Enhancer – A camera function that identifies boundaries within an image, such as lines or edges, and increases the contrast/sharpness of those boundaries.

EIA Interface – Also called RS-170, this refers to a standard for traditional monochrome television broadcasting in North America and other parts of the world where the typical power frequency is 60 Hz. The EIA standard calls for interlaced video at 30 frames per second (60 fields per second) with a standard screen resolution of 768 pixels by 494 lines. See CCIR for the European standard.

Encoder Control – A built-in feature available in some JAI line scan cameras. With encoder control, a camera can be directly connected to an encoder rather than receiving encoder signals via the frame grabber or other interface connection. Direct connection enables the camera to generate trigger signals or detect the scanning direction of the subject in response to signals output from the rotary encoder.

Event Control – A camera feature that outputs a signal change point inside the camera as information indicating an event occurrence (an event message). Events that can use the Event Control function include AcquisitionTrigger, FrameStart, etc. (varies depending on the camera model). For each event type, you can specify whether or not an event message is sent when that event occurs.

Exposure Active (EEN) – A signal which can be output externally from the camera, indicating the timing during which the sensor is accumulating charge (i.e., the exposure).

F

Field of View (FOV) – Describes the area that the camera and lens are looking at. In machine vision inspection applications, this is typically expressed in a size measurement (e.g., 16 cm wide by 9 cm high). In traffic or surveillance applications, this can also be expressed in degrees (e.g., a 40-degree horizontal FOV).
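
The angular FOV follows from the standard thin-lens relationship between sensor size and focal length. A small sketch with assumed example values (8.8 mm sensor width, 12 mm lens):

```python
import math

# Angular FOV from focal length and sensor width (a generic optics
# approximation, not a JAI-specific formula).
sensor_width_mm = 8.8
focal_length_mm = 12.0
fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
print(f"{fov_deg:.1f} degrees horizontal FOV")  # about 40.3 degrees
```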

FireWire – See IEEE 1394

Fixed Pattern Noise (FPN) – A non-random type of visible and/or measurable image “noise” that results from electrical signal differences, or “offsets,” that are not related to the amount of light striking the imager. This is most commonly seen in CMOS imagers where each pixel typically has its own amplifier and, in order to increase readout speed, “strips” of pixels are read out simultaneously through multiple amplifiers. The use of many different amplifiers, each with slight variations in electrical characteristics, can result in a “pattern” of slightly lighter and darker areas in the image. This is typically perceived as a vertically-oriented pattern that overlays the image. Because CCDs shift all pixels one row at a time through the same readout register, they are virtually immune to Fixed Pattern Noise, except in the case of multi-tap output where careful “tap balancing” is required to avoid a similar issue. FPN is considered a type of Dark-Signal Non-Uniformity (see DSNU) and can be compensated by measuring and mapping the pattern of amplifier differences and applying an image processing algorithm to adjust for these variances. This function is typically built into the camera and is not adjustable by the camera user.

Flat-field Correction – An in-camera technique that corrects for slight pixel-to-pixel sensitivity variations across an image sensor. Essentially, this calibration technique makes small adjustments in the gain applied to each pixel such that when the camera is pointed at a smooth, white card which has been evenly lit at less than 100% saturation, all pixels will have the same pixel value (see PRNU).
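
A minimal sketch of the idea (real implementations typically also subtract a dark frame first; all values here are assumptions):

```python
import numpy as np

def flat_field_correct(raw, flat):
    """Derive a per-pixel gain map from a reference image of an evenly lit
    white card captured below saturation, then apply it to a raw frame."""
    gain = flat.mean() / flat.astype(np.float32)
    return np.clip(raw.astype(np.float32) * gain, 0, 255).astype(np.uint8)

flat = np.random.randint(180, 220, (480, 640)).astype(np.float32)  # reference
raw = np.random.randint(0, 256, (480, 640)).astype(np.uint8)       # live frame
corrected = flat_field_correct(raw, flat)
```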

Four tap – Also called “quad-tap,” this is a divide-and-conquer method of reading information from a CCD image sensor whereby the CCD is divided into four regions and the pixels are read from all four regions simultaneously. The frame rate of the sensor is effectively increased by a factor of four, minus a little overhead, without resorting to overclocking, which increases noise. CMOS imagers, which have a more flexible readout architecture, may utilize many different taps to read out sections of the chip, producing high frame rates but also a phenomenon known as “fixed pattern noise.” See also dual-tap.

Frame Grabber (also sometimes called Acquisition Board) – A board inserted into a PC to acquire images from a camera directly into the memory of the PC, where the image processing takes place. Certain frame grabbers also have on-board processors for doing image processing independently of the host computer.

Frame Rate – The rate at which an area scan camera can capture and read out an image. This is usually expressed in “frames per second,” with the frame rates of typical machine vision cameras ranging from a few frames per second up to more than 200. Frame rate can often be increased by using binning (though not always), and by using partial scanning or region of interest (ROI), whereby only a portion of the active pixels are read out of the camera during each frame period.

Frame Start Trigger (Line Scan) – A camera setting used to tell a camera to capture an image. In line scan cameras, the Frame Start Trigger setting tells the camera to consolidate a user-defined set of line data into a frame for outputting. Data Leader and Data Trailer are added in every frame. The number of lines in one frame is set by Offset Y and Height of [Image Format Control]. After receiving a Frame Start Trigger signal, the camera will skip the image data from the number of lines indicated by Offset Y and then send the data of Data Leader, the image data, and the Data Trailer. Upon completion of data transmission for one frame, no data will be sent until the next Frame Start Trigger is received.

G

Gain – An amplification of the signal collected by the pixels in the CCD or CMOS imager. Applying gain is like “turning up the brightness knob” on the image. However, gain also increases the noise in the image, which may make it unusable for some types of machine vision inspection or measurement tasks. In some cases, “negative gain” can be applied to “turn down” the brightness of an image, though usually this is done with the shutter or the lens iris.

Gamma Correction – Adjusts the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing. In a strictly linear relationship (gamma=1.0), a half-filled pixel well is output in 8-bit mode at a pixel value of 127 or 128 (half of the full value of 255). But gamma correction uses a nonlinear function to map well values to a different curve of pixel values. Sometimes this is done to mimic the responsiveness of a computer monitor or the human eye, which prefers a brighter set of gray tones (gamma = 0.45). Other times, it can be done to correct for high or low contrast within the image (see also Look-up Table).
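
A short numeric illustration of the nonlinear mapping (using the display-oriented gamma of 0.45 mentioned above):

```python
import numpy as np

gamma = 0.45
lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
print(lut[128])   # ~187: a half-filled well is brightened, not left at 128
```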

General Imaging – The term collectively describing applications where typically no (or only limited) image processing is involved. Typically involves capturing an image and displaying it on a monitor, or recording images for later analysis. Surveillance is a type of General Imaging, as are surgical viewing applications.

GenICam - GenICam is a universal configuration interface across a wide range of standard interfaces, such as GigE Vision, Camera Link and IEEE 1394-IIDC, regardless of the camera type and image format. It allows the user to easily identify the camera type and the features and functions available in the specific camera, and to see the range of parameters associated with each function. The core of the GenICam standard is the Descriptor File (in XML format) that resides in the camera, which maps the camera's internal registers to a standardized function list. GenICam is owned and licensed by EMVA (European Machine Vision Association).

Gigabit Ethernet – A computer networking standard for transmitting digital information in packets across a computer network at a speed of 1 billion bits of information per second (1 Gbps).

GigE Vision – An interface standard introduced in 2006 that uses Gigabit Ethernet data transmission (1000BASE-T) for outputting image data from industrial cameras. The GigE Vision standard is maintained and licensed by the Association for Advancing Automation (A3) and has become one of the most prevalent digital camera standards in the world. It utilizes standard Cat5e or Cat6 cables to transmit data up to 100 m at a rate of 1 Gbps (125 Mbytes/s). Because it is a networking standard, it also supports various multicasting and broadcast messaging capabilities that are not possible with point-to-point interfaces. Since its introduction, the standard has evolved to support additional Ethernet performance tiers. These include 2.5 Gbps (2.5GBASE-T), 5 Gbps (5GBASE-T), and 10 Gbps (10GBASE-T, also called 10 GigE). There are even a few machine vision cameras available in the market that can support 25 Gbps and 100 Gbps speeds.
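
As a rough illustration of what the 1 Gbps tier means in practice (ignoring protocol overhead; the frame size is an assumed example):

```python
bandwidth_bytes_per_s = 125_000_000        # 1 Gbps = 125 MB/s payload ceiling
frame_bytes = 1920 * 1080 * 1              # 8-bit monochrome Full HD frame
max_fps = bandwidth_bytes_per_s / frame_bytes
print(f"{max_fps:.0f} fps")                # roughly 60 fps at this frame size
```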

GPIO – Stands for general purpose input and output. This typically refers to a set of functions and signal registers which can be accessed and programmed by users to perform a variety of fundamental camera tasks, such as triggering the camera, setting up a pulse generator or counter, and designating which inputs and outputs will be used for various tasks.

Gradation Compression (Mode) – A JAI camera mode that compresses the bit depth of captured images to enable scenes containing a wide range of pixel values to be output as a narrower set of intensity gradations. Maximum range of raw pixel values can be either 10 bits (0-1023) or 12 bits (0-4095). They are compressed using one or two user-defined knee points into 8-bit images for storage and display.

Grey Scale – This is another term for black-and-white, or monochrome imaging. It refers to images where all pixel values represent the level of light intensity, without including any color information. Thus, all pixels are expressed as varying shades of grey. The number of possible grey values depends on how many bits are used to hold each pixel value. In an 8-bit image, 256 values are possible. For a 10-bit image, 1024 different shades are available, while a 12-bit image can support 4096 different grey values.

GUF / GenICam Firmware Update – Refers to a standardized method of updating firmware in GenICam-compliant devices. Cameras that support this standard can be updated using a GenICam Update File (GUF) and the JAI GenICam Firmware Update Tool.

H

HDR (High Dynamic Range) – Refers to methods that can be used to minimize saturated and/or black pixels in scenes where the range of pixel intensity values exceeds that of the camera's sensor. Most cameras have a maximum sensor range of 12 bits (some may have less). High dynamic range methods can be used to extend the effective range of pixel values to 14 bits or more, which can then be output as raw values or in a compressed image format.

HDTV – Refers to the high-definition television standard developed for broadcasting. HDTV has several different levels, but it is most commonly used to mean a progressive-scan image with a resolution of 1920 pixels wide by 1080 lines high and a minimum frame rate of 30 frames per second. This is sometimes abbreviated as 1080p30 or just 1080p. More recently, both consumers and machine vision customers have shown a growing interest in 1080p HDTV running at 60 frames per second, which produces sharper images of moving objects.

Hue (Saturation and Brightness) – Hue is one aspect of color in the RGB color model. Hue defines the peak wavelength of the color, in other words, where it fits within the visible spectrum. Meanwhile, saturation defines how “pure” the color is (essentially, how narrow or wide is the waveband), and brightness defines the intensity or energy level of the color. This scheme, abbreviated HSB, is one of several similar (though not identical) color schemes used in machine vision.

I

ICCD – Intensified CCD, also Low Light Level CCD. An intensifier tube collects faint incoming photons and converts them to electrons, which are accelerated onto a scintillation plate. This plate is in turn coupled to a CCD sensor via fiber optics or a lens system. This can produce useful image quality even at nighttime under starlit or overcast skies (in night vision cameras or goggles, for example).

IEEE 1394 – Standard for serial transmission of digital data, which can be used for digital output cameras. Sony launched a family of industrial products based on this standard, but met with little success in the market. The major reason was that, at the time, IEEE 1394 still had virtually no acceptance in the PC market (it was not yet included on motherboards). In the summer of 2001 there was a brief period of increased activity and acceptance, and a number of camera manufacturers launched IEEE 1394 models. This standard was initially launched by Apple Computer under the name FireWire. While FireWire is still in use for many peripherals in the consumer market, FireWire cameras in the machine vision market have largely disappeared.

Image Compression – A process applied to an image to minimize its size in bytes for transmission or storage. Lossy compression sacrifices some amount of image quality without degrading it below an acceptable threshold. Lossless compression temporarily "rewrites" (encodes) the image data to make it smaller, while retaining the ability to restore it later to its original quality. See Xpress.

Image Flip (Reverse XY) – A camera function that outputs an image by inverting it horizontally and/or vertically. On color models, the Bayer array is changed by the Image Flip function. For example, BayerRG becomes BayerGB (ReverseY = 1), BayerGR (ReverseX = 1), or BayerBG (ReverseX = 1 and ReverseY = 1).

Image Processing – Refers to using the images captured by CCD/CMOS cameras for the purpose of Automated Imaging. The image is analyzed by special software to provide a singular result. Typically, the result is of a Pass/Fail or Go/No-Go type (examples: correct size of an object, correct position of an object, correct color of an object, correct number of objects, etc.).

Image Scaling – Changing the number of pixels within a digital image without changing the size of the image, i.e., applying a larger or smaller pixel pitch to an image. See Xscale.

Infrared light – Covers any light with a wavelength starting at the upper edge of the visible spectrum (700 nm) and extending all the way to 1 mm (the lower edge of the microwave spectrum). Within the infrared band there are several sub-bands. These include near infrared (from 700 to 1000 nm), short wavelength or SWIR (1000 to 3000 nm), mid wavelength or MWIR (3000 to 8000 nm), long wavelength or LWIR (8000 to 15,000 nm), and far infrared (15,000 nm to 1 mm). Because infrared wavelengths are longer than visible light, they are able to pass through the surface of some substances, especially organic materials and certain types of paints and plastics. This allows near infrared and SWIR cameras in particular to be used to see non-visible defects and to see through smoke and certain types of packaging. LWIR cameras are known as thermal-imaging cameras because they are able to “see” thermal emissions from both living creatures and factory machines.

Interlaced Scan – The basis for historical broadcast TV. It involves capturing the image in two fields (odd and even lines at separate time intervals). The major advantage of using interlaced scan was that it conserved video bandwidth, as the persistence of human vision merges the two fields back together. Many early industrial cameras used interlaced scan, but with the resulting drawback of image artifacts if the object was moving while being captured.

J

JPEG – (also jpg) A method of compressing an image to reduce file size. The standard was developed by a group called the Joint Photographic Experts Group, hence the abbreviation JPEG. The level of compression can be adjusted by the user to determine the proper trade-off between file size and loss of image quality.

K

Knee Function – Has some similarities to gamma correction in that it refers to a way to change the relationship between actual pixel well values and their corresponding output value. In this case a different function is applied to a specific portion of the I/O graph starting at a “knee point” where the slope of the graph is changed. Knee functions are commonly used to “compress” the bright parts of an image so that they do not saturate when attempting to brighten the darker areas of an image.
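
A minimal single-knee sketch on normalized (0-1) pixel values, with an assumed knee point of 0.7 and a reduced slope of 0.3 above it:

```python
import numpy as np

def apply_knee(x, knee_point=0.7, slope_above=0.3):
    """Linear below the knee point, reduced slope above it, so highlights
    are compressed instead of saturating."""
    x = np.asarray(x, dtype=np.float32)
    return np.where(x <= knee_point, x,
                    knee_point + (x - knee_point) * slope_above)

print(apply_knee([0.5, 0.9, 1.2]))  # the over-range 1.2 maps to only 0.85
```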

L

Lens Control (Birger Mount) – This camera feature allows a camera to be connected to a Birger Mount Adapter via RS-232C. Lens control commands sent via the CoaXPress interface to the camera can be transferred to the servo-equipped lens mount, thus enabling control of functions like focus and aperture.

Light Spectrum – A range of wavelengths within the electromagnetic spectrum which are perceived as “light” by humans or instruments. These include visible light, with wavelengths from 400 to 700 nm, infrared light (700 nm to 1 mm) and ultraviolet light (wavelengths from 10 nm to 400 nm). Wavelengths shorter than 10 nm are considered x-rays and wavelengths longer than 1 mm are considered microwaves.

Line Scan – Denotes a camera (or imager) architecture that gathers an image on a line-by-line basis (requires that either the object is moving or the camera is moving). An image of arbitrary size (number of lines) is captured into the memory of the host computer. This operation can be compared to a fax machine. The alternative to line scan is Area Scan.

Lookup Table (LUT) – A user-programmable method of modifying the relationship between the values recorded by each pixel and those that are output in the image for viewing or processing (see also Gamma Correction). While “pre-set” gamma correction lets the user adjust this input-output relationship using one of several pre-defined curves, a Lookup Table lets the user define a custom mapping of input values to output values. This is done by selecting an “index” and assigning it a “value.” For example, Index 0 would typically represent a pixel with an exposure value of 0 – a black pixel. But by assigning a value of 8 to Index 0 in the Lookup Table, any pixel with the value of 0 would be “boosted” to an output value of 8. By repeating this process across all “Indexes,” the user can define many different custom ways to boost or reduce the intensity of various pixel values within the image. The number of Indexes available is also referred to as “points,” so a “256-point Lookup Table” has 256 indexes that can be mapped to an adjusted output value. The number of values to which each index can be mapped is often different than the number of indexes. For example, 256 index points could each be mapped to a value between 0 and 4095. The Lookup Table function, in this case, would handle the task of calculating the proper input and output values based on whether the camera was operating at 8-bits, 10-bits, 12-bits, etc.
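
A small sketch of the Index-0 example from the text, using a hypothetical 256-point table whose indexes map to 12-bit output values:

```python
import numpy as np

# 256 indexes mapped linearly onto the 0-4095 (12-bit) output range...
lut = np.linspace(0, 4095, 256).astype(np.uint16)
lut[0] = 8                      # ...except index 0: black is boosted to 8

img8 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
out12 = lut[img8]               # each 8-bit pixel value indexes the table
```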

M

Megapixel – Classifies cameras that have a resolution of 1 million pixels or more. Most machine vision cameras today are megapixel cameras, though some camera manufacturers still produce sub-megapixel cameras. JAI’s highest resolution camera is currently the SP-45000, which has a resolution of 45 million pixels.

M-52 mount – A large format screw-type lens mount designed to accommodate cameras with very large area scan or very long line-scan imagers.

Mini Camera Link – A part of the Camera Link standard that specifies smaller connectors than the original Camera Link standard. Except for the size of the camera and cable connectors, Mini Camera Link adheres to all other Camera Link electrical and physical specifications. Thus, it is possible to connect a camera with a Mini Camera Link connector to a frame grabber with a standard Camera Link connector, provided the user has a cable with the proper connectors on each end.

Multi-imager – A term for any camera that has more than one CCD or CMOS sensor inside. In most cases, this requires the use of a prism to split the light and direct it to the multiple imagers. However, in some line scan cameras, multiple linear sensors may be placed side-by-side without the use of a prism block. These dual-line, tri-linear, and quad-linear arrangements create both timing challenges and parallax issues, depending on the application.

Multi-ROI - Refers to a camera's ability to select several smaller scanning areas within the full sensor area from a single exposure. By skipping areas that are not specified as regions of interest when scanning a frame, the ROI function can output the specified regions in a combined state or as individual frames. For No-Overlap Multi ROI, the scanning areas cannot be overlapped. For Overlap Multi ROI, the scanning areas can be overlapped.

N

Near Infrared Light (NIR) – The lowest band in the infrared light spectrum (see Infrared light) extending from 700 nm (the edge of the visible spectrum) to around 1000 nm. The longer wavelengths, though not visible to the naked eye, are able to penetrate through certain inks and plastics and penetrate below the surface of organic material like fruits and vegetables. By using CCDs or CMOS imagers that are sensitive to NIR light, cameras can be made to capture monochrome images showing various subsurface defects and hidden objects.

NTSC Standard – Similar to the EIA (RS-170) standard, except it refers to the analog color video imaging format in North America and other parts of the world. Basic characteristics are: interlaced color video, 30 frames per second (60 fields per second), standard resolution of 768 pixels by 494 lines.

O

OEM – Original Equipment Manufacturer. A customer who builds machines for a specific task or specific market segment, in large volume. Often purchases the JAI camera directly from JAI, but can also purchase JAI cameras (and other components) from a distributor. The machines built by an OEM are usually produced for a period of 3 – 5 years. Generally uses standard products but may require some degree of customization.

Optical Black – This is the term given to pixels around the edges of a CCD or CMOS imager which are fully functional from an electrical perspective, but have a metal shielding over the photosensitive area. By shielding these pixels, they will only output dark current and bias level, which can then be used as black reference for the signals that are read out of the active pixel region. Because they do not appear as part of the main image, when stating the resolution of a camera, JAI does not include the optical black pixels. However, some JAI cameras do allow the user to include optical black pixels in their full image readout. (see also Active Pixels).

Overlay Mode – A JAI camera mode that highlights specific areas within a displayed image by reducing the brightness of surrounding areas. Used as a visual aid to show users what the result of current settings will be for multi-ROIs or for photometry areas designated for ALC (Automatic Level Control) or AWB (Automatic White Balancing) functions.

P

PAL Standard – Similar to the CCIR standard, except it refers to the traditional analog color video imaging format used in Europe and other parts of the world. Basic characteristics are: interlaced color video, 25 frames per second (50 fields per second), standard resolution of 752 pixels by 582 lines.

Partial Scan – A technique for reading out a designated subset of the full number of lines from an imager. Because the full image is not read out, the frame rate of the camera is typically increased. Partial scan may involve pre-defined subsets of the image, or may be fully programmable, letting the user select the starting line and the height of the partial image.

P-iris Control – A camera feature that enables the camera to precisely control the iris diaphragm of a P-iris lens to maintain the best image with the highest resolution and depth of field. It can also combine with gain and electronic shutter to continuously maintain the appropriate iris position under changing lighting conditions (ALC function).

Pixels – Photosensitive sites that make up a CCD or CMOS imager. As a pixel is struck by photons, it produces a number of electrons which are stored as an electrical charge in a so-called “pixel well.” The more photons that strike a pixel, the more electrons are produced. After a specified exposure time, the charge from each pixel is read out as an analog signal value, which is then converted to a digital value corresponding to the intensity of light that struck that pixel. Taken together, all the pixel values create a digital image.

Pixel Clock – The name for a sine wave typically generated by an oscillator crystal that sets the pace for how fast operations can take place inside the camera. The pixel clock guides the speed at which the internal electronics can read out pixel information from the imager (CCD or CMOS) and pass it to the camera interface. The higher the clock frequency – typically expressed in MHz (millions of cycles per second) – the faster data can be extracted from the sensor, enabling a faster frame rate. For some interfaces, a second pixel clock governs how fast the data can then be organized and sent out of the camera. This frequency (the Camera Link pixel clock, for example), may be different than the pixel clock used for the imager.

Pixel Sensitivity Correction – An umbrella term covering both methods typically used in line scan cameras to correct small variations in the way the individual sensor pixels respond to the same lighting conditions. See DSNU and PRNU.

Power over Mini Camera Link – An extension of the original Camera Link standard that enables power to be supplied from a properly-equipped frame grabber to a Camera Link camera via the cable that is carrying data from the camera to the grabber. Power over Mini Camera Link specifies that mini-sized connectors are to be used, however the same approach can also be used with full-sized cables and connectors.

Prism – An optical element consisting of multiple polished glass pieces assembled in a way that refracts (bends) light as it passes through. By positioning the faces of the glass pieces in particular ways and applying various coatings to the surfaces, a prism can be used to split one scene into two identical images with half the intensity, or to send specific wavelengths (colors) of light to different sensors or imagers.

PRNU – Photo Response Non-Uniformity. This refers to pixel-specific differences in how an image sensor responds to equal amounts of light falling onto all pixels. In other words, when all pixels are exposed to the exact same shade of grey, they do not necessarily produce the exact same amount of signal. The small variations in response are called PRNU. This is a property of the sensor and is not related to lens properties which tend to distribute more light to the center of the imager and less towards the edges (see Shading Correction). A method called Flat Field Correction (FFC) is typically used with PRNU to make small adjustments in the gain applied to each pixel in order to “even out” the small pixel-to-pixel differences in response.

Progressive Scan – Captures an area scan image in a single line-by-line sequence, without dividing it up into odd and even lines as was done in the Interlaced Scan method. The major advantage of this is that sharp pictures of fast moving objects can be captured without any distortions or artifacts caused by the different timing of the odd/even lines.

PTP (Precision Time Protocol) – A capability within some cameras that allows the camera to work as a slave device in accordance with the Precision Time Protocol defined in IEEE 1588. When an IEEE 1588 master clock exists on the network to which the camera is connected, this function synchronizes the camera to the time of the master clock. When PTP is On, Scheduled Action Commands can be used on some cameras.

Pulse Generator – A camera function used to generate and shape signals for internally triggering the camera. The signals can also be output through a TTL/OPT out pin to be used for triggering other cameras or accessories.

Q

Q.E. (Quantum Efficiency) – QE is a quantity defined for a photosensitive device, such as photographic film or a charge-coupled device (CCD), expressing the percentage of photons hitting the photoreactive surface that produce an electron–hole pair. It is a key measure of the device's electrical sensitivity to light.

R

RCT Mode – In RCT mode, the imaging operation of the camera runs continuously, even though an image is not output from the camera until a trigger signal is input. This enables the automatic gain control (AGC) function, the automatic shutter control (ASC) function, and the automatic white balance (AWB) function to be maintained while waiting for the next trigger. As a result, proper exposures and color balance can be maintained under changing light conditions without needing to continuously output frames from the camera. RCT stands for "Reset Continuous Trigger."

Remote Head Camera – Collective term for cameras where the CCD sensor is placed at a distance from the control circuit, via a cable with a length of around 2 – 5 meters. Examples are CM-030GE-RH and CV-M53x series. Sometimes also referred to as Micro Head or Separate Head.

ROI – Stands for "Region of Interest." In CCD cameras, sensor limitations typically required complete lines to be read out from the sensor. Users could limit the readout to only a portion of the full number of lines on the sensor (see Partial Scan) but generally had to read out the full width of the sensor. CMOS image sensors provide much greater flexibility for defining partial readouts in both the vertical and horizontal direction of the sensor. ROI refers to an output constituting less than the full sensor area by specifying the width, height, and horizontal/vertical offset of the area to scan. Many JAI cameras are also equipped with a Multi-ROI capability allowing more than one region on the sensor to be read out from a single exposure and combined into a single frame.

ROI Centered – A special ROI mode available on some cameras that centers (horizontally) the defined ROI within the full sensor width. When set to ON while using the ROI Function (Single ROI), OffsetX is disabled and the X offset is automatically set at half of the difference between the ROI width and the full width.
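
The automatic offset calculation described above reduces to one line (example values assumed):

```python
full_width, roi_width = 2048, 1024
offset_x = (full_width - roi_width) // 2   # half the width difference
print(offset_x)                            # 512: the ROI sits centered
```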

S

Sensitivity – A broad term that describes how readily a camera or imager responds to small amounts of illumination, whether visible or non-visible wavelengths. There are several factors that impact sensitivity, including the size of the pixels, how well they collect light, how efficiently they convert light to electrical signals, and how much “noise” they generate during this process. For the output of a camera or imager to be useful, one must be able to distinguish between the image information (signal) and the noise component (see Signal-to-noise ratio). The lower the amount of illumination required to produce detectable image information, the more “sensitive” a camera or imager is said to be. Sensitivity specifications can be expressed in several ways including the amount of “lux” (lumens per square meter) required to generate a meaningful signal; radiometric measurements (describing the “power” of the light in terms of watts per square meter); and “absolute sensitivity” measurements which state the minimum number of photons that must strike a pixel before meaningful image information can be obtained.

Sensor Digitization Bits – The bit depth to be used in the analog-to-digital conversion process on the sensor (8, 10, or 12 bits). This is separate from the choice of pixel format that will be output from the camera (for example, 12-bit sensor digitization can be output from the camera as an 8-bit image).

Sensor Shutter Mode – A camera setting that specifies the Exposure Timing for JAI's Rolling Shutter models. Two options are available: Rolling or Global Reset.

Sequencer – A camera function that lets users define multiple combinations of exposure time, gain, ROI, and other settings which can be stepped through each time a trigger is received. This is particularly useful for quickly capturing multiple exposures of objects under inspection to adjust for areas or components with significantly different levels of reflectance. Users can specify the next index in the stepping sequence and the order in which indexes are executed. Multiple indexes can also be executed repeatedly. The Sequencer function in JAI cameras features two operation modes: TriggerSequencer mode and CommandSequencer mode.

Shading Correction – This is a compensation method designed to produce a flat, equal response to light under the same conditions in which the calibration routine was run. It is generally thought of as a coarse correction to variations in image brightness which typically result from optics-related shading issues originating with the lens and/or prism. In a multi-imager color camera, shading correction may be used to equalize the response of the three color channels.

Shutter – In film cameras, the shutter is an opaque device that physically “opens” to allow light to strike the film and then “closes” when the exposure is complete. For a digital sensor, an electronic shutter achieves the same effect by transferring the charge collected in the pixels to a light-shielded buffer area (transfer register) at the end of the designated exposure time. If all pixels are transferred at the same time, the shutter is said to be “global.” If pixels are transferred in a sequential fashion, the shutter is said to be “rolling.”

Signal-to-Noise Ratio – CCD and CMOS cameras produce several forms of “noise,” that is, variations in pixel charges not generated by the light striking the imager. These can be caused by thermal conditions, electronics, or simply the fundamental physical laws of how photons are converted to electrons. Noise can appear as random graininess, horizontal or vertical lines that become visible in low signal areas of the image, blotchy gradients between darker and lighter regions, and other manifestations. The signal-to-noise ratio is a measure of how much a typical image is corrupted by these noise sources. It is generally expressed in decibels – the higher the number, the “cleaner” the image.
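
A common way to compute the figure (one of several definitions in use; vendors may measure it differently):

```python
import math

signal_mean = 200.0              # mean pixel value of the useful signal
noise_std = 2.0                  # standard deviation of the noise
snr_db = 20 * math.log10(signal_mean / noise_std)
print(f"{snr_db:.1f} dB")        # 40.0 dB: higher numbers mean cleaner images
```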

Smear – Similar to blooming, smear is the result of one or more oversaturated pixels transferring some of their charge to adjacent pixels. In this case, however, the transfer occurs as the charges are being progressively moved down and out of the light-sensitive area, resulting in a vertical streak on the image. This is most commonly seen in CCD imagers. CMOS imagers use a different method for shifting the pixel charges out of the unshielded part of the imager and typically do not experience this issue. Thus, marketers often refer to CMOS technology as “smearless.”

SMT – Surface Mount Technology. Allows components to be mounted on a circuit board without the need for through-holes, saving assembly time since the process can be automated. JAI uses SMT for all products.

S/N ratio or SNR – See Signal-to-noise ratio

Solution – A combination of hardware and software that work in unison to solve a challenging customer problem.

Sub-sampling – Refers to a mode where only every other pixel is read out; accordingly, the readout rate is doubled. The FOV is unchanged versus full scan mode, but the resolution is halved. Sub-sampling is essentially the same as Decimation, but in JAI cameras Decimation refers only to area scan cameras while Sub-sampling refers to line scan cameras.

Surveillance – See General Imaging

SVGA standard – One of several “standard” sensor resolutions defined and marketed by Sony. SVGA corresponds to a resolution of 776 x 582 pixels, or roughly 0.4 megapixels.

SXGA standard – One of several “standard” sensor resolutions defined and marketed by Sony. SXGA corresponds to a resolution of 1392 x 1040 pixels, or roughly 1.4 megapixels.

T

Temperature Warning Mode – A mode available in some JAI cameras that automatically controls the camera to prevent safety hazards or camera malfunctions due to heat generated by the camera. This mode automatically lowers the frame rate to reduce the heat generation when the camera's internal temperature rises significantly.

TIFF – Tagged Image File Format. This is a “raw” image format. Unlike JPEG, there is no loss of image information; however, in its basic form there is also no compression. Therefore, TIFF images have much larger file sizes than JPEG images.

Trilinear or Tri-linear – A line scan camera that uses three separate sensor lines, each with a unique color filter (Red, Green, Blue) to produce color line scan images. Can use three independent line scan sensors arranged side-by-side, though today it is more common to use a single sensor with multiple lines in order to minimize the physical spacing of the lines. Since they are side-by-side, the optical planes from the target to the three color lines are slightly different. This can cause encoding challenges and parallax issues.

U

Ultraviolet Light – The band containing the shortest wavelengths in the light spectrum. Ultraviolet ranges all the way from 10 nm (just above x-rays) to 400 nm, the bottom of the visible light range. Most ultraviolet imaging is performed at 300-400 nm or in the so-called “solar blind region” of 230-290 nm. However, the increasing miniaturization of semiconductor features has increased the number of applications for ultraviolet imaging at wavelengths below 200 nm, the so-called Deep UV (DUV) region. The short wavelengths of UV light allow for visualization of very small surface features, making it useful for inspecting microscopic details such as the surface of semiconductor chips.

USB, USB3 Vision – Universal Serial Bus. Used for connecting peripherals to PCs via serial communication. Originally, USB was widely used for connecting simple cameras (web cams) to PCs. With the introduction of "SuperSpeed" USB (USB 3.0) and USB3 Vision (a machine vision standard introduced in January 2013), USB has garnered a meaningful chunk of the machine vision camera market. Its simple plug-and-play computer connections have made it a popular choice for microscopy, as well as various types of embedded systems. The USB3 Vision standard is managed and licensed by the Association for Advancing Automation (A3).

UXGA – One of several “standard” sensor resolutions defined and marketed by Sony. UXGA corresponds to a resolution of 1624 x 1236 pixels, or roughly 2 megapixels.

V

VGA – One of several “standard” sensor resolutions defined and marketed by Sony. VGA corresponds to a resolution of 640 x 480 pixels, or roughly 0.3 megapixels.

Video Process Bypass Mode – A mode available in many JAI cameras that allows many processing functions, especially FPGA-related ones, to be bypassed. With this function enabled, the host computer receives less-processed (closer to native) images and can then execute image processing on the host, using proprietary or off-the-shelf third-party applications. Some JAI area scan cameras require this mode to be used when using 12-bit pixel formats.

Vision Technology – Imaging products such as cameras, lighting, lenses, frame grabbers, cabling and software that are developed specifically for imaging applications.

W

White Balance – The process of making sure that pixels with different color filters respond to the light source being used in the correct color proportions. Color cameras typically use Bayer filter arrays that place a mosaic of red, green, and blue filters over the imager’s pixels. However, because different light sources contain different mixes of these colors, the camera might perceive colors incorrectly. White balancing involves pointing the camera at a smooth white card or surface illuminated at a level below the saturation point, and then adding gain to pixels until all pixels have the same value as the color channel with the highest value (typically green). This calibration ensures that colors will now be rendered properly. White balancing can be done manually, in a one-push automatic fashion, or on a continuous automatic basis to account for changes in light sources.
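
A minimal sketch of the one-push procedure described above, assuming the frame views an evenly lit white card (illustrative, not JAI's implementation):

```python
import numpy as np

def one_push_white_balance(rgb):
    """Gain each channel so its mean matches the green channel's mean
    (green is typically the highest-responding channel)."""
    rgb_f = rgb.astype(np.float32)
    means = rgb_f.reshape(-1, 3).mean(axis=0)      # per-channel means (R, G, B)
    gains = means[1] / means                       # green gain stays 1.0
    return np.clip(rgb_f * gains, 0, 255).astype(np.uint8)

frame = np.random.randint(100, 200, (480, 640, 3), dtype=np.uint8)
balanced = one_push_white_balance(frame)
```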

Windowing – A type of ROI (Region of Interest). In this mode, only the effective pixels in the center portion (half the full width) are read out and, accordingly, the readout rate is doubled. The FOV becomes half as wide compared to full pixel readout. At JAI, this term is only used on selected line scan cameras and is essentially the same as ROI Centered with a width equal to half the total sensor width.

X

XGA – One of several “standard” sensor resolutions defined and marketed by Sony. XGA corresponds to a resolution of 1024 x 768 pixels, or roughly 0.8 megapixels.

Xpress (ImageCompressionMode) – A JAI lossless compression function that algorithmically reduces the size of image data while enabling it to be fully reconstructed later on the host PC. Compression is performed in the camera’s FPGA using the principle of image redundancy.

Xscale (ImageScalingMode) – A JAI function that digitally reduces the sensor's pixel resolution by algorithmically combining pixel values into new virtual pixels in accordance with user-defined vertical and horizontal scaling ratios. This function can be used for both monochrome and color models and allows finer adjustment of resolution than binning (the scaling ratios can be specified in floating point values).


Y

YUV (also Y'UV) – Refers to a family of color spaces, all of which encode brightness information separately from color information. Like RGB, YUV uses three values to represent any color. The Y' component, also called luma, represents the brightness value of the color. The U and V components, also called chroma, provide the color value. The human eye is less sensitive to changes in hue than changes in brightness. By separating brightness from hue, the YUV format can be encoded in ways that sample the chroma information at a lower resolution than brightness, thereby reducing the data size of an image (compared to RGB) while perceptually seeming to be of the same image quality.
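
For reference, the classic BT.601 conversion from RGB to Y'UV, showing luma computed separately from the two chroma differences:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 conversion: luma (Y') plus two chroma difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return y, u, v

# Pure red carries most of its information in V; U goes negative.
print(rgb_to_yuv(255.0, 0.0, 0.0))
```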

Z

