Noise Removal and Image Sharpening

This example shows how to implement the front-end module of an image processing design. The front-end module removes noise and sharpens the image to provide a better starting point for subsequent processing.

An out-of-focus object results in a blurred image. Dead or stuck pixels on the camera or video sensor, as well as thermal noise from hardware components, contribute to the noise in the image. In this example, the front-end module is implemented using two pixel-stream filter blocks from Vision HDL Toolbox™: the Median Filter block removes the noise, and the Image Filter block sharpens the image. The example compares the pixel-stream results with those generated by the full-frame blocks from Computer Vision Toolbox™.

This example model provides a hardware-compatible algorithm. You can implement this algorithm on a board using a Xilinx™ Zynq™ reference design. See Image Sharpening with Zynq-Based Hardware.

Structure of the Example

Computer Vision Toolbox blocks operate on an entire frame at a time. Vision HDL Toolbox blocks operate on a stream of pixel data, one pixel at a time. The conversion blocks in Vision HDL Toolbox, Frame To Pixels and Pixels To Frame, enable you to simulate streaming-pixel designs alongside full-frame designs.
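
Each pixel in the stream travels with a control structure that marks line and frame boundaries. The sketch below lists the fields of that structure (the pixelcontrol bus in Vision HDL Toolbox); the values shown are illustrative.

% Control structure that accompanies each pixel in the stream
% (field names from Vision HDL Toolbox; values here are illustrative).
ctrl = struct( ...
    'hStart', true,  ... % first valid pixel in a line
    'hEnd',   false, ... % last valid pixel in a line
    'vStart', true,  ... % first valid pixel in a frame
    'vEnd',   false, ... % last valid pixel in a frame
    'valid',  true);     % this pixel carries active image data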

The NoiseRemovalAndImageSharpeningHDL.slx model is shown below.

The following diagram shows the structure of the Full-Frame Behavioral Model subsystem, which consists of the frame-based Median Filter and 2-D FIR Filter blocks. As mentioned earlier, the Median Filter removes the noise, and the 2-D FIR Filter is configured to sharpen the image.

The Pixel-Stream HDL Model subsystem contains the streaming implementation of the median filter and 2-D FIR filter, as shown in the diagram below. You can generate HDL code from the Pixel-Stream HDL Model subsystem.

The Verification subsystem compares the results from full-frame processing with those from pixel-stream processing.

One frame of the blurred and noisy source video, its denoised version after median filtering, and the sharpened output after 2-D FIR filtering are shown from left to right in the figure below.

Image Source

The following figure shows the Image Source subsystem.

The Image Source block imports a grayscale image, then uses a MATLAB Function block named Blur and Add Noise to blur the image and inject salt-and-pepper noise. The imfilter function blurs the image with a 3-by-3 averaging kernel. The salt-and-pepper noise is injected by calling imnoise(I,'salt & pepper',D), where D is the noise density, defined as the ratio of the combined number of salt and pepper pixels to the total number of pixels in the image. The density is specified by the Noise Density Constant block and must be between 0 and 1. The Image Source subsystem outputs a 2-D matrix containing the full image.
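
A minimal MATLAB sketch of the blur-and-noise step is shown below; the function name is illustrative, and noiseDensity corresponds to the value set by the Noise Density Constant block.

function noisyImg = blurAndAddNoise(img, noiseDensity)
% Blur the grayscale image with a 3-by-3 averaging kernel, then inject
% salt-and-pepper noise at the requested density (between 0 and 1).
blurred  = imfilter(img, fspecial('average', [3 3]));
noisyImg = imnoise(blurred, 'salt & pepper', noiseDensity);
end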

Frame To Pixels: Generating a Pixel Stream

The Frame To Pixels block converts a full image frame to a pixel stream. The Number of components field is set to 1 for grayscale image input, and the Video format field is 240p to match that of the video source. The sample time of the Video Source is determined by the product of Total pixels per line and Total video lines in the Frame To Pixels block. For more information, see the Frame To Pixels block reference page.
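
The equivalent conversion is also available in MATLAB through the visionhdl.FrameToPixels System object and its frame2pixels function. A minimal sketch is shown below; frameIn stands in for the 240-by-320 grayscale frame produced by the Image Source subsystem.

% Convert one grayscale 240p frame into a pixel stream plus control signals.
frm2pix = visionhdl.FrameToPixels( ...
    'NumComponents', 1, ...
    'VideoFormat',   '240p');
[pixStream, ctrlStream] = frame2pixels(frm2pix, frameIn);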

Pixel-Stream HDL Model

The Median Filter block removes the salt-and-pepper noise. To learn more, refer to the Median Filter block reference page.
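
In MATLAB, the same operation is available through the visionhdl.MedianFilter System object, which processes one pixel per call. A minimal sketch, looping over the stream produced by frame2pixels above, is shown below; the 3-by-3 neighborhood is an illustrative setting, not necessarily the one used in the model.

% Remove salt-and-pepper noise from the pixel stream, one pixel per call.
medFilt = visionhdl.MedianFilter('NeighborhoodSize', [3 3]);  % illustrative setting
numPix = numel(pixStream);
pixDenoised  = zeros(numPix, 1, 'like', pixStream);
ctrlDenoised = ctrlStream;  % preallocate control structures
for p = 1:numPix
    [pixDenoised(p), ctrlDenoised(p)] = medFilt(pixStream(p), ctrlStream(p));
end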

Depending on the filter coefficients, the Image Filter block can blur, sharpen, or detect the edges of the recovered image after median filtering. In this example, the Image Filter block is configured to sharpen the image. To learn more, refer to the Image Filter block reference page.
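
The corresponding MATLAB System object is visionhdl.ImageFilter. The sketch below configures it with a common 3-by-3 sharpening kernel; the actual coefficients used in the model may differ.

% Sharpen the denoised pixel stream with a 2-D FIR filter.
sharpKernel = [0 -1 0; -1 5 -1; 0 -1 0];  % illustrative sharpening coefficients
imgFilt = visionhdl.ImageFilter('Coefficients', sharpKernel);
pixSharp  = pixDenoised;
ctrlSharp = ctrlDenoised;
for p = 1:numel(pixDenoised)
    [pixSharp(p), ctrlSharp(p)] = imgFilt(pixDenoised(p), ctrlDenoised(p));
end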

Pixels To Frame: Converting Pixel Stream Back to Full Frame

The Pixels To Frame block converts the pixel stream back to a full frame by using the synchronization signals. The Number of components and Video format fields of the Pixels To Frame block are set to 1 and 240p, respectively, to match the format of the video source.
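
The same conversion is available in MATLAB through the visionhdl.PixelsToFrame System object and its pixels2frame function, as sketched below using the processed stream from the previous step.

% Reassemble the processed pixel stream into a full 240p grayscale frame.
pix2frm = visionhdl.PixelsToFrame( ...
    'NumComponents', 1, ...
    'VideoFormat',   '240p');
[frameOut, frameValid] = pixels2frame(pix2frm, pixSharp, ctrlSharp);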

Verifying the Pixel-Stream Processing Design

The Verification subsystem, as shown below, verifies the results from the pixel-stream HDL model against the full-frame behavioral model.

The peak signal-to-noise ratio (PSNR) is calculated between the reference image and the stream-processed image. Ideally, the ratio is Inf, indicating that the output image from the Full-Frame Behavioral Model matches the one generated by the Pixel-Stream HDL Model.
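
The same check can be scripted with the psnr function from the Image Processing Toolbox, as sketched below; frameRef stands in for the output of the Full-Frame Behavioral Model, and frameOut for the reassembled pixel-stream result.

% Compare the pixel-stream result against the full-frame reference.
% A PSNR of Inf means the two images are identical.
psnrValue = psnr(frameOut, frameRef);
fprintf('PSNR between reference and stream-processed image: %.2f dB\n', psnrValue);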

Generate HDL Code and Verify Its Behavior

To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license.

To generate the HDL code, use the following command:

makehdl('NoiseRemovalAndImageSharpeningHDL/Pixel-Stream HDL Model');

To generate a test bench, use the following command:

makehdltb('NoiseRemovalAndImageSharpeningHDL/Pixel-Stream HDL Model');