Quiz on ALL 1 to 10


Image Processing Knowledge Quiz

Test your understanding of image processing with this comprehensive quiz! Dive into various topics such as image segmentation, enhancement, and analysis.

  • 30 engaging multiple-choice questions
  • Learn key concepts in digital imaging
  • Perfect for students and enthusiasts alike
199 Questions · 50 Minutes · Created by AnalyzingPixel92
What is image segmentation?
Assigning labels to objects based on their descriptors.
Extracting the particular features which allow us to differentiate between objects.
Subdividing an image into constituent parts, or isolating certain aspects of an image.
Rejecting objects which are irrelevant to the current task or process.
What is image processing?
Changing the nature of an image to improve its pictorial information for human interpretation.
Making an image look more suitable for autonomous machine perception.
Both A and B
None of the above.
What are the main topics of the course?
Introduction to image processing.
Types of images.
Point Processing.
All of the above
What is object analysis?
Assigning labels to objects based on their descriptors.
Extracting the particular features which allow us to differentiate between objects.
Subdividing an image into constituent parts, or isolating certain aspects of an image.
Rejecting objects which are irrelevant to the current task or process.
What is the difference between image restoration and image enhancement?
Restoration deals with improving the appearance of an image, while enhancement is based on human subjective preferences regarding what constitutes a “good” enhancement result.
Restoration techniques tend to be based on mathematical or probabilistic models of image degradation, while enhancement is based on making an image look better.
Both A and B
None of the above.
What is a digital image?
The image seen by human eyes, or the images from analog cameras that use film.
An image that consists of small units called pixels which can be manipulated by computer programs
Both A and B.
None of the above.
What is recognition and interpretation in image processing?
Subdividing an image into constituent parts, or isolating certain aspects of an image.
Extracting the particular features which allow us to differentiate between objects.
Assigning labels to objects based on their descriptors, and assigning meanings to those labels.
Rejecting objects which are irrelevant to the current task or process.
What is a knowledge base in an image processing system?
A database containing knowledge about a problem domain in an image processing system.
Extracting the particular features which allow us to differentiate between objects.
Subdividing an image into constituent parts, or isolating certain aspects of an image.
Assigning labels to objects based on their descriptors.
What is the difference between analog image and digital image?
Analog images are the images seen by human eyes, while digital images are the images from analog cameras that use film.
Analog images can be manipulated by computer programs, while digital images cannot.
Digital images consist of small units called pixels, while analog images do not
Both A and B.
What is the purpose of image restoration?
To improve the resolution of an image.
To remove noise and other distortions from an image.
To change the color balance of an image.
To enhance the contrast of an image.
What is the purpose of image segmentation?
To enhance the edges of an image to make it appear sharper.
To remove noise from an image.
To obtain the edges of an image for the measurement of objects in an image.
To divide an image into multiple segments or regions
What is the purpose of image enhancement?
To remove noise and other distortions from an image.
To improve the resolution of an image.
To change the color balance of an image.
To make an image more visually appealing or informative.
What are some examples of applications of image processing?
Image retrieval, license plates, face detection, smile detection.
Forensics, biometrics, fingerprint scanners on many new laptops and other devices, face recognition systems.
Medical imaging, 3D imaging, MRI, CT, image-guided surgery.
All of the above
What is the target object in an image processing system?
The database containing knowledge about a problem domain.
The image to be segmented.
The object to be identified from the system.
The features that differentiate between objects.
What is the representation and description stage of image processing?
Subdividing an image into constituent parts, or isolating certain aspects of an image.
Extracting the particular features which allow us to differentiate between objects.
Rejecting objects which are irrelevant to the current task or process.
Assigning labels to objects based on their descriptors.
What is the difference between image restoration and image acquisition?
Image acquisition involves producing a digital image from a scene using devices such as digital cameras or scanners, while image restoration involves reversing the damage done to an image by a known cause
Image restoration involves processing an image so that the result is more suitable for a particular application, while image acquisition deals with improving the appearance of an image.
Both A and B.
None of the above.
What are the two main purposes of image processing?
To create a digital image and to manipulate it.
To improve an image's pictorial information for human interpretation and to render it more suitable for autonomous machine perception
To remove noise from an image and to highlight its edges.
To enhance an image's contrast and to brighten it.
What are some examples of image enhancement?
Removing blur caused by linear motion and optical distortions.
Removing periodic interference and noise.
Highlighting edges and improving image contrast.
All of the above
What is the Fourier Transform?
A technique used for image segmentation.
A method for removing noise from an image.
A mathematical technique used to decompose an image into its frequency components
A method for removing blur from an image.
What is the difference between point processing and neighborhood processing?
Point processing involves manipulating individual pixels in an image, while neighborhood processing involves manipulating groups of pixels.
Neighborhood processing involves enhancing an image's edges, while point processing involves removing motion blur from an image.
Both A and B
None of the above.
What is a color image?
A single function pasted together.
Two functions pasted together.
Three functions pasted together.
Four functions pasted together.
How is resolution defined?
Number of functions in an image.
Number of pixels in an image.
Number of bytes in an image.
Number of colors in an image.
What are the four basic types of images?
1. Green images. 2. Black and white images. 3. Red images. 4. Blue images.
1. Binary images. 2. Grayscale images. 3. Color images. 4. Indexed images.
1. Bright images. 2. Dark images. 3. Pastel images. 4. Bold images.
1. Pink images. 2. Yellow images. 3. Purple images. 4. Orange images.
What happens when an image's resolution decreases?
Fine details become clearer.
All edges are more blocky.
The image is more recognizable.
The image is barely recognizable.
What is spatial resolution?
Measure of the smallest observable detail in an image.
Measure of the number of pixels in an image.
Measure of the number of colors in an image.
Measure of the number of bytes in an image.
What is a binary image?
An image that has a particular color; that color being described by the amount of red, green and blue in it.
An image that only has a small subset of the more than sixteen million possible colors.
An image where each pixel can be represented by exactly one bit (1/8 byte).
An image where the pixels contain index numbers that point to the RGB value in the color map.
What is a grayscale image?
An image that only has a small subset of the more than sixteen million possible colors.
An image where each pixel has a particular color; that color being described by the amount of red, green and blue in it.
An image where the pixels contain index numbers that point to the RGB value in the color map.
An image where each pixel is a shade of grey, normally from 0 (black) to 255 (white).
How is spatial resolution measured for monitors?
Pixels per inch (PPI).
Dots per inch (DPI).
Number of pixels in an image.
Number of dots in an image.
What is a color image?
An image that has a particular color; that color being described by the amount of red, green and blue in it.
An image where each pixel can be represented by exactly 3 bytes (24 bits).
Each of these components has a range from 0 to 255, giving a total of 256³ = 16,777,216 different possible colors in the image.
All of the above.
How many possible colors are there in a 24-bit color image?
256 colors.
16,777,216 colors.
8 colors.
64 colors.
What is the formula to calculate number of pixels in an image?
Size in inches × PPI
Size in pixels × PPI
Number of pixels = Size in inches × PPI
PPI = Size in pixels × Number of pixels
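The pixel-count relation above can be sketched in a few lines of Python (the function name and example sizes are illustrative, not from the quiz):

```python
def pixels(size_in_inches: float, ppi: int) -> int:
    """Number of pixels along one dimension = size in inches * PPI."""
    return int(size_in_inches * ppi)

# A 4 x 6 inch photo scanned at 300 PPI:
width_px = pixels(4, 300)
height_px = pixels(6, 300)
total = width_px * height_px
```
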
What is the measure of spatial resolution for printers?
Pixels per inch (PPI).
Dots per inch (DPI).
Number of pixels in an image.
Number of dots in an image.
How is image file size calculated?
Number of pixels in an image.
Number of bytes in an image.
Number of functions in an image.
Number of colors in an image.
What is an indexed image?
Each of these components has a range from 0 to 255, giving a total of 256³ = 16,777,216 different possible colors in the image.
An image where each pixel can be represented by exactly one byte (8 bits).
An image where the pixels contain index numbers that point to the RGB value in the color map.
An image where each pixel is a shade of grey, normally from 0 (black) to 255 (white).
What is the most common type of image that uses indexed colors?
Grayscale images.
Binary images.
Color images.
GIF images.
How can an image be considered as a two-dimensional function?
The function values give the brightness (intensity values) of the image at any given point.
A digital image can be considered as a large array of sampled points, each of which has a particular quantized brightness.
An image may be continuous with respect to the x- and y-coordinates, and also in amplitude.
All of the above.
What is the measure of spatial resolution for displaying or printing an image?
Pixels per inch (PPI).
Dots per inch (DPI).
Bits per inch (BPI).
All of the above.
What is the file size of a 512 X 512 binary image?
32768 bytes
256 KB
768 KB
0.031 MB
What is the file size of a color image of size 512 X 512?
32768 bytes
256 KB
768 KB
0.75 MB
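The file-size questions above follow directly from bits per pixel; a quick check (plain arithmetic, no library assumptions):

```python
rows, cols = 512, 512
binary_bytes = rows * cols // 8   # binary image: 1 bit per pixel
gray_bytes = rows * cols          # grayscale: 1 byte per pixel
color_bytes = rows * cols * 3     # color: 3 bytes (24 bits) per pixel
color_kb = color_bytes / 1024
color_mb = color_kb / 1024
```
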
What are the two important terms in digital images?
Sampling and quantization.
Brightness and intensity.
Pixel and amplitude.
Color and image.
What happens to the quality of an image as its spatial resolution increases?
The quality decreases.
The quality remains the same.
The quality increases.
None of the above.
How can a grayscale image be considered?
An image where each pixel has a particular color; that color being described by the amount of red, green and blue in it.
An image where the pixels contain index numbers that point to the RGB value in the color map.
A function where 𝑓(𝑥, 𝑦) gives the intensity at position (𝑥, 𝑦).
An image where each pixel can be represented by exactly one byte (8 bits).
What is the pixel value (or intensity) of a grayscale image?
[0, 255].
[0, 1].
[0, 100].
[0, 10].
What is the number of bits required for each pixel in a color image?
1 bit.
8 bits.
16 bits.
24 bits.
How is resolution related to the size of an image?
Resolution is the same as the size of an image.
Resolution is the number of pixels in an image.
Resolution can be defined as M X N where M is the number of rows and N is the number of columns in an image.
None of the above.
What is the effect of decreasing pixelization on an image?
The image becomes more recognizable.
The image becomes less clear.
The image becomes blocky.
None of the above.
The higher the spatial resolution, the ........ pixels are used to display the image
No effect
More
Less
As the Pixel density increases, the quality of the image ...........
Increases
Decreases
Not affected
In an indexed image, the value of each pixel represents its .........
Index of color in color map
Color
Intensity level
Grayscale level
In a colored image, each pixel has 3 values indicating the level of which colors?
Red, grey, and black
Red, green, and black
Red, green, and blue
Red, grey, and blue
In a grayscale image, value 255 of a pixel stands for which color?
Green
Red
White
Black
What are the three classes of image processing operations?
Point operations, neighborhood processing, and transforms
Arithmetic operations, grayscale images, and solarization
Brightness, contrast, and intensity
Dynamic range, photographic negative, and solarization
What is a histogram?
A graph indicating the number of times each gray level occurs in the image.
A table of the numbers 𝑛𝑖 of gray values.
A poorly contrasted image.
A uniformly bright image.
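A histogram as defined above (counts of each gray level) can be computed with a one-liner, assuming NumPy is available:

```python
import numpy as np

# Tiny 8-level "image"; the histogram counts occurrences of each gray level.
img = np.array([[0, 1, 1],
                [2, 1, 7],
                [2, 0, 1]], dtype=np.uint8)
hist = np.bincount(img.ravel(), minlength=8)  # hist[g] = count of level g
```
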
What are point operations?
Image processing operations that require knowledge of the value of the grey levels in a small neighborhood of pixels around the given pixel.
Image processing operations that process the entire image as a single large block.
Image processing operations where a pixel's grey value is changed without any knowledge of its surrounds.
None of the above.
What can we infer about an image from its histogram?
The number of pixels in the image.
The time the image was taken.
The appearance of the image, such as the level of contrast.
The location where the image was taken.
What happens to gray levels outside the range that is stretched during histogram stretching?
They are transformed according to the linear functions at the ends of the graph.
They are transformed according to the same linear function.
They are left alone.
They are deleted from the image.
What does the new histogram after stretching indicate about the image?
The image has fewer pixels than before stretching.
The image has more pixels than before stretching.
The image has greater contrast than the original image.
The image has less contrast than the original image.
What is the effect of adding a constant to an image?
It darkens the image.
It lightens the image.
It complements the image.
It does not affect the image.
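The point operation in the question above (adding a constant to lighten an image) can be sketched as follows, assuming NumPy; the clipping keeps results in the valid [0, 255] range:

```python
import numpy as np

img = np.array([[10, 200],
                [250, 0]], dtype=np.uint8)

# Widen to a signed type before adding so values above 255 do not wrap,
# then clip back to the displayable range.
lighter = np.clip(img.astype(np.int16) + 60, 0, 255).astype(np.uint8)
```
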
What is dynamic range?
The difference between the maximum and minimum intensity levels in an image.
The gray level of the image at any pair of coordinates (x, y).
The range of values spanned by the gray scale, i.e., the lowest and highest intensity levels that an image can have.
Intensity as perceived by the human visual system.
What is the difference between neighborhood processing and point operations?
Neighborhood processing requires knowledge of the value of the grey levels in a small neighborhood of pixels around the given pixel, while point operations do not.
Point operations require knowledge of the value of the grey levels in a small neighborhood of pixels around the given pixel, while neighborhood processing does not.
Neighborhood processing processes the entire image as a single large block, while point operations do not.
None of the above.
What is contrast?
The difference between the maximum and minimum intensity levels in an image.
The gray level of the image at any pair of coordinates (x, y).
The range of values spanned by the gray scale, i.e., the lowest and highest intensity levels that an image can have.
Intensity as perceived by the human visual system.
What type of image would have gray levels clustered at the upper (right) end of the histogram?
A well contrasted image.
A uniformly bright image.
A dark image.
A poorly contrasted image.
What is the complement of a grayscale image?
Its dynamic range
Its photographic negative
Its brightness
Its contrast
In a dark image, where would the gray levels be clustered in the histogram?
At the upper (right) end.
At the lower (left) end.
In the center of the histogram.
Spread out over much of the range.
What are the two ways of enhancing the contrast of a poorly contrasted image?
Histogram stretching and histogram equalization.
Histogram and grayscale.
Contrast stretching and grayscale.
Histogram and equalization.
What is the purpose of histogram stretching?
To cluster the gray levels together in the center of the histogram.
To enhance the contrast of a poorly contrasted image.
To make the image uniformly bright.
To equalize the distribution of gray levels in the image.
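Histogram stretching, as asked about above, maps a narrow gray-level range linearly onto the full [0, 255] range. A minimal sketch (assuming NumPy; here levels outside the stretched range are simply clipped):

```python
import numpy as np

def stretch(img: np.ndarray, lo: int, hi: int) -> np.ndarray:
    """Map gray levels in [lo, hi] linearly onto the full range [0, 255]."""
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

poor = np.array([[100, 120],
                 [140, 160]], dtype=np.uint8)  # poorly contrasted: narrow range
good = stretch(poor, 100, 160)                 # now spans the full gray scale
```
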
Which arithmetic operation can be used to lighten an image?
Adding a constant, or multiplying by a constant
Subtracting a constant
Dividing by a constant
What is the range of values for the gray scale in an image?
0 to 255
0 to 100
-255 to 255
-100 to 100
What is neighborhood relation?
A relation between pixels
A relation between two images
A relation between image and sound
A relation between image and video
What are the steps of spatial convolution?
Flipping columns of kernel, multiplying each pixel in range of kernel by the corresponding element of flipped kernel, summing all these products and writing to center pixel.
Flipping rows and columns of kernel, multiplying each pixel in range of kernel by the corresponding element of flipped kernel, summing all these products and writing to center pixel.
Flipping rows of kernel, multiplying each pixel in range of kernel by the corresponding element of flipped kernel, summing all these products and writing to center pixel.
Multiplying each pixel in range of kernel by the corresponding element of flipped kernel, summing all these products and writing to center pixel.
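The convolution steps described above (flip the kernel's rows and columns, multiply, sum, write to the center pixel) can be sketched directly, assuming NumPy; zero padding is used at the borders:

```python
import numpy as np

def convolve2d(img, kernel):
    """Spatial convolution: rotate kernel 180 degrees, slide, multiply, sum."""
    k = np.rot90(kernel, 2)                  # flip rows and columns
    kr, kc = k.shape
    pr, pc = kr // 2, kc // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)))  # pad borders with zeros
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kr, j:j + kc] * k)
    return out

img = np.array([[0, 0, 0],
                [0, 9, 0],
                [0, 0, 0]], dtype=float)
avg = np.ones((3, 3)) / 9.0                  # averaging (low-pass) kernel
blurred = convolve2d(img, avg)               # spike spread over the 3x3 area
```
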
What are the 4-neighborhood relations?
Diagonal neighbors
All neighbors
Vertical and horizontal neighbors
No neighbors
What is the problem with applying a filter at the border of an image?
There will be a lack of grey values to use in the filter function.
The filter will only be applied to those pixels in the image so that the mask will lie fully within the image.
The output image will be smaller than the original.
All necessary values outside the image are zero.
What is neighborhood processing?
Applying a function to each pixel value
Applying a function to a diagonal neighbor of each pixel
Applying a function to a vertical neighbor of each pixel
Applying a function to a neighborhood of each pixel
What is a filter?
A filter that removes noise
A filter that sharpens image
A mask with its function
A filter that extracts edges
What are the approaches to deal with the problem of borders in convolution?
Ignore the borders or padding with zeros.
Flipping the image or resizing the image.
Applying a different filter or rotating the kernel.
Changing the contrast or brightness of the image.
What is a separable convolution?
A convolution where the kernel is always a rectangular matrix.
A convolution where the kernel can be written as a convolution of 2 vectors.
A convolution where the kernel can be written as a sum of 2 vectors.
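As an illustration of separability (a sketch assuming NumPy): a kernel is separable exactly when it can be written as the outer product of a column vector and a row vector, which is the same as convolving the two vectors:

```python
import numpy as np

# The 3x3 averaging kernel separates into two 1-D averaging vectors.
v = np.ones((3, 1)) / 3.0       # column vector
h = np.ones((1, 3)) / 3.0       # row vector
kernel = v @ h                  # outer product: every entry is 1/9
```

A separable kernel always has rank 1, which is a quick test for separability.
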
What is a linear spatial filter?
A filter that performs a sum-of-addition operation between an image and a filter kernel
A filter that performs a sum-of-products operation between an image and a filter kernel
A filter that performs a product-of-products operation between an image and a filter kernel
A filter that performs a product-of-addition operation between an image and a filter kernel
What is the effect of flipping rows and columns of a kernel in spatial convolution?
It has the same effect as rotating the kernel by 180°.
It has no effect on the kernel.
It rotates the kernel by 90°.
It flips the image instead of the kernel.
What is a mask?
A rectangle with sides of odd length
A circle with radius of odd length
A rectangle with sides of even length
A circle with radius of even length
What is the disadvantage of ignoring the borders in convolution?
The output image will be smaller than the original.
The resulting image will have unwanted artifacts.
A significant amount of information is lost.
It will take longer to compute the convolution.
What is spatial correlation?
Moving the center of a kernel over an image, and computing the sum of products at each location
What is the advantage of using separable convolution?
It is much more efficient to compute.
It produces more accurate results.
It allows the use of larger filter kernels.
It works better with color images.
What is the computational cost of convolving an image with an n × m kernel?
n × m products per pixel.
n + m products per pixel.
n × log(m) products per pixel.
m + n log(n) products per pixel.
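The cost difference between a full 2-D kernel and two 1-D passes can be checked by simple counting (the kernel size here is illustrative only):

```python
# Per-pixel multiply counts for an 11 x 11 kernel.
n, m = 11, 11
full_cost = n * m   # non-separable: one product per kernel element
sep_cost = n + m    # separable: a 1-D row pass plus a 1-D column pass
```
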
What is spatial convolution?
The mechanics of spatial convolution are the same as spatial correlation, except that the kernel is not rotated by 180° before multiplying and adding
The mechanics of spatial convolution are the same as spatial correlation, except that the kernel is rotated by 90° before multiplying and adding
The mechanics of spatial convolution are the same as spatial correlation, except that the kernel is rotated by 180° before multiplying and adding
The mechanics of spatial convolution are the same as spatial correlation, except that the kernel is not rotated by 90° before multiplying and adding
What is the main purpose of spatial convolution?
To resize an image.
To convert an image to grayscale.
To rotate an image.
To apply a filter to an image.
What is the disadvantage of padding an image with zeros?
It makes the output image smaller than the original.
It takes longer to compute the convolution.
It requires more memory than ignoring the borders.
It may introduce unwanted artifacts around the image.
What is the advantage of flipping the kernel in spatial convolution?
It makes the convolution more efficient.
It allows the use of larger filter kernels.
It ensures that the filter is applied symmetrically.
It improves the accuracy of the convolution.
What is the difference between separable and non-separable convolution?
Separable convolution is always faster than non-separable convolution.
Separable convolution can be written as two convolutions, while non-separable convolution cannot.
Separable convolution can only be used with square kernels, while non-separable convolution can be used with rectangular kernels.
Separable convolution always produces better results than non-separable convolution.
What is the primary application of nonlinear filters in image processing?
To adjust the contrast of images.
To remove noise from images.
To blur images.
To sharpen images.
What are high frequency components in an image?
Parts of the image characterized by little change in the grey values.
Parts of the image characterized by large changes in the grey values over small distances.
Parts of the image that are blurred.
Parts of the image that are noisy.
Which of the following filters is a low pass filter?
Laplacian filter.
Gaussian filter.
Averaging filter.
High pass filter.
What is the difference between linear and nonlinear filters?
Linear filters generate output that is a linear combination of their input, while nonlinear filters do not.
Linear filters are faster than nonlinear filters.
Nonlinear filters are used more frequently than linear filters.
Linear filters are used to blur images, while nonlinear filters are used to sharpen images.
What is the effect of zero padding at the borders of an image when applying a blurring filter?
It makes the image appear brighter.
It makes the image appear smoother.
It makes a dark border appear around the image.
It has no effect on the image.
What must the first derivative of a digital function be in areas of constant intensity?
Nonzero.
Zero.
Positive.
Negative.
What is the formula for the first-order derivative of a one-dimensional function f(x)?
∂f/∂x = f(x+1) + f(x-1) - 2f(x)
∂f/∂x = f(x+1) + f(x)
∂f/∂x = f(x+1) - f(x-1)
∂f/∂x = f(x+1) - f(x)
What is unsharp masking?
A linear filter used to remove noise from images.
A technique used to increase the contrast of images.
A non-linear filter used to sharpen images.
A technique used to enhance edges and details in images.
What is the purpose of the scaling transformation in image filtering?
To clip values outside the 0-255 range.
To smooth the image and remove noise.
To transform all values in the range g_min – g_max to the range 0 – 255.
To highlight fine details in the image.
How does a median filter work in image processing?
It generates output that is the maximum value under the mask.
It generates output that is the minimum value under the mask.
It generates output that is the median value under the mask.
It generates output that is a linear combination of the input.
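The median filter described above can be sketched as follows, assuming NumPy; edge replication is used at the borders, and the example shows why it is so effective against salt-and-pepper noise:

```python
import numpy as np

def median3x3(img):
    """Nonlinear filter: each output pixel is the median under a 3x3 mask."""
    padded = np.pad(img, 1, mode='edge')    # replicate border pixels
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # single salt-noise spike
                  [10, 10, 10]], dtype=np.uint8)
clean = median3x3(noisy)           # the spike is removed entirely
```

An averaging filter would only spread the spike out; the median discards it because 255 is never the middle value of any 3x3 window here.
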
What is the goal of sharpening?
To smooth the image.
To highlight fine details in an image.
To de-emphasize the regions of slowly varying intensities.
To create a featureless background.
Which filter is of particular value in edge detection and edge enhancement?
Averaging filter.
Gaussian filter.
High pass filter.
Low pass filter.
What are sharpening filters based on?
Low pass filters.
Second-order derivatives.
Averaging filters.
First-order derivatives.
Why are non-linear filters not employed in image processing as frequently as linear filters?
Because they are slower than linear filters.
Because they require more complex algorithms to be implemented.
Because they are more difficult to understand and use.
Because linear filters generate output that is easier to manipulate and work with.
How does unsharp masking work in image processing?
It subtracts a blurred version of the image from the original image to enhance edges and details.
It adds a blurred version of the image to the original image to enhance edges and details.
It multiplies a blurred version of the image by the original image to enhance edges and details.
It divides the original image by a blurred version of the image to enhance edges and details.
How are digital images edges typically modeled?
As having a constant intensity profile.
As having a noisy profile.
As having an intensity ramp profile.
As having a blurred profile.
What is the second derivative of a one-dimensional function?
∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
∂²f/∂x² = f(x+1) − f(x)
∂f/∂x = f(x+1) + f(x)
∂²f/∂x² = f(x+1) + f(x)
What is unsharp masking?
Subtracting an unsharp (blurred or smoothed) version of the image from the original image.
Adding an unsharp (blurred or smoothed) version of the image to the original image.
Multiplying an unsharp (blurred or smoothed) version of the image with the original image.
Dividing an unsharp (blurred or smoothed) version of the image by the original image.
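Unsharp masking, as defined above, subtracts a blurred version from the original and adds the difference back. A minimal sketch assuming NumPy (the box blur and parameter k are illustrative choices):

```python
import numpy as np

def unsharp(img, k=1.0):
    """Sharpened = original + k * (original - blurred)."""
    # Simple 3x3 box blur via edge-padded neighborhood means.
    padded = np.pad(img.astype(float), 1, mode='edge')
    blurred = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = padded[i:i + 3, j:j + 3].mean()
    mask = img - blurred                  # the "unsharp mask" (detail signal)
    return np.clip(img + k * mask, 0, 255)

flat = np.full((4, 4), 100.0)
same = unsharp(flat)                      # no detail, so nothing changes
```

With k = 1 this is plain unsharp masking; k > 1 gives highboost filtering, matching the distinction drawn later in the quiz.
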
What is the Laplacian filter used for?
To smooth the image.
To highlight fine details in an image.
To find areas of rapid change (edges) in images.
To de-emphasize the regions of slowly varying intensities.
What is the general idea behind non-linear image filtering?
To use the spatial mask in a convolution process.
To obtain neighboring pixel values using the mask, and then ordering mechanisms produce the output pixel.
To generate output that is a linear combination of the input.
To rearrange the pixels in the image without using a mask.
What is the goal of blurring filters?
To enhance edges in an image.
To reduce sharp transitions in intensity and remove noise.
To eliminate low frequency components in an image.
To reduce the size of an image.
What is the difference between unsharp masking and highboost filtering?
The value of the parameter 𝑘 used in the process.
The type of filter used in the process.
The order in which the blurred image and the original image are subtracted.
There is no difference, they are the same process.
What is the effect of applying a blurring filter to reduce detail in an image?
It makes the image appear worse than the original.
It is not useful in concentrating on some aspects of the image.
It is useful in concentrating on some aspects of the image, such as numbers of objects and amount of dark and light areas.
It has no effect on the image.
Which filter is one of the most common non-linear filters used in image processing?
Maximum filter.
Median filter.
Linear filter.
Minimum filter.
Why is it common to smooth the image before applying the Laplacian filter?
To reduce the contribution of the unsharp mask.
To find areas of rapid change (edges) more accurately.
To highlight fine details in the image.
Because derivative filters are very sensitive to noise.
What is the difference between low and high frequency components in an image?
Low frequency components are parts of the image characterized by large changes in the grey values over small distances, while high frequency components are characterized by little change in the grey values.
High frequency components are parts of the image characterized by large changes in the grey values over small distances, while low frequency components are characterized by little change in the grey values.
Low frequency components are parts of the image that are blurred, while high frequency components are parts of the image that are noisy.
High frequency components are parts of the image that are blurred, while low frequency components are parts of the image that are noisy.
What is the frequency domain in image processing?
A domain that maps images to complex numbers
A domain that maps images to a frequency distribution
A domain that maps images to binary numbers
A domain that maps images to a spatial distribution
Why is the Fourier Transform of fundamental importance to image processing?
It allows for low-pass and high-pass filtering with great precision
It converts images to binary format
It is an alternative to linear spatial filtering
It allows for the isolation of a single image pixel
What is the similarity between the forward and inverse Fourier transforms?
They are completely different
They are the same
They differ only by a scale factor 1/(MN) in the inverse transform, and a negative sign in the exponent of the forward transform
They are not related
What is the inverse Fourier transform?
A process that converts a matrix into another matrix of the same size
A process that recovers a function completely via an inverse process
A process that isolates and processes particular image frequencies
A process that allows for low-pass and high-pass filtering with great precision
How can we display the Fourier transform of an image?
We can view the DC coefficient directly
We can view the imaginary part of the complex number
We can view the magnitude and phase as two separate figures
We can view the power spectrum as a single figure
What is the separability property of Fourier Transform?
The 2-D DFT cannot be computed
The 2-D DFT can be computed by computing 1-D DFT transforms along the rows (columns) of the image, followed by 1-D transforms along the columns (rows) of the result
The first product depends on 𝑥 and 𝑢, and is independent of 𝑦 and 𝑣
The second product value depends only on 𝑦 and 𝑣, and is independent of 𝑥 and 𝑢
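The separability property above is easy to verify numerically, assuming NumPy: a 2-D DFT equals 1-D DFTs along the rows followed by 1-D DFTs along the columns of the result.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

full_2d = np.fft.fft2(img)                          # direct 2-D DFT
rows_then_cols = np.fft.fft(np.fft.fft(img, axis=1), axis=0)  # two 1-D passes
```
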
What is the matrix F in the Fourier Transform?
The output matrix of the same size as the input matrix
The input matrix to the Fourier Transform
The inverse Fourier transform of the input matrix
A matrix that represents the amplitude and phase of the sinusoid
Why is the output of frequency domain processing not an image?
Because the output is always a complex number
Because the output is a transformation
Because the output is in binary format
Because the output is in spatial domain
What is the linearity property of Fourier Transform?
The DFT of a sum is not equal to the sum of the individual DFT's
The DFT of a sum is equal to the sum of the individual DFT's
The DFT of a sum is equal to the product of the individual DFT's
The DFT of a sum is not related to the individual DFT's
What is the DFT?
The digital Fourier transform
The discrete Fourier transform
The direct Fourier transform
The differential Fourier transform
What is the power spectrum in the Fourier Transform?
The real part of the complex number
The imaginary part of the complex number
The magnitude of the complex number
The magnitude squared of the complex number
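The distinction between spectrum and power spectrum is easy to verify numerically (NumPy assumed available):

```python
import numpy as np

# For a complex DFT value: the spectrum is the magnitude,
# the power spectrum is the magnitude squared.
F = np.array([3 + 4j, 1 - 2j])
spectrum = np.abs(F)
power = np.abs(F) ** 2
```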
What is the approach to convolution using DFT?
Pad 𝑆 with zeroes so that it becomes smaller than 𝑀
Pad 𝑆 with zeroes so that it becomes the same size as 𝑀
Pad 𝑆 with ones so that it becomes the same size as 𝑀
Pad 𝑆 with zeroes so that it becomes larger than 𝑀
What is the formula for the Fourier transform of an M x N matrix?
F(u,v) = Σx=0^M-1 Σy=0^N-1 f(x,y) * e^(-j2π(ux/M + vy/N))
F(u,v) = Σx=0^M-1 Σy=0^N-1 f(x,y) * e^(j2π(ux/M + vy/N))
F(u,v) = Σx=0^M-1 Σy=0^N-1 f(x,y) * e^(j2π(ux/M - vy/N))
F(u,v) = Σx=0^M-1 Σy=0^N-1 f(x,y) * e^(-j2π(ux/M - vy/N))
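The correct formula above can be evaluated literally and compared against a library FFT. A slow but direct NumPy sketch (assumed environment; the quadruple loop is only for illustration):

```python
import numpy as np

def dft2_direct(f):
    """Evaluate F(u,v) = sum_x sum_y f(x,y) * e^(-j2pi(ux/M + vy/N)) literally."""
    M, N = f.shape
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            for x in range(M):
                for y in range(N):
                    F[u, v] += f[x, y] * np.exp(-2j * np.pi * (u * x / M + v * y / N))
    return F

f = np.random.default_rng(1).random((4, 5))
F_direct = dft2_direct(f)
```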
What is the polar representation of the Fourier Transform?
F(u,v) = F(u,v) * e^(jφ(u,v))
F(u,v) = F(u,v) * e^(-jφ(u,v))
F(u,v) = F(u,v) + jφ(u,v)
F(u,v) = |F(u,v)| * e^(jφ(u,v))
What is the formula for the inverse Fourier transform of an M x N matrix?
f(x,y) = (1/MN) Σu=0^M-1 Σv=0^N-1 F(u,v) * e^(j2π(ux/M - vy/N))
f(x,y) = (1/MN) Σu=0^M-1 Σv=0^N-1 F(u,v) * e^(j2π(ux/M + vy/N))
f(x,y) = (1/MN) Σu=0^M-1 Σv=0^N-1 F(u,v) * e^(j2π(ux/M * vy/N))
f(x,y) = (1/MN) Σu=0^M-1 Σv=0^N-1 F(u,v) * e^(-j2π(ux/M + vy/N))
What is the convolution property of Fourier Transform?
To convolve an image 𝑀 with a spatial filter 𝑆, 𝑆 is placed over each pixel of 𝑀 in turn, and the product of all corresponding grey values of 𝑀 and elements of 𝑆 is calculated, and finally the results are multiplied.
To convolve an image 𝑀 with a spatial filter 𝑆, 𝑆 is placed over each pixel of 𝑀 in turn, and the product of all corresponding grey values of 𝑀 and elements of 𝑆 is calculated, and finally the results are added.
To convolve an image 𝑀 with a spatial filter 𝑆, 𝑆 is placed over each pixel of 𝑀 in turn, and the sum of all corresponding grey values of 𝑀 and elements of 𝑆 is calculated, and finally the results are added.
To convolve an image 𝑀 with a spatial filter 𝑆, 𝑆 is placed over each pixel of 𝑀 in turn, and the product of all corresponding grey values of 𝑀 and elements of 𝑆 is calculated, and finally the results are subtracted.
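The zero-padding approach from the earlier question can be sketched in NumPy (assumed available). Multiplying the transforms elementwise performs a circular convolution, which is compared here against a direct shift-and-add implementation:

```python
import numpy as np

# Convolution via the DFT: pad the filter S with zeroes to the image size,
# multiply the two transforms elementwise, then invert.
rng = np.random.default_rng(2)
img = rng.random((8, 8))
S = np.ones((3, 3)) / 9.0          # 3x3 averaging filter

S_pad = np.zeros_like(img)
S_pad[:3, :3] = S                  # zero-padded to the image size
freq_result = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(S_pad)))

# Direct circular convolution for comparison: shift, scale, and add.
direct = np.zeros_like(img)
for i in range(3):
    for j in range(3):
        direct += S[i, j] * np.roll(img, (i, j), axis=(0, 1))
```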
What does the spectrum of the Fourier Transform represent?
The real part of the complex number
The imaginary part of the complex number
The magnitude of the complex number
The amplitude and phase of the sinusoid
What is the DC coefficient of Fourier Transform?
It is the value F(0, 0) of the DFT.
It is the value F(1, 1) of the DFT.
It is the value F(0, 1) of the DFT.
It is the value F(1, 0) of the DFT.
What is the purpose of padding 𝑆 with zeroes in the convolution using DFT approach?
To decrease the size of 𝑆
To increase the size of 𝑆
To make the size of 𝑆 the same as 𝑀
To make the size of 𝑆 smaller than 𝑀
The output of processing an image in the frequency domain is an image.
True
False
The DC coefficient equals the product of all terms in the original image matrix.
True
False
The same algorithm can be used for both the forward and inverse Fourier transforms.
True
False
The power spectrum of the Fourier transform equals the square of the spectrum.
True
False
The Fourier transform of an image cannot be complex because the image values are always real.
True
False
What is the purpose of shifting in image processing?
To increase the dimension of the matrix.
To decrease the dimension of the matrix.
To have the DC coefficient in the center of the matrix.
To have the DC coefficient at the top left of the matrix.
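Shifting is easy to see in NumPy (assumed available): the DC coefficient starts at the top left of the DFT matrix, and `fftshift` moves it to the centre for display.

```python
import numpy as np

# For a constant image, the only nonzero DFT value is the DC coefficient.
f = np.ones((4, 4))
F = np.fft.fft2(f)              # DC coefficient F(0,0) at the top left
F_shifted = np.fft.fftshift(F)  # shifting moves it to the centre
```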
What is the Fourier transform of a single edge image?
Its spectrum is a circle.
Its spectrum is a box.
Its spectrum is a line.
Its spectrum is a triangle.
What is the convolution theorem used for in image processing?
To perform spatial convolution by elementwise multiplication of the Fourier transform by a suitable filter matrix.
To perform frequency convolution by elementwise multiplication of the Fourier transform by a suitable filter matrix.
To perform spatial convolution by elementwise division of the Fourier transform by a suitable filter matrix.
To perform frequency convolution by elementwise division of the Fourier transform by a suitable filter matrix.
What is the purpose of an ideal low pass filter?
To highlight fine details in the image.
To allow specific frequencies and set other frequencies to zero.
To remove the noise from the image.
To remove all frequencies above the cutoff frequency D.
How is a Butterworth filter different from an ideal filter?
A Butterworth filter uses a circle with a less sharp cutoff than ideal filtering.
A Butterworth filter only passes all frequencies below the cutoff frequency D, while an ideal filter only passes all frequencies above the cutoff frequency D.
A Butterworth filter eliminates center values and keeps the others, while an ideal filter keeps the center values and eliminates the others.
What is the equation for the Butterworth low pass filter?
h(x) = 1 / (1 + x/𝝷)^(2n)
h(x) = 1 + (x/𝝷)^(2n)
h(x) = (x/𝝷)^(2n) / (1 + x)
h(x) = 1 / (1 + (x/𝝷)^(2n))
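The Butterworth formula can be turned into a filter mask directly. A NumPy sketch (assumed environment; `D` is used here for the cutoff written 𝝷 in the quiz, and `x` is the distance from the centre of the transform):

```python
import numpy as np

def butterworth_lowpass(shape, D, n):
    """Butterworth low pass mask h(x) = 1 / (1 + (x/D)^(2n)),
    where x is the distance from the centre and D is the cutoff."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (dist / D) ** (2 * n))

H = butterworth_lowpass((256, 256), D=30, n=2)
```

The mask is 1 at the centre and falls to 0.5 exactly at the cutoff distance D; larger n makes the transition sharper.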
What does the size of 𝑛 indicate in the Butterworth low pass filter?
The cutoff frequency
The frequency response
The sharpness of the cutoff
The magnitude of the filter
What effect does the size of the circle have on the final result of high pass filtering?
The larger the circle, the more blurred the image.
The smaller the circle, the less blurred the image.
If the cutoff is large, then more information is removed from the transform, leaving only the highest frequencies; only the edges of the image remain.
If the cutoff is small, then only a small amount of the transform is removed; only the lowest frequencies of the image would be removed.
What is the disadvantage of ideal filtering in image processing?
It introduces unwanted artifacts: ringing, into the result.
It removes too much information from the transform.
It makes the image too blurry.
It amplifies the noise in the image.
How is low pass filtering performed in the frequency domain?
By eliminating center values and keeping the others.
By multiplying the transform by a matrix in such a way that center values are maintained, and values away from the center are either removed or minimized.
By using a circle with a less sharp cutoff.
By passing all frequencies below the cutoff frequency D and replacing all other frequencies with zero.
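The correct option above (maintaining centre values, removing the rest) can be sketched with an ideal circular mask in NumPy (assumed available):

```python
import numpy as np

# Ideal low pass filtering: multiply the shifted transform by a circular
# mask that keeps centre values and zeroes everything away from the centre.
img = np.random.default_rng(3).random((64, 64))
F = np.fft.fftshift(np.fft.fft2(img))   # DC coefficient now at the centre

D = 10                                  # cutoff radius
u = np.arange(64) - 32
mask = (u[:, None] ** 2 + u[None, :] ** 2) <= D ** 2
lowpassed = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Because the DC coefficient is kept, the mean grey level of the image is preserved.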
What is the difference between a Butterworth low pass filter and a Butterworth high pass filter?
A Butterworth low pass filter eliminates center values and keeps the others, while a Butterworth high pass filter keeps the center values and eliminates the others.
A Butterworth low pass filter only passes all frequencies below the cutoff frequency D, while a Butterworth high pass filter only passes all frequencies above the cutoff frequency D.
A Butterworth low pass filter uses a circle with a less sharp cutoff to select frequencies, while a Butterworth high pass filter uses a circle with a sharp cutoff to select frequencies.
A Butterworth low pass filter simply cuts off the Fourier transform at some distance from the center, while a Butterworth high pass filter allows specific frequencies and sets other frequencies to zero.
Which filter results in no "ringing" effect?
Butterworth low pass filter
Butterworth high pass filter
Both filters
Neither filter
An ideal low pass filter only passes frequencies .............. the cutoff frequency
Not equal to
Above
Below
Equal to
........... can be used as the kernel matrix in ideal filtering.
A triangle
A rectangle
A circle
As the cutoff frequency of an ideal low pass filter decreases, the image becomes ......... blurred.
More
Has no effect
Less
Butterworth filtering uses a kernel with a ............ sharp cutoff than ideal filtering.
Equal
More
Less
What is image restoration?
A process that reduces degradations which have occurred during the image acquisition.
A process that adds degradations to an image.
A process that creates images from scratch.
A process that changes the color of an image.
What is the main reason for using rank-order filtering instead of median filtering?
It allows choosing the median of rectangular masks.
It is faster than median filtering.
It allows choosing the median of non-rectangular masks.
It is more effective than median filtering.
What is periodic noise?
A repeating electronic disturbance
A random disturbance
A distortion of the image
A loss of image quality
What are some common types of noise?
Salt and pepper noise, Gaussian noise, and periodic noise.
Blue noise, red noise, and green noise.
Salt noise, pepper noise, and Gaussian noise.
White noise, black noise, and brown noise.
What is the main advantage of using rank-order filtering?
It is faster than median filtering.
It allows choosing the median of non-rectangular masks.
It is more effective than median filtering.
It is suitable for cleaning large amounts of noise.
Can periodic noise be completely removed from an image?
No, only a significant amount can be removed
Yes, with the right filtering method
No, periodic noise cannot be removed from an image
Yes, by increasing the brightness of the image
How can periodic noise be recognized in an image?
By the appearance of bars over the image
By the appearance of spots on the image
By the loss of color in the image
By the blurring of the image
What is the emphasis of this course?
The techniques for dealing with restoration, rather than with the degradations themselves.
The degradations themselves, rather than the techniques for dealing with restoration.
The relationship between image processing and computer programming.
The use of image processing for artistic purposes.
What is the problem with manually applying the median filter?
It is too slow for MATLAB.
It does not work with non-square masks.
It requires sorting at least 9 values for each pixel.
It is not effective for salt and pepper noise.
What is the main disadvantage of using image averaging to reduce Gaussian noise?
It does not work with non-square masks.
It requires sorting at least 9 values for each pixel.
It tends to blur the image.
It is only effective for small amounts of noise.
What is the appropriate way to choose the threshold value 𝝷 in the outlier method?
Choose the lowest possible value.
Choose the highest possible value.
Apply the method with several different thresholds and choose the value that provides the best results.
It doesn't matter which value is chosen.
What is the purpose of low-pass filtering when cleaning salt and pepper noise?
To remove high-frequency components of the image.
To remove low-frequency components of the image.
To remove all the noise from the image.
To add more noise to the image.
What is the purpose of image restoration?
To remove or reduce degradations which have occurred during the acquisition of the image.
To add degradations to the image.
To increase the resolution of the image.
To change the color of the image.
What is the advantage of using median filtering over low-pass filtering for cleaning salt and pepper noise?
Median filtering replaces noisy values with one closer to its surroundings, while low-pass filtering smears the noise over the image.
Low-pass filtering replaces noisy values with one closer to its surroundings, while median filtering smears the noise over the image.
Both methods have the same advantage.
There is no advantage to either method.
What are some optical effects that can cause image degradation?
Out of focus blurring and blurring due to camera motion.
Brightness and contrast adjustment.
Color saturation and hue shift.
Image cropping and resizing.
What is the most effective method for cleaning salt and pepper noise?
Median filtering.
Low-pass filtering.
Rank-order filtering.
Outlier method.
Which type of noise requires the use of frequency domain filtering?
Periodic noise.
Gaussian noise.
Salt and pepper noise.
All of the above.
How does the notch filtering method work?
By making the rows and columns of the spikes zero
By creating a filter consisting of ones with a ring of zeroes; the zeroes lying at a radius equal to the distance of the spikes from the center
By multiplying the image by a Laplacian filter
By increasing the contrast of the image
How does the thickness of the ring in the band reject filtering method affect the noise reduction?
A thicker ring reduces more noise
A thinner ring reduces more noise
The thickness of the ring does not affect noise reduction
The thicker the ring, the more noise is added to the image
Why is the outlier method not particularly suitable for cleaning large amounts of noise?
It is too slow for large amounts of noise.
It is less effective than other methods for large amounts of noise.
It is prone to blurring the image for large amounts of noise.
It is preferred to use the median filter for large amounts of noise.
What is the relationship between the period of the noise and the location of the spikes in the Fourier transform?
The tighter the period of the noise, the further from the center the two spikes will be
The tighter the period of the noise, the closer to the center the two spikes will be
The period of the noise does not affect the location of the spikes
The period of the noise affects only the size of the spikes
What is the advantage of using notch filtering over band reject filtering?
Notch filtering removes noise in the center of the image
Notch filtering is faster than band reject filtering
Notch filtering removes all the noise from the image
Notch filtering is easier to implement than band reject filtering
What is the effect of periodic noise on an image?
It creates bars over the image
It blurs the image
It changes the color of the image
It removes details from the image
What is the idealized form of white noise that is normally distributed?
Salt and pepper noise.
Gaussian noise.
Median noise.
Outlier noise.
What is the first step in the outlier method for cleaning salt and pepper noise?
Choose a threshold value 𝝷.
Compare the pixel value with the mean of its neighbors.
Classify the pixel as noisy or not.
Replace the pixel value with the mean of its neighbors if it is noisy.
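The steps of the outlier method listed above can be sketched in NumPy (assumed available; borders wrap here purely for brevity, and the 8-neighbour mean is the assumed neighbourhood):

```python
import numpy as np

def outlier_filter(img, theta):
    """Outlier method sketch: a pixel whose value differs from the mean of
    its 8 neighbours by more than theta is classified as noisy and
    replaced by that mean."""
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    m = sum(np.roll(img, s, axis=(0, 1)) for s in shifts) / 8.0
    noisy = np.abs(img - m) > theta
    return np.where(noisy, m, img)

img = np.full((8, 8), 0.5)
img[4, 4] = 1.0                        # a single "salt" pixel
restored = outlier_filter(img, theta=0.2)
```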
What is rank-order filtering?
A process that orders the set and takes the 𝑛-th value, for some predetermined value of 𝑛.
A process that takes the average of the set.
A process that takes the maximum value of the set.
A process that takes the minimum value of the set.
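A rank-order filter (sort the values under the mask and take the n-th) can be sketched in NumPy (assumed available), here with a cross-shaped mask to show that the mask need not be rectangular:

```python
import numpy as np

def rank_order_filter(img, rank, offsets):
    """Rank-order filter sketch: gather the values under a (possibly
    non-rectangular) mask, sort them, and take the rank-th value."""
    stack = np.stack([np.roll(img, s, axis=(0, 1)) for s in offsets])
    return np.sort(stack, axis=0)[rank]

# A cross-shaped mask: the pixel itself plus its 4-connected neighbours.
cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
img = np.full((6, 6), 2.0)
img[3, 3] = 9.0                                      # one noisy pixel
out = rank_order_filter(img, rank=2, offsets=cross)  # rank 2 of 5 = median
```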
How does the band reject filtering method work?
By creating a filter consisting of ones with a ring of zeroes; the zeroes lying at a radius equal to the distance of the spikes from the center
By making the rows and columns of the spikes zero
By multiplying the image by a Gaussian filter
By reducing the brightness of the image
What is the trade-off when using average filtering to reduce Gaussian noise?
Averaging does not reduce Gaussian noise.
Averaging reduces Gaussian noise but also blurs the image.
Averaging is more effective than other methods at reducing Gaussian noise.
Averaging works best with rectangular masks.
What are the two methods to eliminate spikes in the spectrum?
Band reject filtering and notch filtering
Average filtering and Gaussian filtering
Median filtering and threshold filtering
Sobel filtering and Canny filtering
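The band reject mask described above (ones with a ring of zeroes at the spike radius) can be built directly in NumPy (assumed available; the ring `width` parameter is an assumption for illustration):

```python
import numpy as np

def band_reject(shape, radius, width=2):
    """Band reject sketch: ones everywhere, with a ring of zeroes whose
    radius equals the distance of the noise spikes from the centre."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = np.ones(shape)
    H[np.abs(dist - radius) <= width] = 0.0
    return H

H = band_reject((128, 128), radius=20)
```

The centre (DC coefficient) is kept, while values on the ring at the spike distance are zeroed.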
What is Gaussian noise?
A type of white noise that is normally distributed.
A type of noise that only affects television signals.
A type of noise that is caused by random fluctuations in the signal.
A type of noise that can only be modelled by adding random values to an image.
What is the purpose of non-maximum suppression in edge detection?
To amplify all gradient values
To suppress all gradient values except the local maxima
To enhance the noise in the image
To highlight all edges in the image
What is edge detection?
A technique to extract edges from images.
A directional change in the intensity or color in an image.
A technique to reduce noise in an image.
A type of filter used in image processing.
What is image segmentation?
It refers to the operation of partitioning an image into component parts, or into separate objects.
It refers to the operation of converting an image into a greyscale image.
It refers to the operation of adding the values of the three channels in an RGB image and dividing by three.
It refers to the operation of removing the edges from an image.
What is the Laplacian edge detector?
A technique to extract edges from images.
A type of filter used in image processing.
An edge detector that uses only one kernel to calculate second order derivatives in a single pass.
An operator used for edge detection in an image.
Which filter is used to find the magnitude and orientation of gradient in the Canny Edge Detector?
Gaussian filter
Sobel filter
Laplacian filter
Prewitt filter
What is the difference between an RGB image and a greyscale image?
An RGB image has three channels, while a greyscale image has only one channel.
An RGB image has only one channel, while a greyscale image has three channels.
An RGB image has a varying background, while a greyscale image has a uniform background.
An RGB image has hidden detail, while a greyscale image does not have hidden detail.
What is non-maximum suppression?
A technique to extract edges from images.
An edge thinning technique that reduces the edge extracted from the gradient value.
A technique to reduce noise in an image.
An edge thinning technique that eliminates all but one accurate response to the edge.
Which step in the Canny Edge Detector involves defining two thresholds for gradient magnitude?
Convolution with Gaussian filter
Finding magnitude and orientation of gradient
Non-maximum suppression
Hysteresis thresholding
What is the purpose of convolving an image with a Gaussian filter in the Canny edge detector?
To reduce noise in the image.
To calculate the gradient of the image.
To perform non-maximum suppression.
To detect horizontal and vertical edges.
What is the Prewitt filter?
An edge thinning technique.
A technique to extract edges from images.
A filter used for edge detection in an image that detects two types of edges: horizontal and vertical edges.
A type of filter used in image processing.
What is the gradient of an image?
A directional change in the intensity or color in an image.
A vector defined by partial derivatives that points in the direction of largest possible intensity increase.
A type of filter used in image processing.
All of the above.
What is the aim of developing methods for automatic edge detection?
To convert an RGB image into a greyscale image.
To measure the size of objects in an image.
To isolate particular objects from their background.
To automatically pick out the edges of an image.
What is the Canny edge detector?
A technique to extract edges from images that is probably the most widely used edge detector in computer vision.
A type of filter used in image processing.
A technique to reduce noise in an image.
An edge detector that uses only one kernel to calculate second order derivatives in a single pass.
What happens if the gradient magnitude value is between the high and low thresholds in hysteresis thresholding?
The edge is labeled as noise.
The edge is labeled as a weak edge.
The edge is labeled as a strong edge.
The edge is removed from the image.
How is the gradient of an image calculated?
By convolution with a derivative kernel.
By using a Laplacian edge detector.
By performing non-maximum suppression.
By using partial derivatives to calculate the gradient in both x and y directions.
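The gradient computation used in the Canny detector (convolution with derivative kernels, here Sobel) can be sketched in NumPy (assumed available; circular borders are used purely for brevity):

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude and orientation via convolution with the Sobel
    derivative kernels."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    Ky = Kx.T
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            shifted = np.roll(img, (i - 1, j - 1), axis=(0, 1))
            gx += Kx[i, j] * shifted
            gy += Ky[i, j] * shifted
    return np.hypot(gx, gy), np.arctan2(gy, gx)

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # vertical step edge between columns 3 and 4
mag, theta = sobel_gradient(img)
```

The magnitude is large only next to the step edge and zero in the flat regions, which is what non-maximum suppression and hysteresis thresholding then refine.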