Why should an image be blurred using a Gaussian Kernel before downsampling?
I recently read that before downsampling an image, it should be blurred using a Gaussian kernel. This way, the downsampled image is better than one obtained by picking a single pixel out of each $N\times N$ block or by averaging over the block. After searching this site as well as Google, I didn't find an exact answer.
But there were questions on how to select $\sigma$ for the blurring. Reading the answers on those posts, I learned that downsampling has to be done in accordance with the sampling theorem, and that downsampling without blurring causes aliasing effects.
- Can someone please explain why an image has to be blurred before downsampling? What is the exact relation to the sampling theorem? What happens when an image is downsampled without blurring, i.e. what are these aliasing effects, and how can I notice them in the downsampled image?
- Why is Gaussian blurring better than averaging over a block?
If you can give some example images, I would be very grateful. Also, I would appreciate all kinds of answers: partial, intuitive, rigorous, anything.
Thank you!
image-processing gaussian downsampling smoothing
asked 5 hours ago
Nagabhushan S N
1455
2 Answers
It should not necessarily be; the need really depends on your application. However, blurring is a safe bet for most needs, and almost mandatory when you want to control the information lost by the downsampling.
Blurring is often another word for low-pass filtering. When an image contains high-frequency content (fast variations), downsampling can produce visually weird or annoying aliasing or moiré artifacts. There is an example in the Wikipedia article on Aliasing: the original image and its downsampled version, displayed at the same size (images not reproduced here).
The ripples on the bottom right are low-frequency artifacts generated by a careless brute-force downsampling. Blurring would attenuate the image's sharpness and dim the borders between the bricks, but it would also reduce the apparent aliasing artifacts.
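The frequency folding that produces such ripples can be checked numerically in one dimension. Here is a minimal sketch (my own illustration, with arbitrary parameters): a cosine at 0.4375 cycles/sample, which lies above the post-decimation Nyquist limit of 0.25, reappears as a low frequency after brute-force decimation by 2:

```python
import numpy as np

# A cosine above the post-decimation Nyquist limit (0.25 cycles/sample
# for decimation by 2); 0.4375 = 7/16 is an arbitrary illustrative choice.
n = np.arange(256)
f = 0.4375
x = np.cos(2 * np.pi * f * n)

# Brute-force downsampling by 2: keep every other sample, no blurring.
naive = x[::2]

# In the decimated signal the tone sits at 0.875 cycles/sample, which
# folds (aliases) down to 1 - 0.875 = 0.125 cycles/sample.
spectrum = np.abs(np.fft.rfft(naive))
peak = int(np.argmax(spectrum)) / len(naive)  # dominant frequency
print(peak)  # -> 0.125, a spurious low frequency, not the original 0.4375
```

A low-pass blur before decimation would have removed the 0.4375-cycles/sample component instead of letting it masquerade as low-frequency ripple.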
The choice of an appropriate blurring filter has a long history in image processing. Gaussian shapes have long been considered near-optimal, for various reasons, for "theoretical" continuous images. Moreover, the Gaussian is both decreasing and symmetric in the spatial and the frequency domain. In the spatial domain, this means that faraway pixels have less influence. In the frequency domain, frequencies are attenuated monotonically from low to high.
Since most images are discretized, reality is somewhat different. Because the Gaussian convolution used to be computationally expensive, early approximating filters were designed with short support, borrowed for instance from Pascal's triangle (binomial coefficients). Later, fast recursive implementations were designed (Deriche, Shen, etc.).
I guess question 1) is answered. For question 2): simple averaging gives equal weight to every pixel in the window. Hence, faraway pixels are given the same importance as closer ones, which is not optimal in regions where images exhibit weak stationarity, such as trends, edges and textures.
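To make the averaging-versus-Gaussian difference concrete, here is a small numerical sketch (my own, not part of the original answer) comparing the magnitude response of a 5-tap block average with that of a 5-tap binomial filter, a short Gaussian approximation taken from Pascal's triangle:

```python
import numpy as np

box = np.ones(5) / 5.0                       # block average (equal weights)
binomial = np.array([1, 4, 6, 4, 1]) / 16.0  # row 4 of Pascal's triangle

w = np.linspace(0.0, np.pi, 512)             # normalized frequency axis

def mag_response(h):
    # |H(w)| = |sum_k h[k] * exp(-j*w*k)| evaluated on the grid w
    k = np.arange(len(h))
    return np.abs(np.exp(-1j * np.outer(w, k)) @ h)

H_box = mag_response(box)
H_bin = mag_response(binomial)

# The box response rebounds after its first null (sidelobes leak high
# frequencies); the binomial response decays monotonically to zero.
print(bool(np.any(np.diff(H_box) > 0)), bool(np.all(np.diff(H_bin) <= 1e-12)))
```

The box average's response, sin(5ω/2)/(5 sin(ω/2)), has sidelobes that let high frequencies through, whereas the binomial response, cos^4(ω/2), decreases monotonically, which is the frequency-domain behavior described above.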
answered 2 hours ago
Laurent Duval
15.3k32057
According to (digital) sampling theory, a signal should be properly band-limited before it is (down)sampled.
A digital low-pass filter limits the bandwidth of the signal and makes it suitable for downsampling without aliasing.
A Gaussian filter is very suitable for this, as it has a number of nice features: the Gaussian function is mathematically tractable, it provides sufficient frequency attenuation, it has a small spatial-domain footprint, and it introduces few noticeable artefacts. Therefore it is the programmer's choice in image processing.
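As a hypothetical sketch of such a pre-filter (the choice sigma = factor/2 is one common heuristic, not something prescribed in this answer), a separable Gaussian blur followed by decimation could look like:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D kernel covering roughly +-3 sigma, normalized to unit DC gain.
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t * t / (2.0 * sigma * sigma))
    return k / k.sum()

def blur_then_decimate(img, factor):
    # Heuristic: sigma tied to the decimation factor (an assumption here).
    k = gaussian_kernel(factor / 2.0)
    # Separable blur: filter every row, then every column ('same' size).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::factor, ::factor]    # keep every factor-th pixel

img = np.random.default_rng(0).random((64, 64))
small = blur_then_decimate(img, 4)
print(small.shape)  # -> (16, 16)
```

Because the Gaussian is separable, the 2-D blur costs two 1-D convolutions per pixel instead of a full 2-D one, which is part of why it is such a practical choice.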
answered 4 hours ago
Fat32
12.6k31127