Performing a “mean blur”

I perform a mean blur on a number array (called grid in the code). I iterate over the array and calculate average values from adjacent "cells".
The kernel size defines how many adjacent values are used for the average calculation.



/**
 * Performs a mean blur with a given kernel size.
 * @param {number[][]} grid - Data grid holding values to blur.
 * @param {number} kernelSize - Describes the size of the "blur"-cut for each cell.
 */
static MeanBlur(grid, kernelSize) {
    // Result
    let newGrid = [];

    // Iterate over the original grid
    for (let row = 0; row < grid.length; row++) {
        let newRow = [];
        for (let col = 0; col < grid[row].length; col++) {
            let adjacentValues = [];

            // Get all adjacent values
            for (let i = 1; i <= kernelSize; i++) {
                adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid));
            }

            // Calculate the average value
            const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length;

            // Add the average value to the current row
            newRow.push(average);
        }
        newGrid.push(newRow);
    }
    return newGrid;
}


/**
 * Returns all adjacent values of a cell in a `number`-array.
 * The amount of adjacent values depends on the kernel size.
 * @param {number} row - Describes the cell's row position.
 * @param {number} col - Describes the cell's column position.
 * @param {number} kernelSize - The kernel size.
 * @param {number[][]} grid - Original data grid.
 */
static _getAdjacentValues(row, col, kernelSize, grid) {
    if (kernelSize < 1) {
        throw "Kernel size value should be at least 1.";
    }

    let adjacentValues = [];

    // north
    if (row - kernelSize >= 0) {
        adjacentValues.push(grid[row - kernelSize][col]);
    }

    // north-east
    if (row - kernelSize >= 0 && col + kernelSize < grid[row].length) {
        adjacentValues.push(grid[row - kernelSize][col + kernelSize]);
    }

    // east
    if (col + kernelSize < grid[row].length) {
        adjacentValues.push(grid[row][col + kernelSize]);
    }

    // south-east
    if (row + kernelSize < grid.length && col + kernelSize < grid[row].length) {
        adjacentValues.push(grid[row + kernelSize][col + kernelSize]);
    }

    // south
    if (row + kernelSize < grid.length) {
        adjacentValues.push(grid[row + kernelSize][col]);
    }

    // south-west
    if (row + kernelSize < grid.length && col - kernelSize >= 0) {
        adjacentValues.push(grid[row + kernelSize][col - kernelSize]);
    }

    // west
    if (col - kernelSize >= 0) {
        adjacentValues.push(grid[row][col - kernelSize]);
    }

    // north-west
    if (row - kernelSize >= 0 && col - kernelSize >= 0) {
        adjacentValues.push(grid[row - kernelSize][col - kernelSize]);
    }

    return adjacentValues;
}



What I am most concerned with is _getAdjacentValues. I think there has to be a more convenient way to get adjacent values without checking values at invalid indices.






What I mean is that, for example, at grid position (0, 0), even with a kernel size of 1, indices like (-1, -1) would be checked automatically by a loop.
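
(For reference, a loop-based variant that guards the indices in one place might look like the following. This is illustrative only, not part of the original post; it assumes rectangular rows and mirrors the behaviour of _getAdjacentValues above.)

// Hypothetical loop-based variant of _getAdjacentValues: one bounds
// check replaces the eight direction-specific conditions.
static _getAdjacentValues(row, col, kernelSize, grid) {
    const offsets = [[-1, 0], [-1, 1], [0, 1], [1, 1], [1, 0], [1, -1], [0, -1], [-1, -1]];
    const adjacentValues = [];
    for (const [dr, dc] of offsets) {
        const r = row + dr * kernelSize;
        const c = col + dc * kernelSize;
        if (r >= 0 && r < grid.length && c >= 0 && c < grid[r].length) {
            adjacentValues.push(grid[r][c]);
        }
    }
    return adjacentValues;
}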










Tags: javascript, array, signal-processing






3 Answers






Answer by Blindman67:
          Convolution Filter



In CG, this type of processing is called a convolution filter, and there are many strategies for handling the edges.



As the previous answer points out, for performance you are best off using typed arrays and avoiding the creation of new arrays for each cell you process.



In your example:




for (let i = 1; i <= kernelSize; i++) {
    adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid));
}

const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length;



You can do the reduction in the same loop in which you build the sub-array, avoiding the nine new arrays you currently create and the need to iterate over them again.



Example: substitute a fixed value at the edges (3×3 kernel).



const edgeVal = 0;
var val, sum = 0;
for (let i = 0; i < 9; i++) {
    val = grid[row + ((i / 3) | 0) - 1];
    if (val !== undefined) {
        val = val[col + (i % 3) - 1];
        sum += val !== undefined ? val : edgeVal;
    } else {
        sum += edgeVal;
    }
}
const average = sum / 9;


Or ignore the edges:



var val;
var sum = 0;
var count = 0;
for (let i = 0; i < 9; i++) {
    val = grid[row + ((i / 3) | 0) - 1];
    if (val !== undefined) {
        val = val[col + (i % 3) - 1];
        if (val !== undefined) {
            sum += val;
            count++;
        }
    }
}
const average = sum / count;


Better yet, work on flattened arrays to avoid the double indexing of 2D arrays, and create the result array up front rather than pushing to it for each cell.
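
A minimal sketch of that suggestion (illustrative names of my own; it assumes a row-major layout where index = row * width + col, and it includes the centre cell, as a standard box blur does):

// Sketch: 3x3 mean blur over a flattened typed array with a
// preallocated result. Out-of-range neighbours are skipped and the
// sum is divided by the actual sample count.
function meanBlurFlat(src, width, height) {
    const dst = new Float64Array(width * height);
    for (let row = 0; row < height; row++) {
        for (let col = 0; col < width; col++) {
            let sum = 0, count = 0;
            for (let dr = -1; dr <= 1; dr++) {
                for (let dc = -1; dc <= 1; dc++) {
                    const r = row + dr, c = col + dc;
                    if (r >= 0 && r < height && c >= 0 && c < width) {
                        sum += src[r * width + c];
                        count++;
                    }
                }
            }
            dst[row * width + col] = sum / count;
        }
    }
    return dst;
}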



          Workers



Convolution filters are ideally suited to parallel processing, with an almost n/p performance increase (n is the number of cells, p is the number of processing nodes).



Using web workers on an i7 with 4 cores (8 virtual cores) can give you an 8-times performance increase. (On Chrome, and maybe other browsers, you can get the core count from window.clientInformation.hardwareConcurrency.)



          Attempting to use more workers than available cores will result in slower processing.
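
A rough sketch of how the work might be partitioned (illustrative only; blurWorker.js is a hypothetical worker script that blurs the row band it receives and posts the result back, and grid and height are assumed to be in scope):

// Sketch: one horizontal band of rows per available core.
const cores = navigator.hardwareConcurrency || 4;
const bandHeight = Math.ceil(height / cores);
for (let i = 0; i < cores; i++) {
    const worker = new Worker("blurWorker.js"); // hypothetical script
    worker.postMessage({
        grid, // structured-clone copy of the input data
        startRow: i * bandHeight,
        endRow: Math.min((i + 1) * bandHeight, height)
    });
    worker.onmessage = (e) => {
        // merge e.data (the blurred band) back into the result here
    };
}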



          GPU



For the ultimate performance the GPU, via WebGL, is available and will provide real-time processing of very large data sets (16M cells and above). A common non-CG use is processing the layers in a convolutional neural network.



An example of a convolution filter using WebGL (about halfway down the page) can easily be adapted to most data types; however, doubles will only be available on a small number of hardware setups.






Answer by user1118321:
This is essentially a box blur. A box blur is separable. That means you can significantly reduce the amount of work you perform by first doing a horizontal-only pass with a kernel of all 1s (so for a 3×3, it would be just [1 1 1]), and then doing a vertical-only pass on the result with the kernel rotated 90°. This website explains it pretty well:




            Filtering an M-by-N image with a P-by-Q filter kernel requires roughly MNPQ multiplies and adds (assuming we aren't using an implementation based on the FFT). If the kernel is separable, you can filter in two steps. The first step requires about MNP multiplies and adds. The second requires about MNQ multiplies and adds, for a total of MN(P + Q).



The computational advantage of separable convolution versus nonseparable convolution is therefore PQ / (P + Q).



            For a 9-by-9 filter kernel, that's a theoretical speed-up of 4.5.
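
A minimal sketch of the two-pass idea (function names are mine, not the answer's; edge cells simply average over fewer samples, so border handling differs slightly from a single 2-D pass):

// Sketch: separable box blur as a horizontal pass followed by a
// vertical pass. `get`/`set` abstract over rows vs. columns.
function boxBlur1D(length, radius, get, set) {
    for (let i = 0; i < length; i++) {
        let sum = 0, count = 0;
        for (let k = -radius; k <= radius; k++) {
            const j = i + k;
            if (j >= 0 && j < length) { sum += get(j); count++; }
        }
        set(i, sum / count);
    }
}

function boxBlurSeparable(grid, radius) {
    const h = grid.length, w = grid[0].length;
    const tmp = grid.map(row => row.slice());
    const out = grid.map(row => row.slice());
    // Horizontal pass: blur each row of grid into tmp
    for (let r = 0; r < h; r++) {
        boxBlur1D(w, radius, c => grid[r][c], (c, v) => { tmp[r][c] = v; });
    }
    // Vertical pass: blur each column of tmp into out
    for (let c = 0; c < w; c++) {
        boxBlur1D(h, radius, r => tmp[r][c], (r, v) => { out[r][c] = v; });
    }
    return out;
}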




If you're going to work on the CPU, there are further techniques you can use for speed-ups as well. Another example is to use a summed area table. In this technique, you iterate over the input image, and the output image is the sum of all pixels above and to the left of the current pixel. It's very fast to calculate. Once you have that, you can find the mean of any n × n area with 2 subtractions and 1 addition.
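
A sketch of the summed-area-table idea (my illustration, not code from the answer):

// Sketch: sat[r][c] holds the sum of all grid values with row <= r and
// col <= c (an "integral image").
function buildSAT(grid) {
    const h = grid.length, w = grid[0].length;
    const sat = Array.from({ length: h }, () => new Float64Array(w));
    for (let r = 0; r < h; r++) {
        for (let c = 0; c < w; c++) {
            sat[r][c] = grid[r][c]
                + (r > 0 ? sat[r - 1][c] : 0)
                + (c > 0 ? sat[r][c - 1] : 0)
                - (r > 0 && c > 0 ? sat[r - 1][c - 1] : 0);
        }
    }
    return sat;
}

// Sum of the rectangle [r0..r1] x [c0..c1] in O(1): two subtractions
// and one addition, as described above. Divide by the area for the mean.
function rectSum(sat, r0, c0, r1, c1) {
    let sum = sat[r1][c1];
    if (r0 > 0) sum -= sat[r0 - 1][c1];
    if (c0 > 0) sum -= sat[r1][c0 - 1];
    if (r0 > 0 && c0 > 0) sum += sat[r0 - 1][c0 - 1];
    return sum;
}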






Answer by aventurin:
Since you not only need the adjacent values but also the number of adjacent cells, I do not think there's a way to avoid the conditions that cover the corner cases. However, you can differentiate between border cells and inner cells and use two separate code paths: one with the if conditions and one without.



If accuracy at the borders is less important, you could take the grid indices modulo the width and height respectively, effectively calculating the blur over a torus. This eliminates the if conditions.
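
For example (an illustrative fragment, not code from the answer; grid, row, col, width, and height are assumed to be in scope):

// Sketch: a modulo that also handles negative indices, so (-1, -1)
// wraps to the opposite edge instead of needing a bounds check.
const wrap = (i, n) => ((i % n) + n) % n;

let sum = 0;
for (let dr = -1; dr <= 1; dr++) {
    for (let dc = -1; dc <= 1; dc++) {
        sum += grid[wrap(row + dr, height)][wrap(col + dc, width)];
    }
}
const average = sum / 9; // always 9 samples on a torus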



If performance is an issue, I would consider using preallocated one-dimensional TypedArrays for input and output. The resulting average should then be calculated directly from the input array and written to the output array, without intermediate lists, concat, or fancy reduce functions.





