
A System For Mixing Images And A Method Thereof

Abstract: In accordance with an embodiment of the present invention, an image mixing system (100) for mixing images with different tone mapping, where the different images are generated from an input image, is provided. The image mixing system (100) includes a clipper and clamper mapping unit (103), a mixer coefficient generator (201), and a mixing unit (104). The mixer coefficient generator (201) generates weights for mixing. The mixing unit (104) mixes the different tone mapped images using a multiplexer (203) whose input lines are: an output of the clipper and clamper mapping unit (103), and a weighted sum of the tone mapped images, namely the outputs of a global tone mapping unit (101) and a local tone mapping unit (102). Ref. Fig.: Figure 1


Patent Information

Application #
202041012880
Filing Date
24 March 2020
Publication Number
40/2021
Publication Type
INA
Invention Field
ELECTRICAL
Status
Email
info@krishnaandsaurastri.com
Parent Application
Patent Number
Legal Status
Grant Date
2025-04-21
Renewal Date

Applicants

BHARAT ELECTRONICS LIMITED
OUTER RING ROAD, NAGAVARA, BANGALORE

Inventors

1. Neelabh Keshav
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013
2. Uday Kumar Urimi
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013
3. Mastan Rao Kongara
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013
4. Chinnappa Rajappa Patil
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore-560013

Specification

FIELD OF INVENTION
[0001] The present invention relates generally to image processing systems and particularly to mixing images with different tone mapping.

BACKGROUND
[0002] Image sensors consist of arrays of pixels which are sensitive to a specific band of the electromagnetic spectrum, such as visible, near-infrared, infrared, etc. These sensors generate a current or a voltage based on the incident electromagnetic radiation. Analog to Digital Converters convert the analog signals to digital signals and thus produce matrix data at a rate called frames per second.
[0003] Image and video quality are heavily influenced by the contrast of a scene. The perception of contrast by the human eye is quite complex and depends, among other factors, on the content of the scene itself. A better contrast image displays greater variation in image intensity levels, though this may not always be desired.
[0004] When tested on a large set of different conditions, many current methods may fail in certain scenes, such as low dynamic range images, scenes with continuously varying intensity, etc. Applying the same technique to all images or video frames regardless of their content produces a poor quality image in certain cases, even though the same technique may be far superior under other conditions.
[0005] Most imaging sensors produce high dynamic range digital data with as many as 8000 different intensity levels or even more. Traditional display devices, on the other hand, may operate using only 256 display levels. Further, human eyes barely perceive half of these levels. Working with a greater number of bits incurs computational cost and high memory bandwidth. So, not only does the image or video data have to be enhanced for a better viewing experience, but its size also needs to be reduced.
[0006] Histogram Equalization is a non-linear transformation for dynamic range reduction of a high dynamic range digital image, wherein high dynamic range input data is mapped to low dynamic range data. In the process, the contrast of the image may undergo a change which may not always produce a better visual differentiation among the various features within the image. The histogram equalization algorithm involves generation of a gain based on the cumulative distribution function such that the dominant dynamic range is mapped to a wider set of tones compared to the less occurring dynamic range. Histogram Equalization can be done at a global level for the complete frame or at a local level, block by block, followed by matching intensities across all the blocks. Other algorithms like Automatic Gain Control, Histogram Projection, etc. may also be used for dynamic range reduction in a similar way.
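For illustration, a minimal Python sketch of a cumulative-distribution-function based global histogram equalization of the kind described above is given below; the function name, the input level count (8192) and the output level count (256) are assumptions, not part of the specification.

```python
import numpy as np

def global_histogram_equalization(image, input_levels=8192, output_levels=256):
    """Map high dynamic range integer data to a lower range using the
    cumulative distribution function, so that dominant intensity ranges
    receive a wider set of output tones."""
    image = np.clip(image, 0, input_levels - 1).astype(np.int64)
    hist = np.bincount(image.ravel(), minlength=input_levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                    # normalise to [0, 1]
    mapping = np.round(cdf * (output_levels - 1)).astype(np.uint16)
    return mapping[image]                             # apply the tone map per pixel
```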
[0007] In EP2226762A1, an image is divided into several blocks, each of which is histogram equalized. The image is again divided into several blocks such that the new blocks contain parts of adjacent regions, and the equalization functions computed for the first set of blocks are applied using bilinear interpolation. The image is further divided into a third set of regions such that each of these regions comprises a corresponding area of a corresponding first corner region of the image, and the first equalization function computed for the first corresponding region is applied.
[0008] In US10062154, a contrast stretching module is used to generate a histogram by compressing and enlarging a contrast of brightness bins of an equalized histogram of brightness component values. The previous input frame of image pixel data is used to generate the equalized histogram of brightness component values, which is applied to the current frame to form contrast enhanced pixels. A flat region detection module, based partly on the variance of the brightness component, is used to count the number of flat pixels after classifying the pixels as flat or not. A noise detection module is used to detect whether a pixel is noisy or not. Contrast enhancement is restricted based partly on the number of pixels containing noise.
[0009] In US9055227, a luminance histogram is computed, and a first distance is computed from the luminance histogram to a plurality of predetermined luminance histograms. First control points for global tone mapping are estimated from first pre-determined control points obtained by computing luminance histograms for training images by a plurality of users. A global tone mapping curve is applied to the digital image. Principal component analysis is applied to the luminance histograms of the training images in the training prototype set. A second distance is calculated from a pixel to the centers of a predetermined number of neighboring image blocks after dividing the image into a plurality of image blocks. A local tone mapping curve is generated for the image blocks based on second control points. The local tone mapping and global tone mapping curves are merged to generate the tone mapped pixel value.
[0010] There is still a need for a method of mixing images that provides better contrast and minimizes flickering.

SUMMARY
[0011] This summary is provided to introduce concepts related to an image mixing system and a method for mixing images. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[0012] In an embodiment of the present invention, an image mixing system is provided. The image mixing system includes a local tone mapping unit, a clipper and clamper mapping unit, a global tone mapping unit, a mixing unit, and a temporal smoothening unit. The mixing unit includes a mixer coefficient generator, a multiplier, another multiplier, an adder, and a 2×1 multiplexer. The local tone mapping unit generates a first image (Image 1) based on an input image. The clipper and clamper mapping unit generates a second image (Image 2) based on the input image. The global tone mapping unit generates a third image (Image 3) based on the input image. The mixer coefficient generator generates a mixing coefficient (a) based on the input image. The multiplier applies the mixing coefficient (a) to the first image (Image 1) to generate a Weighted Image 1. The other multiplier applies a complementary mixing coefficient (1 - a) to the third image (Image 3) to generate a Weighted Image 3. The adder sums the Weighted Image 1 and the Weighted Image 3 to generate an Image A. The 2×1 multiplexer has the Image A and the Image 2 (Image B) as inputs and a select line controlled by a decision-making circuit (202), and generates a fourth image (Image 4). The temporal smoothening unit temporally smoothens the fourth image (Image 4) to generate an output image.
[0013] In another embodiment of the present invention, a method for mixing images in an image mixing system is provided. The method includes generating a first image (Image 1) by a local tone mapping circuit based on an input image. A clipper and clamper mapping unit generates a second image (Image 2) based on the input image. A global tone mapping unit generates a third image (Image 3) based on the input image. The method further includes generating a mixing coefficient (a) by a mixer coefficient generator based on the input image. A multiplier applies the mixing coefficient (a) to the first image (Image 1) to generate a Weighted Image 1. Another multiplier applies a complementary mixing coefficient (1 - a) to the third image (Image 3) to generate a Weighted Image 3. An adder adds the Weighted Image 1 and the Weighted Image 3 to generate an Image A. A 2×1 multiplexer multiplexes the Image A with the Image 2 (Image B) to generate a fourth image (Image 4). The 2×1 multiplexer has a select line controlled by a decision-making circuit. The method further includes temporally smoothening the fourth image (Image 4) to generate an output image.
[0014] In an exemplary embodiment, the clipper and clamper mapping unit includes a first occurrence counter, a second occurrence counter, an energy center generator, a subtractor, and a clipper circuit. The first occurrence counter receives the input image and a first decision threshold. The input image corresponds to an input matrix data. The first occurrence counter generates a first intermediate matrix data having elements from the input matrix data occurring more than the first decision threshold. The second occurrence counter receives the first intermediate matrix data and generates a second intermediate matrix data having elements from the first intermediate matrix data occurring more than a second decision threshold. The energy center generator calculates an energy center of the input matrix data. The subtractor clamps the input matrix data with the energy center to generate a third intermediate matrix data. The clipper circuit limits a range of the third intermediate matrix data to a predetermined range to generate the second image (Image 2).
[0015] In an exemplary embodiment, the mixer coefficient generator includes a dynamic range unit, a motion awareness unit, a change awareness unit, an OR gate, and a digital to analog converter. The dynamic range unit calculates a digital dynamic range data of the input image. The dynamic range unit sets a low dynamic range flag (Low_Dynamic_Range flag) when the digital dynamic range data is below a low threshold. The dynamic range unit sets a high dynamic range flag (High_Dynamic_Range flag) when the digital dynamic range data is above a high threshold. The dynamic range unit generates a moderate signal when neither the High_Dynamic_Range flag nor the Low_Dynamic_Range flag is set, in which case one or more most significant bits of the digital dynamic range data are directly sent to a digital to analog convertor. The motion awareness unit is activated by the high dynamic range flag (High_Dynamic_Range flag). The motion awareness unit detects motion in the input image. The motion awareness unit sets a high motion flag (High_Motion flag) when a high amount of motion is detected. The motion awareness unit sets a low motion flag (Low_Motion flag) when a low amount of motion is detected. The change awareness unit is activated by the low motion flag (Low_Motion flag). The change awareness unit detects changes in the input image. The change awareness unit sets a high change flag (High_Change flag) when a high amount of change is detected. The change awareness unit sets a low change flag (Low_Change flag) when a low amount of change is detected. The OR gate has the low dynamic range flag (Low_Dynamic_Range flag), the high motion flag (High_Motion flag), and the high change flag (High_Change flag) as inputs and generates an output signal. The digital to analog converter receives the bits of the digital dynamic range data when the moderate signal is set, the output signal of the OR gate, and the low change flag (Low_Change flag) as inputs, and generates the mixer coefficient (a) based on a gain bias, an offset bias, and the digital input data consisting of the output of the OR gate, the one or more most significant bits of the digital dynamic range data and the Low_Change flag.
[0016] In an exemplary embodiment, the energy center is an average of a minimum element and a maximum element of the second intermediate matrix data.
[0017] In an exemplary embodiment, the energy center is updated based on at least two elements from the second intermediate matrix data.
[0018] In an exemplary embodiment, the clipper and clamper mapping unit includes a second decision threshold circuit that generates the second decision threshold. The clipper and clamper mapping unit increases the second decision threshold when the number of elements of the second intermediate matrix data is beyond a predefined range:
energy_threshold_new = energy_threshold_old + Δ ………………………. (2)
where
Δ = (γ × (total_valid_occurrence – d)) + s …………………….……………. (3)
and decreases the second decision threshold when the number of elements of the second intermediate matrix data is below a predefined range:
energy_threshold_new = energy_threshold_old – θ …………………… (4)
where
θ = (F × (total_valid_occurrence – ρ)) + ß ……………...………………... (5)
wherein Δ, γ, d, s, θ, F, ρ and ß are whole numbers in (2)-(5).
[0019] In an exemplary embodiment, the decision-making circuit includes a Schmitt trigger circuit having the digital dynamic range data of the input image data as input and the output controlling the select line of the 2×1 multiplexer.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0020] The detailed description is described with reference to the accompanying figures.
[0021] Figure 1 illustrates a schematic block diagram of an image mixing system in accordance with an embodiment of the present invention.
[0022] Figure 2 illustrates a schematic block diagram of a mixer and a mixer coefficient generator in accordance with an embodiment of the present invention.
[0023] Figure 3 illustrates a schematic block diagram of a clipper and clamper mapping unit in accordance with an embodiment of the present invention.
[0024] Figure 4 illustrates a schematic block diagram of a mixer coefficient generator in accordance with an embodiment of the present invention.
[0025] Figure 5 illustrates a flowchart of a method for mixing images in accordance with an embodiment of the present invention.
[0026] Figure 6 illustrates a flowchart of a method of generating a second image in accordance with an embodiment of the present invention.
[0027] Figure 7 illustrates a flowchart of a method of generating a mixer coefficient in accordance with an embodiment of the present invention.
[0028] Figure 8 illustrates a method of generating a second decision threshold in accordance with an embodiment of the present invention.
[0029] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present invention.
[0030] Similarly, it will be appreciated that any flow chart, flow diagram, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION
[0031] The various embodiments of the present invention provide an image mixing system and a method for mixing images.
[0032] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details.
[0033] One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[0034] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring the present invention.
[0035] Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
[0036] The appearances of the phrase “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
[0037] Referring now to Figure 1, a schematic block diagram of an image mixing system (100) is illustrated in accordance with an embodiment of the present invention. The image mixing system (100) mixes images generated using different tone mapping. The image mixing system (100) includes a global tone mapping unit (101), a local tone mapping unit (102), a clipper and clamper mapping unit (103), a mixing unit (104), a first tone map update circuit (105), and a second tone map update circuit (106). The first tone map update circuit (105) is connected to the global tone mapping unit (101). The second tone map update circuit (106) is connected to the local tone mapping unit (102). The global tone mapping unit (101), the local tone mapping unit (102), and the clipper and clamper mapping unit (103) are connected to the mixing unit (104).
[0038] An input matrix data corresponding to an input image is fed to the global tone mapping unit (101), the clipper and clamper mapping unit (103) and the local tone mapping unit (102). A mapping table for the global tone mapping unit (101) is controlled using the first tone map update circuit (105). For the local tone mapping unit (102), the mapping table is controlled using the second tone map update circuit (106).
[0039] A first image (Image 1) is an output of the local tone mapping unit (102), a second image (Image 2) is an output of the clipper and clamper mapping unit (103), and a third image (Image 3) is an output of the global tone mapping unit (101). These three images (Images 1-3) are mixed in a proportion generated in the mixing unit (104).
[0040] Referring now to Figure 2, a schematic block diagram of a mixer and a mixer coefficient generator is illustrated in accordance with an embodiment of the present invention. The mixing unit (104) includes a mixer coefficient generator (201), a decision-making circuit (202), a multiplexer (203), and a temporal smoothening unit (204).
[0041] The mixer coefficient generator (201) generates a mixing coefficient (a), such that 0 ≤ a ≤ 1. The first image (Image 1) is fed along with the mixing coefficient (a) to a multiplier (210). The mixing coefficient (a) is also fed to a subtractor (211) that subtracts the mixing coefficient (a) from 1. The output of this subtractor (211) is fed to a multiplier (212) where it is multiplied with the third image (Image 3). The outputs of both multipliers (210, 212) are fed to an adder (213). The output of the adder (213) is connected to an input data line of the 2×1 multiplexer (203). The other input data line of the 2×1 multiplexer (203) is the second image (Image 2). A select line of the 2×1 multiplexer (203) is fed from the output of the decision-making circuit (202). The output of the 2×1 multiplexer (203) is a fourth image (Image 4). Temporal smoothening is applied on the fourth image (Image 4) in the temporal smoothening unit (204) to generate an output image.
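A minimal Python sketch of this mixing path is given below for illustration only; the function name, the multiplexer select polarity (which input the select line picks) and the smoothening weight mu are assumptions, not part of the specification.

```python
def mix_frame(image1, image2, image3, a, select_weighted, prev_output=None, mu=0.5):
    """Weighted mix of the local (Image 1) and global (Image 3) tone mapped
    images, a 2x1 multiplex against the clipper/clamper output (Image 2),
    and a simple temporal smoothening of the result."""
    image_a = a * image1 + (1.0 - a) * image3          # multipliers (210, 212) and adder (213)
    image4 = image_a if select_weighted else image2    # 2x1 multiplexer (203); polarity assumed
    if prev_output is None:                            # first frame: nothing to smoothen against
        return image4
    return mu * image4 + (1.0 - mu) * prev_output      # temporal smoothening unit (204); mu assumed
```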
[0042] Referring now to Figure 3, a schematic block diagram of the clipper and clamper mapping unit (103) is illustrated in accordance with an embodiment of the present invention. The clipper and clamper mapping unit (103) includes a first occurrence counter (301), a second decision threshold circuit (302), a second occurrence counter (303), an energy center generator (304), and a clipper circuit (305).
[0043] The input matrix data is fed as input to the clipper and clamper mapping unit (103). The input matrix data is fed to the first occurrence counter (301). Based on a first decision threshold, the values of elements in the input matrix data which occur more than the first decision threshold are retained, together with their numbers of occurrences, as a first intermediate matrix data. The first intermediate matrix data is fed to the second occurrence counter (303). The second decision threshold circuit (302) generates a second threshold value which is used by the second occurrence counter (303). The second occurrence counter (303) performs a function similar to the first occurrence counter (301) and retains the values of only those elements whose occurrence is more than the second decision threshold. The second occurrence counter (303) generates the second intermediate matrix data.
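For illustration, a sketch of such an occurrence counter is given below; the function name is an assumption. The first counter would be applied to the flattened input matrix values, and the second counter applies the same filtering to the retained values using the second decision threshold.

```python
from collections import Counter

def occurrence_counter(values, decision_threshold):
    """Retain only the intensity values that occur more than the decision
    threshold, together with their occurrence counts (counters 301, 303)."""
    counts = Counter(values)
    return {value: count for value, count in counts.items() if count > decision_threshold}
```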
[0044] An energy center of the second intermediate matrix data is calculated in the energy center generator (304) as Energy_Centre_Curr_Frame. A mean of the minimum and maximum values of the second intermediate matrix data may be taken as the energy center, which is generated in the energy center generator (304). The value of the generated energy center is smoothened in the time domain using equation 1:
[0045] Curr_Energy_Centre = ( µ0 × Energy_Centre_Curr_Frame) + ( (1-µ0) × Prev_Energy_Centre)…………………………………………………(equation 1)
[0046] where 0 < µ0 < 1
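Equation 1 is a simple exponential smoothing and may be sketched as below; the value 0.3 for µ0 is illustrative only, any value with 0 < µ0 < 1 satisfies the specification.

```python
def smoothen_energy_centre(energy_centre_curr_frame, prev_energy_centre, mu0=0.3):
    """Equation 1: temporal smoothening of the energy centre."""
    return mu0 * energy_centre_curr_frame + (1.0 - mu0) * prev_energy_centre
```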
[0047] In the second decision threshold circuit (302), the second decision threshold is calculated based on feedback from the second occurrence counter (303). If the total valid occurrences in the second occurrence counter (303), based on the output of the second decision threshold circuit (302), are within certain set limits, the output of the second decision threshold circuit (302) remains the same. However, if the total valid occurrences exceed the set limits, the second decision threshold is increased using equation 2:
[0048] energy_threshold_new = energy_threshold_old + Δ ……….(equation 2)
[0049] where Δ = (γ × (total_valid_occurrence – d)) + s …………(equation 3)
[0050] where Δ, γ, d and s are whole numbers
[0051] The values of these constants are set such that the value by which the second decision threshold is increased is directly proportional to the distance of the desired number of valid levels from the current number of valid levels. Also, the value in any case does not exceed a fixed value. This imparts a smooth change in the second decision threshold value.
[0052] Similarly, if the total valid occurrences fall below the set limits, the threshold is decreased using equation 4:
[0053] energy_threshold_new = energy_threshold_old – θ ………(equation 4)
[0054] where θ = (F × (total_valid_occurrence – ρ)) + ß ……..…(equation 5)
[0055] θ, F, ρ and ß are whole numbers
[0056] The values of these constants are set such that the value by which the second decision threshold is decreased is directly proportional to the distance of the desired number of valid levels from the current number of valid levels. Also, the value in any case does not exceed a fixed value. This imparts a smooth change in the second decision threshold value.
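A minimal sketch of this threshold adaptation, covering equations (2) to (5), is given below; the constant values (gamma, d, s, f, rho, ss, standing for γ, d, s, F, ρ, ß) and the max_step cap used to keep the change below a fixed value are illustrative assumptions only.

```python
def update_second_decision_threshold(threshold_old, total_valid_occurrence,
                                     lower_limit, upper_limit,
                                     gamma=1, d=0, s=1, f=1, rho=0, ss=1,
                                     max_step=16):
    """Sketch of equations (2)-(5): raise or lower the second decision
    threshold when the valid occurrences leave the set limits, with the
    change capped so it never exceeds a fixed value."""
    if total_valid_occurrence > upper_limit:
        step = gamma * (total_valid_occurrence - d) + s      # equation (3)
        return threshold_old + min(step, max_step)           # equation (2), change capped
    if total_valid_occurrence < lower_limit:
        step = f * (total_valid_occurrence - rho) + ss       # equation (5)
        return threshold_old - min(step, max_step)           # equation (4), change capped
    return threshold_old                                     # within limits: unchanged
```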
[0057] The value of Curr_Energy_Centre is updated only when there are at least two valid occurrences.
[0058] The clamping operation is performed on the input matrix data using the output of the energy center generator (304) by way of a subtractor (310) to generate a third intermediate matrix. Finally, the clipper circuit (305) clips values of the third intermediate matrix that lie beyond the intended minimum and maximum limits. The clipper circuit (305) limits the range of values of the elements of the third intermediate matrix to a predefined range. One such intended minimum and maximum value may be 0 and 255, respectively, for an 8-bit output range.
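For illustration, a sketch of the subtract-then-clip step is given below; how the energy centre is placed inside the output range (here: shifted to the middle of the 0..255 range) is an assumption, since the specification only states that the data is clamped with the energy centre and then clipped.

```python
import numpy as np

def clamp_and_clip(input_matrix, curr_energy_centre, out_min=0, out_max=255):
    """Subtractor (310) plus clipper circuit (305): offset the data so the
    energy centre sits near the middle of the output range, then clip to
    the intended limits (0..255 here, for an 8-bit output)."""
    mid = (out_min + out_max + 1) // 2
    shifted = input_matrix.astype(np.int64) - (int(curr_energy_centre) - mid)
    return np.clip(shifted, out_min, out_max)
```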
[0059] Referring now to Figure 4, a schematic block diagram of the mixer coefficient generator (201) is illustrated in accordance with an embodiment of the present invention. The mixer coefficient generator (201) includes a dynamic range unit (401), a motion awareness unit (402), a change awareness unit (403), an OR gate (404), and a digital to analog convertor (405).
[0060] The input matrix is fed to the dynamic range unit (401), the motion awareness unit (402), and the change awareness unit (403). The dynamic range unit (401) keeps a count of the matrix elements whose occurrence is more than a set threshold. The total number of retained elements is called the digital dynamic range data and is indicative of the dynamic range; it is also sent to the decision-making circuit (202), which contains a Schmitt trigger circuit that provides hysteresis and is used to set upper and lower limits for switching between the two inputs of the multiplexer (203). Ranges can be set for three levels of dynamic range, i.e., low, moderate and high. For a moderate dynamic range, the value of the dynamic range is directly sent to the digital to analog convertor (405). When the dynamic range is high, the High_Dynamic_Range flag is set, the motion awareness unit (402) output is enabled, and the level of motion in the image is estimated as high or low. When motion is low, the change awareness unit (403) output is enabled. The Low_Dynamic_Range flag, High_Motion flag and High_Change flag are fed to the OR gate (404). The output of the OR gate (404) is also fed to the digital to analog converter (405).
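The flag cascade described above may be sketched as follows; the threshold parameter names and the returned moderate indicator are assumptions for illustration and do not form part of the specification.

```python
def generate_flags(dynamic_range, motion_level, change_level,
                   dr_low, dr_high, motion_threshold, change_threshold):
    """Sketch of the Figure 4 cascade: dynamic range classification, motion
    awareness gated by High_Dynamic_Range, change awareness gated by
    Low_Motion, and the OR gate (404)."""
    low_dynamic_range = dynamic_range < dr_low
    high_dynamic_range = dynamic_range > dr_high
    high_motion = low_motion = high_change = low_change = False
    if high_dynamic_range:                        # motion awareness unit (402) enabled
        high_motion = motion_level > motion_threshold
        low_motion = not high_motion
        if low_motion:                            # change awareness unit (403) enabled
            high_change = change_level > change_threshold
            low_change = not high_change
    or_output = low_dynamic_range or high_motion or high_change   # OR gate (404)
    moderate = not (low_dynamic_range or high_dynamic_range)
    return or_output, low_change, moderate
```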
[0061] The digital dynamic range data in the dynamic range unit (401) is computed using the output of the first occurrence counter (301). The number of different elements retained is taken as Dynamic_Range_Curr_Frame and is temporally smoothened as follows:
[0062] Curr_Dynamic_Range = (µ1 × Dynamic_Range_Curr_Frame) + ((1-µ1) × Prev_Dynamic_Range)………………………………………………(equation 6)
[0063] where 0 < µ1 < 1
[0064] Two Schmitt trigger circuits are used, one each for the High_Dynamic_Range flag and the Low_Dynamic_Range flag. Based on the configuration of the Schmitt trigger circuits, the two flags are either set or reset. When neither of the two flags is set, one or more most significant bits of the digital dynamic range data are fed to the digital to analog converter (405). The High_Motion flag and Low_Motion flag are set or reset based on the assigned threshold.
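For illustration, one such Schmitt trigger flag with hysteresis may be sketched as below; the class name and the limit values are assumptions, the behaviour (set above the upper limit, reset below the lower limit, unchanged in between) follows the hysteresis described above.

```python
class SchmittTriggerFlag:
    """Hysteresis flag of the kind used for the High_Dynamic_Range and
    Low_Dynamic_Range flags."""
    def __init__(self, lower_limit, upper_limit, state=False):
        self.lower_limit = lower_limit
        self.upper_limit = upper_limit
        self.state = state

    def update(self, digital_dynamic_range):
        if digital_dynamic_range >= self.upper_limit:
            self.state = True       # set once the value crosses the upper limit
        elif digital_dynamic_range <= self.lower_limit:
            self.state = False      # reset only after it falls below the lower limit
        return self.state           # unchanged while between the two limits
```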
[0065] The motion in the image scene is partly calculated in the motion awareness unit (402) using the gain level calculated for localized histogram equalization. The absolute difference of two consecutive gain levels in the temporal domain for a region is summed over all the regional blocks. Based on the set threshold, the High_Change or Low_Change flag is set or reset. The Low_Change flag is sent to the input of the digital to analog converter (405). The change calculated partly controls the temporal smoothening weights assigned in the temporal smoothening unit (204).
[0066] The first tone map update circuit (105) and the second tone map update circuit (106) in Figure 1 control the update rate of the global tone mapping unit (101) and the local tone mapping unit (102) by smoothening the generated gain in the temporal domain. When a high weight is given to the previous gain, the rate of update slows, whereas when a high weight is given to the present gain, the update speeds up. The change calculated in the change awareness unit (403) partly controls the temporal smoothening weights assigned to the first tone map update circuit (105) and the second tone map update circuit (106).
[0067] In accordance with an embodiment of the present invention, the image mixing system (100) for mixing images with different tone mapping, where the different images are generated from the input image, is provided. The image mixing system (100) includes the clipper and clamper mapping unit (103), the mixer coefficient generator (201), and the mixing unit (104). The mixer coefficient generator (201) generates the weights for mixing. The mixing unit (104) mixes the different tone mapped images using the multiplexer (203). The input lines of the multiplexer (203) are: the output of the clipper and clamper mapping unit (103), and the weighted sum of the tone mapped images, namely the outputs of the global tone mapping unit (101) and the local tone mapping unit (102).
[0068] In an embodiment, the clipper and clamper mapping unit (103) includes the energy centre generator (304) that clamps the data based on the generated energy centre. The clipper and clamper mapping unit (103) further includes the first and second occurrence counters (301 and 303). The first occurrence counter (301) uses the first decision threshold to retain all values of the matrix which occur more than the set threshold in the matrix. The retained values are sent as input to the second occurrence counter (303), which uses the second decision threshold and retains only those values whose occurrence is more than the set threshold. The energy center is the average of the minimum and maximum values retained by the second occurrence counter (303), temporally smoothened.
[0069] In an embodiment, the second decision threshold is retained when the number of valid occurrences calculated in the previous frame is within a set limit; otherwise, the threshold is increased when the valid levels increase, as per equations (2) and (3). In that case, the values of Δ, γ, d and s are set such that the value by which the threshold is increased is directly proportional to the distance of the desired number of valid levels from the current number of valid levels. Also, the value in any case does not exceed a fixed value. The energy threshold is decreased when the valid levels fall below the set range, as per equations (4) and (5). In that case, the values of θ, F, ρ and ß are set such that the value by which the threshold is decreased is directly proportional to the distance of the desired number of valid levels from the current number of valid levels. Also, the value in any case does not exceed a fixed value. The value of Curr_Energy_Centre is updated only when there are at least two valid occurrences. The clipper circuit (305) clips the values beyond the intended range of the output.
[0070] In an embodiment, the mixer coefficient generator (201) includes the dynamic range unit (401), the motion awareness unit (402), and the change awareness unit (403). The dynamic range unit (401) is similar to the first occurrence counter (301). The output of the dynamic range unit (401) is the number of elements retained. The digital to analog converter (405) generates the mixer coefficient with the input fed as the digital dynamic range data, Low_Change flag and output of the OR gate (404) whose inputs are Low_Dynamic_Range flag, High_Motion flag and High_Change flag. High_Dynamic_Range flag, Low_Dynamic_Range flag and moderate dynamic range are set using a predetermined range. For moderate dynamic range, one or more most significant bits of the digital dynamic range data are directly sent to the digital to analog convertor. High_Change and Low_Change flag are also set/reset based on the set range. A gain and offset bias are used to control the output of the digital to analog converter (405) to set the output within a limited range based on digital input data consisting of output of the OR gate (404), one or more most significant bits of the digital dynamic range data and Low_Change flag.
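A hedged sketch of how the digital to analog converter (405) could form the mixer coefficient from its digital inputs is given below; the bit layout of the digital word and the default bias values are assumptions for illustration, since the specification only states that a gain bias and an offset bias limit the output to a range with 0 ≤ a ≤ 1.

```python
def mixer_coefficient(or_output, dr_msbs, low_change, gain_bias=1.0 / 255, offset_bias=0.0):
    """Sketch of the DAC (405): a digital word built from the OR gate output,
    the dynamic range MSBs and the Low_Change flag is scaled by a gain bias,
    shifted by an offset bias, and limited to [0, 1]."""
    code = (int(or_output) << 7) | ((int(dr_msbs) & 0x3F) << 1) | int(low_change)
    a = gain_bias * code + offset_bias
    return max(0.0, min(1.0, a))
```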
[0071] In an embodiment, the mixing unit (104) includes the decision-making circuit (202) that controls the choice of the output of the multiplexer (203). A Schmitt trigger circuit controls the output of the decision-making circuit (202), with the digital dynamic range data of the input image used as input to the Schmitt trigger circuit.
[0072] Referring now to Figure 5, a flowchart of a method for mixing images is illustrated in accordance with an embodiment of the present invention.
[0073] At step 502, the local tone mapping unit (102) generates the first image (Image 1) based on the input image.
[0074] At step 504, the clipper and clamper mapping unit (103) generates the second image (Image 2) based on the input image.
[0075] At step 506, the global tone mapping unit (101) generates the third image (Image 3) based on the input image.
[0076] At step 508, the mixer coefficient generator (201) generates a mixing coefficient (a) based on the input image.
[0077] At step 510, the multiplier (210) applies the mixing coefficient (a) to the first image (Image 1) to generate the Weighted Image 1.
[0078] At step 512, the multiplier (212) applies the complementary mixing coefficient (1 - a) to the third image (Image 3) to generate the Weighted Image 3.
[0079] At step 514, the adder (213) sums the Weighted Image 1 and the Weighted Image 3 to generate the Image A.
[0080] At step 516, the 2×1 multiplexer (203) multiplexes the Image A and the Image 2 (Image B) based on the select line controlled by the decision-making circuit (202) to generate the fourth image (Image 4).
[0081] At step 518, the temporal smoothening unit (204) temporally smoothens the fourth image (Image 4) to generate the output image.
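For reference, the complete flow of steps 502 to 518 may be sketched as a single routine; the callables passed in are stand-ins for the units described above and are assumptions for illustration, not an implementation of those units.

```python
def mix_images(input_image, local_tm, clipper_clamper, global_tm,
               coefficient_gen, decision, temporal_smoothen):
    """End-to-end sketch of the Figure 5 flow."""
    image1 = local_tm(input_image)                          # step 502
    image2 = clipper_clamper(input_image)                   # step 504
    image3 = global_tm(input_image)                         # step 506
    a = coefficient_gen(input_image)                        # step 508
    image_a = a * image1 + (1 - a) * image3                 # steps 510-514
    image4 = image_a if decision(input_image) else image2   # step 516
    return temporal_smoothen(image4)                        # step 518
```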
[0082] Referring now to Figure 6, a flowchart of a method of generating the second image (Image 2) is illustrated in accordance with an embodiment of the present invention.
[0083] At step 602, the first occurrence counter (301) receives the input image and the first decision threshold, wherein the input image corresponds to the input matrix data. The first occurrence counter (301) generates the first intermediate matrix data having elements from the input matrix data occurring more than the first decision threshold.
[0084] At step 604, the second occurrence counter (303) receives the first intermediate matrix data. The second occurrence counter (303) generates the second intermediate matrix data having elements from the first intermediate matrix data occurring more than the second decision threshold.
[0085] At step 606, the energy center generator (304) calculates the energy center of the second intermediate matrix data.
[0086] At step 608, the subtractor (310) clamps the input matrix data with the energy center to generate the third intermediate matrix data.
[0087] At step 610, the clipper circuit (305) limits the range of the third intermediate matrix data to the predetermined range to generate the second image (Image 2).
[0088] Referring now to Figure 7, a flowchart of a method of generating the mixer coefficient (a) is illustrated in accordance with an embodiment of the present invention.
[0089] At step 702, the dynamic range unit (401) calculates the digital dynamic range data indicative of the dynamic range of the input image.
[0090] At step 704, the dynamic range unit (401) sets the low dynamic range flag (Low_Dynamic_Range flag) when the digital dynamic range data is below the low threshold and sets the high dynamic range flag (High_Dynamic_Range flag) when the digital dynamic range data is above the high threshold.
[0091] At step 706, the dynamic range unit (401) generates the moderate signal when neither Low_Dynamic_Range flag, nor High_Dynamic_Range flag is set. In this case, one or more most significant bits of the digital dynamic range data are directly sent to the digital to analog convertor (405).
[0092] At step 708, the motion awareness unit (402) is activated by the high dynamic range flag (High_Dynamic_Range flag). When active, the motion awareness unit (402) detects motion in the input image.
[0093] At step 710, the motion awareness unit (402) sets the high motion flag (High_Motion flag) when high amount of motion is detected and sets the low motion flag (Low_Motion flag) when low amount of motion is detected.
[0094] At step 712, the change awareness unit (403) is activated by the low motion flag (Low_Motion flag). When active, the change awareness unit (403) detects changes in the input image.
[0095] At step 714, the change awareness unit (403) sets the high change flag (High_Change flag) when high amount of change is detected and sets the low change flag (Low_Change flag) when low amount of change is detected.
[0096] At step 716, the OR gate (404) receives the low dynamic range flag (Low_Dynamic_Range flag), the high motion flag (High_Motion flag), and the high change flag (High_Change flag) as inputs and generates the output signal. The digital to analog converter (405) receives as inputs the digital dynamic range data when the moderate signal is set (i.e., when neither the High_Dynamic_Range flag nor the Low_Dynamic_Range flag is set), the output signal of the OR gate (404), and the low change flag (Low_Change flag), and generates the mixer coefficient (a) based on the gain bias and the offset bias.
[0097] Referring now to Figure 8, a method of generating the second decision threshold is illustrated in accordance with an embodiment of the present invention.
[0098] At step 802, the second decision threshold circuit (302) generates the second decision threshold.
[0099] At step 804, the second decision threshold circuit (302) checks whether the number of elements of the second intermediate matrix data is beyond the predefined range. If YES, the second decision threshold circuit (302) increases the second decision threshold based on equations (2) and (3). If NO, step 808 is executed.
[00100] At step 808, the second decision threshold circuit (302) checks whether the number of elements of the second intermediate matrix data is below the predefined range. If YES, the second decision threshold circuit (302) decreases the second decision threshold based on equations (4) and (5).
[00101] In one embodiment, the present invention relates generally to the field of signal processing of sensor acquired signals, and more specifically to the processing, for enhancement, of signals from imaging sensors that are sensitive, though not exclusively, to the infrared range of electromagnetic radiation, wherein the imaging sensor is an array of elements called pixels, each of which gives an intensity level output based on its response to incident electromagnetic radiation.
[00102] Advantageously, the method mixes images with different tone mapping to give an enhanced quality image. The final image is generated from an input image by passing it through a mixer which is a function of several parameters, including the input image and the gain and level set for the image. Different tone mapping circuits are used to generate different images from the input image, and they are mixed in different proportions using the mixer.
[00103] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.

CLAIMS:
1. An image mixing system (100), comprising:
a local tone mapping unit (102) configured to generate a first image (Image 1) based on an input image;
a clipper and clamper mapping unit (103) configured to generate a second image (Image 2) based on the input image;
a global tone mapping unit (101) configured to generate a third image (Image 3) based on the input image;
a mixing unit (104) comprising:
a mixer coefficient generator (201) configured to generate a mixing coefficient (a) based on the input image,
a multiplier (210) configured to apply the mixing coefficient (a) to the first image (Image 1) to generate a Weighted Image 1,
a multiplier (212) configured to apply a complementary mixing coefficient (1 - a) to the third image (Image 3) to generate a Weighted Image 3, and
an adder (213) configured to sum the Weighted Image 1 and the Weighted Image 3 to generate an Image A;
a 2×1 multiplexer (203) having the Image A and the Image 2 (Image B) as inputs and a select line controlled by a decision-making circuit (202) to generate a fourth image (Image 4); and
a temporal smoothening unit (204) configured to temporally smoothen the fourth image (Image 4) to generate an output image.
2. The image mixing system (100) as claimed in claim 1, wherein the clipper and clamper mapping unit (103) comprises:
a first occurrence counter (301) configured to:
receive the input image and a first decision threshold, wherein the input image corresponds to an input matrix data, and
generate a first intermediate matrix data having elements from the input matrix data occurring more than the first decision threshold;
a second occurrence counter (303) configured to:
receive the first intermediate matrix data, and
generate a second intermediate matrix data having elements from the first intermediate matrix data occurring more than the second decision threshold;
an energy center generator (304) configured to calculate an energy center of the input matrix data;
a subtractor (310) configured to clamp the input matrix data with the energy center to generate a third intermediate matrix data; and
a clipper circuit (305) configured to limit a range of the third intermediate matrix data to a predetermined range to generate the second image (Image 2).
3. The image mixing system (100) as claimed in claim 1, wherein the mixer coefficient generator (201) includes:
a dynamic range unit (401) configured to:
calculate a digital dynamic range data of the input image,
set a low dynamic range flag (Low_Dynamic_Range flag) when the digital dynamic range data is below a low threshold,
set a high dynamic range flag (High_Dynamic_Range flag) when the digital dynamic range data is above a high threshold, and
generate a moderate signal when neither the High_Dynamic_Range flag nor the Low_Dynamic_Range flag is set and one or more most significant bits of the digital dynamic range data are directly sent to a digital to analog convertor (405);
a motion awareness unit (402) activated by the high dynamic range flag (High_Dynamic_Range flag), said motion awareness unit (402) configured to:
detect motion in the input image,
set a high motion flag (High_Motion flag) when high amount of motion is detected, and
set a low motion flag (Low_Motion flag) when low amount of motion is detected;
a change awareness unit (403) activated by the low motion flag (Low_Motion flag), said change awareness unit (403) configured to:
detect changes in the input image,
set a high change flag (High_Change flag) when high amount of change is detected, and
set a low change flag (Low_Change flag) when low amount of change is detected;
an OR gate (404) having the low dynamic range flag (Low_Dynamic_Range flag), the high motion flag (High_Motion flag), and the high change flag (High_Change flag) as inputs and configured to generate an output signal; and
a digital to analog converter (405) configured to receive the bits of the digital dynamic range data when moderate signal is set, the output signal of the OR gate (404), and the low change flag (Low_Change flag) as inputs and configured to generate the mixer coefficient (a) based on a gain bias and an offset bias and the digital input data consisting of output of the OR gate (404), the one or more most significant bits of the digital dynamic range data and Low_Change flag.
4. The image mixing system (100) as claimed in claim 2, wherein the energy center is an average of a minimum element and a maximum element of the second intermediate matrix data.
5. The image mixing system (100) as claimed in claim 2, wherein the energy center is updated based on at least two elements from the second intermediate matrix data.
6. The image mixing system (100) as claimed in claim 2, wherein the clipper and clamper mapping unit (103) includes a second decision threshold circuit (302) configured to:
generate the second decision threshold,
increase the second decision threshold when number of elements of the second intermediate matrix data is beyond a predefined range:
energy_threshold_new = energy_threshold_old + Δ ………………………. (2)
where
Δ = (γ × (total_valid_occurrence – d)) + s …………………….……………. (3)
decrease the second decision threshold when number of elements of the second intermediate matrix data is below a predefined range:
energy_threshold_new = energy_threshold_old – θ …………………… (4)
where
θ = (F × (total_valid_occurrence – ρ)) + ß ……………...………………... (5)
wherein Δ, γ, d, s, θ, F, ρ and ß are whole numbers in (2)-(5).
7. The image mixing system (100) as claimed in claim 2, wherein the decision-making circuit (202) includes a Schmitt trigger circuit having the digital dynamic range data of the input image data as input and the output controlling the select line of the 2×1 multiplexer (203).
8. A method for mixing images in an image mixing system (100), said method comprising:
generating, by a local tone mapping circuit (102), a first image (Image 1) based on an input image;
generating, by a clipper and clamper mapping unit (103), a second image (Image 2) based on the input image;
generating, by a global tone mapping unit (101), a third image (Image 3) based on the input image;
generating, by a mixer coefficient generator (201), a mixing coefficient (a) based on the input image;
applying, by a multiplier (210), the mixing coefficient (a) to the first image (Image 1) and to generate a Weighted Image 1;
applying, by a multiplier (212), a complementary mixing coefficient (1 - a) to the third image (Image 3) to generate a Weighted Image 3;
adding, by an adder (213), the Weighted Image 1 and the Weighted Image 3 to generate an Image A;
multiplexing, by a 2×1 multiplexer (203), the Image A with the Image 2 (Image B) to generate a fourth image (Image 4), said 2×1 multiplexer (203) having a select line controlled by a decision-making circuit (202); and
temporally smoothening, by a temporal smoothening unit (204), the fourth image (Image 4) to generate an output image.
9. The method as claimed in claim 8, comprising:
receiving, by a first occurrence counter (301), the input image and a first decision threshold, wherein the input image corresponds to an input matrix data;
generating, by the first occurrence counter (301), a first intermediate matrix data having elements from the input matrix data occurring more than the first decision threshold;
receiving, by a second occurrence counter (303), the first intermediate matrix data;
generating, by the second occurrence counter (303), a second intermediate matrix data having elements from the first intermediate matrix data occurring more than the second decision threshold;
calculating, by an energy center generator (304), an energy center of the input matrix data;
clamping, by a subtractor (310), the input matrix data with the energy center to generate a third intermediate matrix data; and
limiting, by a clipper circuit (305), a range of the third intermediate matrix data to a predetermined range to generate the second image (Image 2).
10. The method as claimed in claim 8, comprising:
calculating, by a dynamic range unit (401), a digital dynamic range data of the input image;
setting, by the dynamic range unit (401), a low dynamic range flag (Low_Dynamic_Range flag) when the digital dynamic range data is below a low threshold;
setting, by the dynamic range unit (401), a high dynamic range flag (High_Dynamic_Range flag) when the digital dynamic range data is above a high threshold;
generating, by the dynamic range unit (401), a moderate signal when neither High_Dynamic_Range flag nor Low_Dynamic_Range flag is set;
detecting, by a motion awareness unit (402), motion in the input image, wherein the motion awareness unit (402) is activated by the high dynamic range flag (High_Dynamic_Range flag);
setting, by the motion awareness unit (402), a high motion flag (High_Motion flag) when high amount of motion is detected;
setting, by the motion awareness unit (402), a low motion flag (Low_Motion flag) when low amount of motion is detected;
detecting, by a change awareness unit (403), changes in the input image, wherein the change awareness unit (403) is activated by the low motion flag (Low_Motion flag);
setting, by the change awareness unit (403), a high change flag (High_Change flag) when high amount of change is detected;
setting, by the change awareness unit (403), a low change flag (Low_Change flag) when low amount of change is detected; and
generating, by a digital to analog converter (405), the mixer coefficient (a) based on one or more most significant bits of the digital dynamic range data, the output signal of the OR gate (404), the low change flag (Low_Change flag), a gain bias, and an offset bias.
11. The method as claimed in claim 9, wherein the energy center is an average of a minimum element and a maximum element of the second intermediate matrix data.
12. The method as claimed in claim 9, wherein the energy center is updated based on at least two elements from the second intermediate matrix data.
13. The method as claimed in claim 9, comprising:
generating, by a second decision threshold circuit (302), the second decision threshold;
increasing, by the second decision threshold circuit (302), the second decision threshold when number of elements of the second intermediate matrix data is beyond a predefined range:
energy_threshold_new = energy_threshold_old + Δ ……………………. (2)
where
Δ = (γ × (total_valid_occurrence – d)) + s …………………….…………. (3)
decreasing, by the second decision threshold circuit (302), the second decision threshold when number of elements of the second intermediate matrix data is below a predefined range:
energy_threshold_new = energy_threshold_old – θ …………………… (4)
where
θ = (F × (total_valid_occurrence – ρ)) + ß ……………...………………... (5)
wherein Δ, γ, d, s, θ, F, ρ and ß are whole numbers in (2)-(5).
14. The method as claimed in claim 9, wherein the decision-making circuit (202) includes a Schmitt trigger circuit having the digital dynamic range data of the input image data as input and the output controlling the select line of the 2×1 multiplexer (203).

Documents

Application Documents

# Name Date
1 202041012880-FORM 1 [24-03-2020(online)].pdf 2020-03-24
2 202041012880-PROVISIONAL SPECIFICATION [24-03-2020(online)].pdf 2020-03-24
3 202041012880-DRAWINGS [24-03-2020(online)].pdf 2020-03-24
4 202041012880-FORM-26 [21-06-2020(online)].pdf 2020-06-21
5 202041012880-FORM-26 [24-06-2020(online)].pdf 2020-06-24
6 202041012880-FORM 3 [27-07-2020(online)].pdf 2020-07-27
7 202041012880-COMPLETE SPECIFICATION [27-07-2020(online)].pdf 2020-07-27
8 202041012880-DRAWING [27-07-2020(online)].pdf 2020-07-27
9 202041012880-ENDORSEMENT BY INVENTORS [27-07-2020(online)].pdf 2020-07-27
10 202041012880-CORRESPONDENCE-OTHERS [27-07-2020(online)].pdf 2020-07-27
11 202041012880-Proof of Right [18-09-2020(online)].pdf 2020-09-18
12 202041012880-Correspondence_Form1_28-09-2020.pdf 2020-09-28
13 202041012880-FORM 18 [27-06-2022(online)].pdf 2022-06-27
14 202041012880-FER.pdf 2022-11-01
15 202041012880-FER_SER_REPLY [27-04-2023(online)].pdf 2023-04-27
16 202041012880-ABSTRACT [27-04-2023(online)].pdf 2023-04-27
17 202041012880-CLAIMS [27-04-2023(online)].pdf 2023-04-27
18 202041012880-COMPLETE SPECIFICATION [27-04-2023(online)].pdf 2023-04-27
19 202041012880-OTHERS [27-04-2023(online)].pdf 2023-04-27
20 202041012880-AMENDED DOCUMENTS [07-10-2024(online)].pdf 2024-10-07
21 202041012880-FORM 13 [07-10-2024(online)].pdf 2024-10-07
22 202041012880-POA [07-10-2024(online)].pdf 2024-10-07
23 202041012880-Response to office action [01-11-2024(online)].pdf 2024-11-01
24 202041012880-US(14)-HearingNotice-(HearingDate-18-03-2025).pdf 2025-02-21
25 202041012880-Correspondence to notify the Controller [13-03-2025(online)].pdf 2025-03-13
26 202041012880-Written submissions and relevant documents [29-03-2025(online)].pdf 2025-03-29
27 202041012880-IntimationOfGrant21-04-2025.pdf 2025-04-21
28 202041012880-PatentCertificate21-04-2025.pdf 2025-04-21

Search Strategy

1 SearchHistory(10)E_01-11-2022.pdf

ERegister / Renewals

3rd: 14 Jul 2025

From 24/03/2022 - To 24/03/2023

4th: 14 Jul 2025

From 24/03/2023 - To 24/03/2024

5th: 14 Jul 2025

From 24/03/2024 - To 24/03/2025

6th: 14 Jul 2025

From 24/03/2025 - To 24/03/2026