Abstract: A method and system for orienting a disoriented image is provided. Also, a method and system for training a plurality of Gaussian mixture models (GMMs) to orient the disoriented image is provided. The method includes obtaining a plurality of color and texture features from the disoriented image. The method also includes selecting a plurality of discriminative features from the color and texture features. Further, the method includes calculating probability of each of the GMMs orienting the disoriented image. Each of the GMMs represents one of a plurality of rotation classes. Each of the rotation classes represents a rotation angle, wherein the rotation angle is a multiple of a right angle. The system includes an electronic device. The electronic device includes an embedded platform. The embedded platform includes a processor for processing the disoriented image.
METHOD AND SYSTEM FOR ORIENTING A DISORIENTED IMAGE
FIELD
[0001] The present disclosure relates generally to the field of image processing.
More particularly, the present disclosure relates to a method and a system for orienting
a disoriented image.
BACKGROUND
[0002] Image processing can be applied for detecting orientation angle of an image.
For example, the orientation can be detected by analysis of content of the image. The
image is oriented to another angle for preferred orientation on a display or better
organization in a storage device.
[0003] However, existing techniques for detecting orientation are not efficient if the
image is of low contrast or homogeneous texture. Also, techniques exist to orient the
image to an angle equal to the nearest multiple of 90 degrees from the orientation angle.
However, existing techniques do not efficiently orient the image to rotation angles equal
to multiples of a right angle, for example, 0, 90, 180, and 270 degrees.
[0004] In light of the foregoing discussion, there is a need for a method and system
for orienting a disoriented image to reduce computational complexity and increase
accuracy of orientation.
SUMMARY
[0005] Embodiments of the present disclosure described herein provide a method
and system for orienting a disoriented image.
[0006] An example of a method for training a plurality of Gaussian mixture models
(GMMs) for orienting a disoriented image includes obtaining a plurality of color and
texture features from one or more sample images. The method also includes identifying
a plurality of discriminative features from the color and texture features. The
discriminative features comprise one or more feature vectors. Further, the method
includes constructing the GMMs for a plurality of rotation classes based on the one or
more feature vectors. Each of the rotation classes represents a rotation angle. The
rotation angle is a multiple of a right angle. Furthermore, the method includes extracting
a plurality of parameters of the GMMs for each of the rotation classes and porting the
parameters on an embedded platform in an electronic device.
[0007] An example of a method for orienting a disoriented image includes obtaining
a plurality of color and texture features from the disoriented image. The method also
includes selecting a plurality of discriminative features from the color and texture
features. Further, the method includes calculating probability of each of a plurality of
GMMs orienting the disoriented image. Each of the GMMs represents one of a plurality
of rotation classes. Each of the rotation classes represents a rotation angle, wherein the
rotation angle is a multiple of a right angle.
[0008] An example of a system for orienting a disoriented image includes an
electronic device. The electronic device includes an embedded platform. The embedded
platform includes a processor for processing the disoriented image. The processor
includes an extraction unit for extracting a plurality of color and texture features from the
disoriented image. The processor also includes a selection unit for selecting a plurality
of discriminative features from the color and texture features. Further, the processor
includes a probability module for calculating probability of each of a plurality of GMMs
orienting the disoriented image based on the discriminative features. Furthermore, the
processor includes a determination unit for determining one of the GMMs with a highest
value of probability. Also, the processor includes an orientation unit for orienting the
disoriented image using the determined GMM.
BRIEF DESCRIPTION OF FIGURES
[0009] In the accompanying figures, similar reference numerals may refer to identical
or functionally similar elements. These reference numerals are used in the detailed
description to illustrate various embodiments and to explain various aspects and
advantages of the present disclosure.
[0010] FIG. 1 is a block diagram of a system for orienting a disoriented image, in
accordance with which various embodiments can be implemented;
[0011] FIG. 2 is a flow chart illustrating a method for training a plurality of Gaussian
mixture models (GMMs) for orienting a disoriented image, in accordance with one
embodiment;
[0012] FIG. 3 is a flow chart illustrating a method for orienting a disoriented image,
in accordance with one embodiment; and
[0013] FIG. 4 is an exemplary illustration of orienting a disoriented image, in
accordance with one embodiment.
[0014] Persons skilled in the art will appreciate that elements in the figures are
illustrated for simplicity and clarity and may have not been drawn to scale. For example,
the dimensions of some of the elements in the figures may be exaggerated relative to
other elements to help to improve understanding of various embodiments of the present
disclosure.
DETAILED DESCRIPTION
[0015] It should be observed that method steps and system components have been
represented by conventional symbols in the figures, showing only specific details that
are relevant for an understanding of the present disclosure. Further, details that may be
readily apparent to a person ordinarily skilled in the art may not have been disclosed. In
the present disclosure, relational terms such as first and second, and the like, may be
used to distinguish one entity from another entity, without necessarily implying any
actual relationship or order between such entities.
[0016] Embodiments of the present disclosure described herein provide a method
and system for orienting a disoriented image.
[0017] FIG. 1 is a block diagram of a system 100 for orienting a disoriented image.
The system 100 includes an electronic device 105. Examples of the electronic device
105 include, but are not limited to, a computer, a laptop, a digital camera, a mobile
device, a digital album, a digital television, a hand held device, a personal digital
assistant (PDA), a camcorder, and a video player.
[0018] The electronic device 105 includes a bus 110 for communicating information,
and a processor 115 coupled with the bus 110 for processing information. The
electronic device 105 also includes a memory 120, for example a random access
memory (RAM) coupled to the bus 110 for storing information required by the processor
115. The memory 120 can be used for storing temporary information required by the
processor 115. The electronic device 105 further includes a read only memory (ROM)
125 coupled to the bus 110 for storing static information required by the processor 115.
A storage unit 130, for example a magnetic disk, hard disk or optical disk, can be
provided and coupled to the bus 110 for storing information.
[0019] The electronic device 105 can be coupled via the bus 110 to a display 135,
for example a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying
information. An input device 140, including various keys, is coupled to the bus 110 for
communicating information to the processor 115. In some embodiments, cursor control
145, for example a mouse, a trackball, a joystick, or cursor direction keys for
communicating information to the processor 115 and for controlling cursor movement on
the display 135 can also be present.
[0020] In some embodiments, the display 135 may perform the functions of the
input device 140. For example, consider a touch screen display operable to receive
haptic input. The user can then use a stylus to select one or more portions on the visual
image displayed on the touch screen device.
[0021] In some embodiments, the steps of the present disclosure are performed by
the electronic device 105 using the processor 115. The information can be read into the
memory 120 from a machine-readable medium, for example the storage unit 130. In
alternative embodiments, hard-wired circuitry can be used in place of or in combination
with software instructions to implement various embodiments.
[0022] The term machine-readable medium can be defined as a medium providing
data to a machine to enable the machine to perform a specific function. The machine-readable
medium can be a storage medium. Storage media can include non-volatile
media and volatile media. The storage unit 130 can be a non-volatile medium. The
memory 120 can be a volatile medium. All such media must be tangible to enable the
instructions carried by the media to be detected by a physical mechanism that reads the
instructions into the machine.
[0023] Examples of the machine-readable medium include, but are not limited to, a
floppy disk, a flexible disk, a hard disk, a magnetic tape, a CD-ROM, an optical disk,
punch cards, paper tape, a RAM, a PROM, an EPROM, and a FLASH-EPROM.
[0024] The machine-readable medium can also include online links, download links,
and installation links providing the information to the processor 115.
[0025] The electronic device 105 also includes a communication interface 150
coupled to the bus 110 for enabling data communication. Examples of the
communication interface 150 include, but are not limited to, an integrated services
digital network (ISDN) card, a modem, a local area network (LAN) card, an infrared port,
a Bluetooth port, a zigbee port, and a wireless port.
[0026] In some embodiments, the processor 115 includes one or more processing
units for performing one or more functions of the processor 115. The processing units
are hardware circuitry performing specified functions. The processor 115 processes a
disoriented image that has to be oriented. The processor 115 includes an extraction unit
155 for extracting a plurality of color and texture features from the disoriented image.
The color and texture features are also known as low-level features. The processor 115
also includes a selection unit 160 for selecting multiple discriminative features from the
color and texture features. Further, the processor 115 includes a probability module 165
for calculating probability of each of a plurality of Gaussian mixture models (GMMs) orienting the
disoriented image based on the discriminative features.
[0027] Furthermore, the processor 115 includes a determination unit 170 for
determining one GMM from among the GMMs with a highest value of the probability.
Further, the processor also includes an orientation unit 175 for orienting the disoriented
image using the determined GMM.
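By way of illustration, the pipeline formed by the units above (extraction, selection, probability calculation, determination, and orientation) can be sketched in a few lines of Python. The feature extractor and the per-class scoring callables below are purely illustrative assumptions, not the claimed units; any trained per-class model could stand in for the scorers.

```python
import numpy as np

def extract_features(img):
    """Illustrative low-level features: mean intensity of the top and
    bottom halves plus a global contrast measure (an assumption, not
    the disclosed color/texture features)."""
    h = img.shape[0]
    top, bottom = img[: h // 2], img[h // 2 :]
    return np.array([top.mean(), bottom.mean(), img.std()])

def orient(img, class_scorers):
    """class_scorers maps each rotation angle (0/90/180/270) to a
    callable scoring the feature vector; the angle with the highest
    score is applied as quarter-turns (np.rot90 convention assumed)."""
    feats = extract_features(img)
    best_angle = max(class_scorers, key=lambda a: class_scorers[a](feats))
    return np.rot90(img, k=best_angle // 90)
```

A caller would supply one scorer per rotation class, for example the log-likelihood of that class's trained GMM.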
[0028] The storage unit 130 stores a codebook corresponding to the color values.
In some embodiments, the processor includes one or more modules to train the GMMs.
The modules include a Gaussian model construction unit for constructing the GMMs for
each of the rotation classes based on the one or more feature vectors and a parameter
extraction module for extracting multiple parameters of the GMMs for each of the
rotation classes. In some embodiments the embedded platform includes a storage
module for storing one or more parameters of the GMMs for each of the rotation
classes. Each of the rotation classes represents a rotation angle. The rotation angle is a
multiple of a right angle, for example, 0, 90, 180, and 270 degrees.
[0029] FIG. 2 is a flow chart illustrating a method for training a plurality of Gaussian
mixture models (GMMs) for orienting a disoriented image, in accordance with one
embodiment.
[0030] The method starts at step 205.
[0031] At step 210, a plurality of color and texture features are obtained from one or
more sample images. The color and texture features include low-level features. In some
embodiments, the color and texture features are obtained using information theoretic
feature selection. Each of the sample images belongs to one of the rotation classes
based on the rotation angle of each of the sample images. In some embodiments, the
texture features are obtained using one or more edge detection algorithms.
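The disclosure leaves the choice of edge detection algorithm open; a minimal sketch of one common option, a Sobel-based gradient-orientation histogram used as a texture descriptor, is shown below. The function name and the histogram design are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sobel_edge_features(gray, bins=8):
    """Texture descriptor: magnitude-weighted histogram of Sobel
    gradient orientations (a sketch of 'one or more edge detection
    algorithms', not the disclosed feature set)."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    # Cross-correlate each 3x3 window with the Sobel kernels.
    win = sliding_window_view(gray, (3, 3))
    gx = (win * kx).sum(axis=(-1, -2))
    gy = (win * ky).sum(axis=(-1, -2))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Because the histogram is orientation-sensitive, rotating an image by a quarter turn permutes its bins, which is what makes such features useful for rotation-class discrimination.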
[0032] At step 215, a plurality of discriminative features is identified from the color
and texture features. The discriminative features include one or more feature vectors. In
some embodiments, the discriminative features are identified using information theoretic
feature selection.
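Information theoretic feature selection can be sketched as ranking candidate features by their mutual information with the rotation-class label and keeping the top-ranked ones. The quantile-based discretization and the ranking criterion below are generic assumptions, not necessarily the criterion used in the disclosure.

```python
import numpy as np

def mutual_information(x_disc, y):
    """Empirical mutual information (in nats) between a discretized
    feature and class labels."""
    mi = 0.0
    for xv in np.unique(x_disc):
        for yv in np.unique(y):
            pxy = np.mean((x_disc == xv) & (y == yv))
            px, py = np.mean(x_disc == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def select_discriminative(features, labels, k=2):
    """Rank feature columns by mutual information with the label and
    return the indices of the top k (a generic sketch)."""
    scores = []
    for j in range(features.shape[1]):
        col = features[:, j]
        disc = np.digitize(col, np.quantile(col, [0.25, 0.5, 0.75]))
        scores.append(mutual_information(disc, labels))
    return np.argsort(scores)[::-1][:k]
```

A feature that varies with the rotation class scores high; a feature independent of the class scores near zero and is discarded.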
[0033] At step 220, the GMMs are constructed for a plurality of rotation classes
based on the one or more feature vectors. Each of the rotation classes represents a
rotation angle. The rotation angle is a multiple of a right angle, for example, 0, 90, 180,
and 270 degrees. A first probability is determined based on a feature vector from among
the feature vectors. A probability of selecting a GMM from among the GMMs is
determined based on the first probability for orienting each of the sample images. For a
given feature vector $x_t$, the probability of selecting the $i$-th Gaussian model is given by
equation (1).

[0034]
$$\Pr(i \mid x_t) = \frac{\omega_i \, p_i(x_t)}{\sum_{k=1}^{M} \omega_k \, p_k(x_t)} \qquad (1)$$

where $\mu_i$ is the mean of the $i$-th Gaussian mixture, $\Sigma_i$ is the covariance of the $i$-th Gaussian
mixture, $\omega_i$ is the a priori weight of the $i$-th Gaussian mixture, $M$ is the number of mixtures,
and where the first probability is given by equation (2).

[0035]
$$p_i(x_t) = \frac{1}{(2\pi)^{D/2}\,\lvert\Sigma_i\rvert^{1/2}} \exp\!\Big(-\tfrac{1}{2}(x_t-\mu_i)^{\top}\Sigma_i^{-1}(x_t-\mu_i)\Big) \qquad (2)$$

where $D$ is the dimension of the feature vector $x_t$.
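The two probabilities referenced in equations (1) and (2) can be exercised numerically; the NumPy sketch below (function names are assumptions) computes the multivariate Gaussian density of equation (2) and the component-selection probability of equation (1).

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """Equation (2): multivariate Gaussian density p_i(x_t)."""
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def component_posterior(x, mus, covs, weights):
    """Equation (1): probability of selecting each mixture component
    given feature vector x_t (weighted densities, normalized)."""
    likes = np.array([w * gaussian_density(x, m, c)
                      for w, m, c in zip(weights, mus, covs)])
    return likes / likes.sum()
```

For a feature vector near one component's mean, the posterior mass concentrates on that component, which is what drives the rotation-class decision.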
[0036] At step 225, a plurality of parameters of the GMMs is extracted for each of
the rotation classes, and the parameters are ported onto an embedded platform in an
electronic device. Examples of the electronic device 105 include, but are not limited to,
a computer, a laptop, a digital camera, a camcorder, a mobile device, a digital album, a
digital television, a hand held device, a personal digital assistant (PDA), and a video
player.
[0037] The method stops at 230.
[0038] FIG. 3 is a flow chart illustrating a method for orienting a disoriented image,
in accordance with one embodiment.
[0039] The method starts at step 305.
[0040] At step 310, a plurality of color and texture features are obtained from the
disoriented image. The color and texture features include low-level features. In some
embodiments, the color and texture features are obtained using information theoretic
feature selection. The disoriented image belongs to one of the rotation classes based
on the rotation angle of the disoriented image. In some embodiments, the texture
features are obtained using one or more edge detection algorithms.
[0041] At step 315, a plurality of discriminative features is selected from the color
and texture features. The discriminative features include one or more feature vectors. In
some embodiments, the discriminative features are selected using information theoretic
feature selection.
[0042] At step 320, a probability of each of the GMMs orienting the disoriented
image based on the discriminative features is calculated. Each of the GMMs represents
one of a plurality of rotation classes. Each of the rotation classes represents a rotation
angle. The rotation angle is a multiple of a right angle, for example, 0, 90, 180, and 270
degrees. The rotation class of the disoriented image is determined using a log likelihood
ratio. For example, the log likelihood ratio for classification of an audio sample into a
speech or a non-speech class is given by equation (3).

[0043]
$$\mathrm{LLR}(X) = \log\frac{P(X \mid \lambda_{sp})}{P(X \mid \lambda_{np})} \qquad (3)$$

[0044] where $P(X \mid \lambda_{sp})$ is the likelihood that a feature vector $X$ of the audio
sample is from the speech class and $P(X \mid \lambda_{np})$ is the likelihood that the feature
vector $X$ belongs to the non-speech class.
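Extended to the four rotation classes, choosing the class with the maximum log-likelihood is equivalent to requiring a positive log-likelihood ratio, as in equation (3), against every other class. The sketch below assumes a mapping from angles to per-class log-likelihoods; the interface is illustrative, not the disclosed one.

```python
def classify_rotation(log_likelihoods):
    """Pick the rotation class with the highest log-likelihood.
    The pairwise decision between the winner and any other class
    reduces to the sign of their log-likelihood ratio (cf. eq. (3)).
    Returns the winning angle and the LLR of the winner vs. each class."""
    best = max(log_likelihoods, key=log_likelihoods.get)
    llr = {a: log_likelihoods[best] - ll for a, ll in log_likelihoods.items()}
    return best, llr
```

Working in the log domain avoids underflow when the per-frame likelihoods of many feature vectors are multiplied together.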
[0045] The probability of each of the GMMs orienting the disoriented image is
determined using one or more parameters of the GMMs. A maximum probability of the
parameters is obtained using an iterative expectation maximization algorithm. To obtain
the maximum probability of the parameters, an initial model $\lambda$ is used to estimate a new
model $\bar{\lambda}$, such that the probability $P(X \mid \bar{\lambda})$ of the new model is greater than the
probability $P(X \mid \lambda)$ of the initial model. The new model $\bar{\lambda}$ then becomes the initial
model for the next iteration. The iterations are continued until a convergence threshold
is reached.
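The iterative expectation maximization loop described above can be sketched for a one-dimensional GMM as follows. The quantile-based initialization and the specific convergence test are illustrative assumptions; each iteration produces a model whose data likelihood is no lower than the previous one.

```python
import numpy as np

def em_gmm_1d(x, n_comp=2, iters=50, tol=1e-6):
    """EM for a 1-D Gaussian mixture (illustrative sketch).
    Stops when the log-likelihood improvement falls below tol."""
    # Spread initial means across the data range via quantiles.
    mu = np.quantile(x, (np.arange(n_comp) + 1) / (n_comp + 1))
    var = np.full(n_comp, x.var() + 1e-6)
    w = np.full(n_comp, 1.0 / n_comp)
    prev_ll = -np.inf
    for _ in range(iters):
        # E-step: per-point component responsibilities.
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        total = dens.sum(axis=1)
        ll = np.log(total).sum()
        resp = dens / total[:, None]
        # M-step: re-estimate means, variances, and weights.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        w = nk / len(x)
        if ll - prev_ll < tol:  # convergence threshold reached
            break
        prev_ll = ll
    return mu, var, w
```

The same loop generalizes to multivariate features by replacing the scalar density with the full covariance form of equation (2).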
[0046] At step 325, one of the GMMs with a highest value of the probability is
determined. The highest value of the probability indicates the most probable GMM for
orienting the disoriented image.
[0047] At step 330, the disoriented image is oriented using the determined GMM.
To orient the disoriented image, a plurality of responses of the determined GMM is
computed for the feature vectors of the disoriented image. One or more response
values are obtained from weighted combinations of the computed responses. A rotation
angle giving a maximum one of the responses is selected, and the disoriented image is
rotated by the selected rotation angle to obtain an oriented image.
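The final rotation step can be sketched as follows: pick the angle with the maximal response and undo it with quarter-turns. The response interface and the rotation sign convention (counter-clockwise `np.rot90`, so the inverse is `4 - k` quarter-turns) are assumptions for illustration.

```python
import numpy as np

def rotate_to_upright(img, responses):
    """Rotate the image back by the angle whose weighted GMM response
    is maximal. `responses` maps candidate angles (0/90/180/270) to
    response values; an illustrative interface, not the disclosed one."""
    angle = max(responses, key=responses.get)
    k = (4 - angle // 90) % 4  # quarter-turns that undo the detected angle
    return np.rot90(img, k=k)
```

For example, an image disoriented by one quarter-turn is restored exactly when the 90-degree class receives the highest response.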
[0048] The method stops at 335.
[0049] FIG. 4 is an exemplary illustration of orienting a disoriented image 405, in
accordance with one embodiment. A sub image 410 from the disoriented image is
subjected to feature recognition using information theoretic feature selection. The color
and texture features of the sub image 410 are extracted to obtain a second sub image
415. The color and texture features are extracted using an edge detection algorithm.
Multiple discriminative features are identified using the color and texture features. The
discriminative features are oriented using one or more Gaussian mixture models to
obtain an oriented sub image 420. The oriented sub image 420 is used to obtain an
oriented sub image 425. The oriented sub image 425 is used to obtain the oriented
image 430.
[0050] In the preceding specification, the present disclosure and its advantages
have been described with reference to specific embodiments. However, it will be
apparent to a person of ordinary skill in the art that various modifications and changes
can be made, without departing from the scope of the present disclosure, as set forth in
the claims below. Accordingly, the specification and figures are to be regarded as
illustrative examples of the present disclosure, rather than in a restrictive sense. All such
possible modifications are intended to be included within the scope of the present
disclosure.
I/We claim:
1. A method for training a plurality of Gaussian mixture models (GMMs) for orienting
a disoriented image, the method comprising:
obtaining a plurality of color and texture features from one or more sample
images;
identifying a plurality of discriminative features from the color and texture
features, wherein the discriminative features comprise one or more feature vectors;
constructing the GMMs for a plurality of rotation classes based on the one or
more feature vectors, wherein each of the rotation classes represents a rotation angle,
and wherein the rotation angle is a multiple of a right angle; and
extracting a plurality of parameters of the GMMs for each of the rotation classes
and porting the parameters on an embedded platform in an electronic device.
2. The method of claim 1, wherein each of the sample images belongs to one of the
rotation classes based on the rotation angle of each of the sample images.
3. The method of claim 1, wherein constructing the GMMs comprises:
determining a first probability based on a feature vector from among the one or
more feature vectors; and
determining probability of selecting a GMM from among the GMMs based on the
first probability for orienting each of the sample images.
4. The method of claim 1, wherein the color and texture features are obtained
based on information theoretic feature selection.
5. A method of orienting a disoriented image, the method comprising:
obtaining a plurality of color and texture features from the disoriented image;
selecting a plurality of discriminative features from the color and texture features;
calculating probability of each of a plurality of GMMs orienting the disoriented
image based on the discriminative features, wherein each of the GMMs
represents one of a plurality of rotation classes, wherein each of the rotation
classes represents a rotation angle, and wherein the rotation angle is a multiple
of a right angle;
determining one of the GMMs with a highest value of the probability; and
orienting the disoriented image using the determined GMM.
6. The method of claim 5, wherein the color and texture features are obtained
based on information theoretic feature selection.
7. A system for orienting a disoriented image, the system comprising:
an electronic device comprising:
an embedded platform comprising:
a processor for processing the disoriented image, the processor
comprising:
an extraction unit for extracting a plurality of color and texture
features from the disoriented image;
a selection unit for selecting a plurality of discriminative features
from the color and texture features;
a probability module for calculating probability of each of a plurality
of GMMs orienting the disoriented image based on the discriminative
features;
a determination unit for determining one of the GMMs with a
highest value of the probability; and
an orientation unit for orienting the disoriented image using the
determined GMM.
8. The system of claim 7, the electronic device further comprises:
a communication interface for receiving the disoriented image; and
a memory for storing information from the disoriented image.
9. The system of claim 7, wherein the processor further comprises:
a Gaussian model construction unit for constructing the GMMs for each of the
rotation classes based on the one or more feature vectors; and
a parameter extraction unit for extracting a plurality of parameters of the GMMs
for each of the rotation classes.
10. The system of claim 7, wherein the embedded platform further comprises:
a storage module for storing one or more parameters of the GMMs for each of
the rotation classes, wherein each of the rotation classes represents a rotation angle,
and wherein the rotation angle is a multiple of a right angle.