Abstract: A method and system for facilitating real time detection of linear infrastructural objects in aerial imagery is disclosed. During edge-based segmentation, a seed point pair is marked along the feature boundary by working along the middle vertical cut for each video frame. Along the cut portion, seed point pairs are extracted by pairing up pixels which have a high intensity gradient magnitude, a near-similar gradient direction, and a high pixel intensity in HSV space. An image is scanned along the gradient direction from each seed point. The local maximum of a pixel (x, y) is estimated in the neighborhood window including the orientation, i.e., the 3 pixels {(y, x + 1), (y + 1, x + 1), (y - 1, x + 1)}. After comparison, seed points that are local maxima in both the left and right 3-neighborhoods are located and simultaneously considered for growing the boundary in the next iteration.
WE CLAIM:
1. A computer implemented method for facilitating real time detection of at least one linear infrastructural object by aerial imaging, said method comprising:
applying a background suppression technique to a HSV image, wherein said HSV image is first converted to a grey scale image and a binary image;
implementing a mean shift filtering technique to find a peak of a confidence map by using a color histogram of said HSV image;
performing a gradient image generation for a plurality of edges of said HSV image using a Sobel function;
extracting a seed point pair along a middle cut portion of a linear feature of the HSV image to identify one or more boundaries of the seed point pair;
initiating a contour growing approach to detect said one or more boundaries of the linear feature; and
removing one or more false positives by using a rigidity feature, the rigidity feature being equivalent to the total sum of gradient orientations.
2. The computer implemented method as claimed in claim 1, further comprising constructing a three dimensional feature space to locate a plurality of high fidelity conductors of said at least one infrastructural object.
3. The computer implemented method as claimed in claim 1, wherein a plurality of linear features are detected by tracking a boundary of said infrastructural object in linear space, by applying said background suppression technique.
4. The computer implemented method as claimed in claim 1, wherein the seed point pair is constructed by using a second set of candidate seed points, by a side-facing camera.
5. The computer implemented method as claimed in claim 1, wherein the seed point pair is detected from a bottom horizontal line, by a front facing camera.
6. A computer implemented system for facilitating real time detection of at least one infrastructural object by aerial imaging, said system comprising:
a memory storing instructions;
a hardware processor coupled to said memory, wherein said hardware processor is configured by said instructions to:
apply a background suppression technique, wherein a HSV image is first converted to a grey scale image and a binary image;
implement a mean shift filtering technique to find a peak of a confidence map by using a color histogram of said HSV image;
perform a gradient image generation for a plurality of edges of the HSV image using a Sobel function;
extract a seed point pair along a middle cut portion of a linear feature of the HSV image to identify one or more boundaries of the seed point pair; and
initiate a contour growing approach to detect said one or more boundaries of the linear feature.
7. The computer implemented system as claimed in claim 6, wherein said hardware processor is configured to construct a three dimensional feature space to locate a plurality of high fidelity conductors of said at least one infrastructural object.
8. The computer implemented system as claimed in claim 6, wherein said hardware processor is configured to detect a plurality of linear features by tracking a boundary of said infrastructural object in linear space, by said background suppression technique.
9. The computer implemented system as claimed in claim 6, wherein said hardware processor is configured to construct the seed point pair by using a set of candidate seed points.
10. The computer implemented system as claimed in claim 6, wherein said real time detection of at least one infrastructural object is enabled by using at least one of a front facing camera, a rear facing camera, and a side facing camera.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR FACILITATING REAL TIME DETECTION OF LINEAR INFRASTRUCTURAL OBJECTS BY AERIAL IMAGERY
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the embodiments and the manner in which they are to be performed.
TECHNICAL FIELD
The embodiments herein generally relate to visual inspection systems, and more particularly, to visual inspection systems for facilitating real time detection of linear infrastructural objects by aerial imaging.
BACKGROUND
Inspection, aerial photography, and monitoring of the state and growth of the protective area under power lines and other similar structures are getting increasingly popular and widespread, as conventional means are either expensive and risky (using piloted aircraft) or extremely time consuming and unable to capture all the necessary angles (inspection from the ground). Power line monitoring involves visual inspection of towers and their high voltage insulators, as well as the cables.
Using piloted aircraft such as expensive helicopters also requires a team of flying experts using certain dedicated equipment, flying close to ground level at about 10-50 knots along the physical power route. Visual inspection of linear infrastructural objects is often limited due to certain flying regulations and other public safety concerns. In the recent past, updated techniques (following considerable improvements in flight design techniques and the development of light-weight cameras) for monitoring power lines using Unmanned Aerial Vehicles (UAVs) have been emerging. Benefits of using UAVs are being realized, particularly with respect to costs, noise, and risk mitigation; however, many drawbacks still persist as a result of limited UAV functionality. Drawbacks include over-reliance on operators to observe and carry out the mission, where the UAV is just used as a flying camera.
Most UAV systems lack the ability to simultaneously stream real time, encrypted observation data to multiple ground stations. Although usage of UAVs for maintenance inspections, especially of long linear infrastructure, is rapidly emerging as a popular option, the amount of video or image data acquired is typically huge, due to the vastness of the infrastructure. Thus, automated analysis of such images and videos is being increasingly sought. Such analysis necessitates detection of elongated foreground objects, commonly referred to as linear feature detection.
Conventional techniques so far for implementing automated linear feature detection in outdoor scenes have used Hough transform for clustering of lines. The Hough transform is a technique which can be used to isolate features of a particular shape within an image. Since Hough transform requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc. A generalized Hough transform can be employed in applications where a simple analytic description of a feature(s) is not possible.
However, the Hough transform has a computational complexity of the order of O(n³), which is considerably high. Additionally, power lines run for long distances over varying terrain, and hence the background imagery can vary from trees and patches of greenery to different flat spaces, along with other common objects such as homes and roads. Detection of long linear infrastructural objects is challenging but necessary.
A couple of conventional techniques use Canny edge detection as the core step; however, Canny edge detection leads to a higher computational complexity that is not suitable for near-real-time processing. Other prior techniques either tackle linear feature detection in the medical imaging domain (where the background imagery is plain and simple, so such techniques are not directly applicable to outdoor scene analysis) or are not robust enough to reject false positives.
In general, existing techniques have assumed the background imagery to be near-stochastic, and accordingly focused primarily on background modeling and subtraction. This generalist assumption simplifies the process design to a rudimentary level, as in a realistic scenario outdoor video background is not stochastic (i.e., having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely), and hence multiple ways of background suppression, rather than (complete) subtraction, need to be evolved.
SUMMARY
In view of the foregoing, an embodiment herein provides a computer implemented method for facilitating real time detection of at least one infrastructural object by aerial imaging. The method comprises applying a background suppression technique, wherein a Hue-Saturation-Value (HSV) image is first converted to a grey scale image and a binary image; implementing a mean shift filtering technique to find a peak of a confidence map by using a color histogram of the HSV image; performing a gradient image generation for a plurality of edges of the HSV image using a Sobel function; extracting a seed point pair along a middle cut portion of a linear feature of the HSV image to identify one or more boundaries of the seed point pair; and initiating a contour growing approach to detect the one or more boundaries of the linear feature.
In one aspect, an embodiment herein provides a computer implemented system for facilitating real time detection of at least one infrastructural object by aerial imaging. The system comprises a memory for storing instructions and a hardware processor coupled to the memory, wherein the hardware processor is configured by the instructions to apply a background suppression technique, wherein a HSV image is first converted to a grey scale image and a binary image. The hardware processor is further configured to implement a mean shift filtering technique to find a peak of a confidence map by using a color histogram of the HSV image; perform a gradient image generation for a plurality of edges of the HSV image using a Sobel function; extract a seed point pair along a middle cut portion of a linear feature of the HSV image to identify one or more boundaries of the seed point pair; and initiate a contour growing approach to detect the one or more boundaries of the linear feature.
The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.
It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
FIG. 1 illustrates a functional block diagram of an image processing system 100, according to the embodiments as disclosed herein;
FIG. 2a and FIG. 2b illustrate exemplary images depicting rigidity distribution for true and false detection, according to the embodiments as disclosed herein; and
FIG. 3a and FIG. 3b illustrate exemplary images depicting detection of linear features in sample frames, according to the embodiments as disclosed herein.
DETAILED DESCRIPTION OF EMBODIMENTS
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
Referring now to the drawings, and more particularly to FIGS. 1 through 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary system and/or method.
Throughout the specification, the terms "seed point pair growing" and "contour growing" are used interchangeably.
FIG. 1 illustrates a functional block diagram of an image processing system 100, according to the embodiments as disclosed herein.
As depicted in FIG. 1, the image processing system 100 comprises stages such as background suppression, image extraction, seed point pair selection, false positive removal, and a seed point pair growing stage. It is to be noted that the images (or frames) captured by a camera (not shown in the figure) are in Red, Green, Blue (RGB) format. The RGB color space is too correlated to allow for a sparse representation of the image data. Sparseness is required since any real time transmission of images/video to a ground tower over a bandwidth constrained link or network necessitates compression of data.
In an embodiment various long linear infrastructures, when imaged aerially exhibit certain characteristics that are similar such as:
Objects such as railway lines, being rigid metal-based structures, are close to a straight thick line (with a small curvature in the aerial image).
During monitoring of infrastructures using a down facing or side facing camera, UAVs can fly at a height but in relative proximity to a power grid or an object to be monitored. Hence there is no occlusion/obstruction over a plurality of linear image objects.
The corresponding linear artifacts in the aerial image exhibit high pixel intensity, since they are typically constructed using metals/alloys which reflect brightly.
High gradients are primarily found at least along the contours of the long linear infrastructure.
Using the above mentioned characteristics, a three-dimensional (3D) feature space is constructed to locate linear features, as required in maintenance inspections, with high fidelity. The 3D feature space can include two popular sparse representation spaces in image processing, namely the Hue-Saturation-Value (HSV) color space and the Hue-Saturation-Lightness (HSL) color space, for consideration. Hue is considered the dominant color component at the location of interest, Saturation is considered the color intensity (or purity), and Value translates to the luminance or simple brightness.
In an embodiment, the dominant backgrounds among the power line images are the sky and greenery. Both sky and greenery have specific HSV values that do not interfere with the power-line values. Applying the background suppression technique at block 102, the image processing system 100 first converts the HSV image to a grey scale image and then to a binary image using a suitable threshold. The color based suppression reduces greenery and sky, but some interference from roads and houses can be present at times. Certain erosion operators are used to remove such vestigial background. After assessing certain tabulated results, it can be observed clearly that the background suppression technique applied at block 102 is able to significantly reduce background clutter.
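By way of a non-limiting illustration, the background suppression stage described above may be sketched as follows in plain Python; the grey threshold and the 3x3 erosion window are illustrative assumptions rather than values prescribed by the embodiment.

```python
# Sketch of background suppression: HSV -> grey (V channel),
# grey -> binary via an assumed fixed threshold, then a 3x3 erosion
# to remove small vestigial background blobs.

def hsv_to_grey(hsv):
    """Use the V (brightness) channel of an HSV image as the grey level."""
    return [[v for (h, s, v) in row] for row in hsv]

def binarize(grey, threshold):
    """Threshold a grey image into a binary mask."""
    return [[1 if g >= threshold else 0 for g in row] for row in grey]

def erode(binary):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is
    set, which removes small isolated background responses."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(binary[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out
```

In practice, the threshold would be chosen so that the specific HSV values of sky and greenery fall below it while the bright metallic conductors remain above it.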
The image processing system 100 goes through multiple (e.g., five) stages of detection for linear structures, especially power lines, in aerial images. In order to detect linear infrastructural features, a mean shift filtering technique is applied by the image processing system 100 to find the peak of a confidence map using the color histogram of the image, and gradient image generation is performed to retain all the linear infrastructural features. Further, based on the position of the camera, the medial strip is considered and seed point pairs for contour growing are selected based on parameters such as gradient magnitude and pixel value as features.
Additionally, detection of contours for infrastructural linear features in image space is performed using a contour growing approach and finally false positives are removed using a rigidity feature, as represented by the total sum of gradient orientations. A detailed step by step approach is as explained below:
Mean shift filtering:
Due to the vastness and complexity of the background imagery, existing edge detection techniques show a number of edges in the background, along with those in the foreground. In order to reduce the predominant background clutter and simultaneously accentuate the foreground, a plurality of images must be filtered. Mean shift filtering is a data clustering process commonly used in computer vision and image processing. For each pixel of an image (having a spatial location and a particular color), the set of neighboring pixels (within a spatial radius and a defined color distance) is determined. For this set of neighbor pixels, the new spatial center (spatial mean) and the new color mean values are calculated. These calculated mean values serve as the new center for the next iteration of mean shift filtering. The procedure is iterated until the spatial and color means stop changing. At the end of the iterations, the final mean color is assigned to the starting position of that iteration.
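A non-limiting sketch of the per-pixel mean shift iteration described above is given below, shown on a one-dimensional grey image for brevity; the spatial radius, color distance, and convergence tolerance are illustrative assumptions.

```python
# Sketch of mean shift filtering on a 1-D grey image: each starting
# position iterates its (spatial mean, colour mean) over neighbours
# within a spatial radius and colour distance until both means settle.

def mean_shift_pixel(image, x0, spatial_radius, colour_distance, max_iter=20):
    """Return the colour mean that the iteration starting at x0 converges to."""
    x, c = float(x0), float(image[x0])
    for _ in range(max_iter):
        # Neighbours within both the spatial radius and the colour distance.
        neighbours = [(i, image[i]) for i in range(len(image))
                      if abs(i - x) <= spatial_radius
                      and abs(image[i] - c) <= colour_distance]
        nx = sum(i for i, _ in neighbours) / len(neighbours)
        nc = sum(v for _, v in neighbours) / len(neighbours)
        if abs(nx - x) < 1e-6 and abs(nc - c) < 1e-6:
            break  # spatial and colour means stopped changing
        x, c = nx, nc
    return nc

def mean_shift_filter(image, spatial_radius=2, colour_distance=30):
    """Assign each starting position the colour mean it converges to."""
    return [mean_shift_pixel(image, i, spatial_radius, colour_distance)
            for i in range(len(image))]
```

Because the colour distance excludes pixels from other clusters, foreground and background regions converge to separate flat colour values, which reduces spurious background edges in the subsequent gradient stage.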
Gradient Vector Image Generation:
At block 104, gradient image extraction is performed. Gradient image extraction facilitates extracting richer information from images and helps obtain comparatively more discriminative power than standard histogram based methods. The image gradients are sparsely described in terms of magnitude and orientation. After the process of background suppression, the gradient magnitudes for a plurality of edges of the segmented images are estimated as a first feature, using a Sobel function. The Sobel function is predominantly used in image processing and computer vision, particularly within edge detection algorithms and creates an image which emphasizes edges and transitions of the captured image.
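The Sobel-based gradient extraction described above may be sketched as follows; a deployed system would typically use an optimised library routine, but the standard Sobel kernels and the magnitude/orientation computation are shown here in plain Python.

```python
# Sketch of Sobel gradient extraction: each interior pixel gets a
# gradient magnitude and orientation, describing the image gradients
# sparsely in terms of strength and direction.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(grey):
    """Return (magnitude, orientation) images for the interior pixels."""
    h, w = len(grey), len(grey[0])
    mag = [[0.0] * w for _ in range(h)]
    ori = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * grey[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * grey[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = math.hypot(gx, gy)   # edge strength
            ori[y][x] = math.atan2(gy, gx)   # edge direction (radians)
    return mag, ori
```

The resulting magnitude image emphasizes edges and transitions, and the orientation image supplies the gradient directions used later for seed pairing and rigidity analysis.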
Context based potential seed point selection:
The image processing system 100 detects a plurality of linear features by tracking a prominent boundary of such objects in the gradient image. Since the plurality of objects are linear, the boundary contours of such objects are open to an extent, and these boundary contours occur in pairs of approximately parallel lines. Due to the perspective projection in images via the (side facing) camera, the parallel lines are thickest near the middle of the frame.
In order to extract an open contour with two or more boundaries, a method for growing one or more boundaries is implemented. In an embodiment, a seed point pair is extracted and selected at block 106 along a prominent middle vertical cut portion, per instance of linear feature, and then the first two features of the prominent middle vertical cut portion are used to identify the boundaries of the seed point pairs. Further, a construction of a first set of seed point pairs is performed via construction of a second set of candidate seed points. Consider the set of gradient magnitudes of the pixels along the medial vertical line as Gw/2, and the set of values from HSV space as Vw/2, for a w×h-sized image. Every seed point which is a part of any pair can be represented by s(g, v), where g and v are the appropriate gradient and HSV value respectively. Conversely, let g(s) and v(s) represent the gradient and HSV value of a seed point. Also, let L(s) represent the pixel location of a seed point, and v(l) : l ∈ L be the value at a pixel location. First, the set of candidate seed points C is prepared by taking high gradient pixels on the medial vertical line as follows.
where mean(Gw/2) and var(Gw/2) are the mean and variance of the gradient magnitudes, respectively. From this candidate set C, the set of paired seed points, S, is constructed as follows.
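Since the candidate-selection and pairing equations themselves are not reproduced in the text, the following sketch uses an assumed threshold (gradient above the mean plus one standard deviation of the medial-line gradients Gw/2) and a hypothetical value-closeness pairing rule; it illustrates the shape of the computation rather than the claimed formulas.

```python
# Sketch of candidate seed selection along the medial vertical line.
# The mean-plus-one-standard-deviation threshold and the value-closeness
# pairing rule are illustrative assumptions.
import math

def candidate_seed_points(medial_gradients):
    """Keep medial-line rows whose gradient magnitude exceeds the mean
    plus one standard deviation (assumed threshold)."""
    n = len(medial_gradients)
    mean = sum(medial_gradients) / n
    var = sum((g - mean) ** 2 for g in medial_gradients) / n
    return [y for y, g in enumerate(medial_gradients)
            if g > mean + math.sqrt(var)]

def pair_seed_points(candidates, values, max_value_gap=10):
    """Pair consecutive candidates whose HSV values are close, taking
    them as the two boundaries of one linear feature (hypothetical rule)."""
    return [(a, b) for a, b in zip(candidates, candidates[1:])
            if abs(values[a] - values[b]) <= max_value_gap]
```

The high-gradient, high-value pairing reflects the claimed criteria (high intensity gradient magnitude and high pixel intensity in HSV space) on the middle vertical cut.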
In case the position of the camera is front facing, the plurality of seed point pairs are detected from the bottom horizontal line. For example, a video is captured as the visual band image data and a test site on the outskirts of a city is chosen. A fixed-wing mini-UAV is flown at a speed of around 35 km/hr and positioned such that it flies overhead of the power grid. This implies that the camera tilt is also front facing, which minimizes the amount of occlusion among power lines.
Contour Growing Approach:
Once the seed points are selected at block 106, an iterative contour growing approach is initiated to detect the boundaries of linear features. The contour growing approach at block 108 is derived from the non-maximum suppression method for thinning of boundaries detected by one or more Sobel operators. An image is scanned along the direction of the gradient from each seed point. The local maximum of a pixel (x, y) is estimated in the current neighborhood window including the orientation, i.e., the 3 pixels {(y, x + 1), (y + 1, x + 1), (y - 1, x + 1)}, notionally represented by O0,1, O1,1 and O-1,1.
At this point, the number 1 represents the gradient direction. After comparison, new seed points that are local maxima in both the left and right directions of the three neighborhoods are located and are then simultaneously considered for growing the boundary in the next iteration. If a seed point pair is represented at a particular location as s(x, y), the location of the seed as L(s), and the (second) feature value of the seed as V(s), then:
s(x,y)={L,V}
Thereby, it is easy to conjure a bijective mapping and its inverse between at least one L-V pair (location-feature value of the pair), which can be denoted as L → V and V → L respectively. If N(S) is considered as the next location of a boundary seed point that is computed in an iteration, then
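The iterative contour growing described above may be sketched as follows; the stopping condition (gradient falling below a small floor) is an illustrative assumption, while the three-pixel right-hand neighborhood matches the window {(y, x + 1), (y + 1, x + 1), (y - 1, x + 1)} given in the text.

```python
# Sketch of the iterative contour-growing step: from each seed, follow
# the locally maximal gradient through the 3-pixel neighbourhood in the
# growing direction, one column per iteration.

def grow_boundary(mag, seed, min_gradient=1.0):
    """Grow a boundary from a seed (y, x) through the gradient-magnitude
    image, choosing the local maximum among the 3 right-hand neighbours."""
    h, w = len(mag), len(mag[0])
    y, x = seed
    boundary = [(y, x)]
    while x + 1 < w:
        # Neighbourhood {(y, x+1), (y+1, x+1), (y-1, x+1)} clipped to image.
        candidates = [(mag[y + dy][x + 1], y + dy)
                      for dy in (0, 1, -1) if 0 <= y + dy < h]
        best_mag, best_y = max(candidates)
        if best_mag < min_gradient:
            break  # assumed stopping rule: boundary evidence has faded
        y, x = best_y, x + 1
        boundary.append((y, x))
    return boundary
```

In the embodiment this growth is performed simultaneously for both seeds of a pair, so the two approximately parallel boundaries of a linear feature are traced together.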
Rigidity based removal of false positives:
This is the final stage in facilitating real time detection of linear infrastructural objects in aerial imagery, as depicted at block 110, wherein removal of false positives using a rigidity feature is undertaken, as represented by the total sum of gradient orientations. Overall, the embodiments herein minimize missed detections, or false negatives, since a minimum tracking of power lines just below the camera is assured in all frames. Additionally, the mis-classification of random linear features as a power line segment is minimized since long linear stripes in image intensity are unique to power lines.
FIG. 2a and FIG. 2b illustrate exemplary images depicting rigidity distribution for true and false detection, according to the embodiments as disclosed herein. In essence, all linear infrastructure objects are thick metallic objects and hence possess a limited degree of elasticity. Due to the high rigidity of the linear infrastructure objects, the curvature of the objects manifests itself as a slow gradual change in gradient orientation across a sequence of boundary pixels, thus limiting the pixels to a narrow band of orientation values. As a byproduct, the range of orientations is also limited and somewhat influenced by the position of the camera as well as the distance of the object from the optical center of the camera (as depicted in FIG. 2a).
Further, from another point of view most of the false positives occurring in the heterogeneous background exhibit a certain degree of randomness in gradient orientations. Unlike rigid infrastructures, such false positives do not possess a spatial correlation and banding of gradient orientations in a narrow band, but instead possess a point spread function (PSF). The point spread function (PSF) describes the response of an imaging system to a point source or point object. A more general term for the PSF is a system's impulse response, the PSF being the impulse response of a focused optical system. The PSF in many contexts can be assumed as the extended blob in an image that represents an unresolved object.
Since these false positives possess a spread out function, this observation can be used to weed out false positives in the final stage (fifth stage). It is to be noted that it is hard to parameterize band shapes and sizes, which in turn defines rigidity. This process of parameterizing is considered hard because mechanical bends (sags in power lines, slow turns in railway lines) can be purposefully introduced in the infrastructure, and the amount of such bends differ in various conditions.
However, in order to compensate for the dependency of band sizes on different camera poses and distances, a threshold on the metric of total orientation sums is used to identify and remove false positives. The sum of all orientations along each of the grown boundary pair sequences is defined as the total orientation sum for that object. The threshold for the total sum is considered as 90% of the maximum of the total gradient sums for all the boundary pairs identified after the fourth stage. This works because the maximum total sum for a true positive will be dominated by spatially correlated angles clustered around a mean, while the total sum for a false positive is expected to be a sum of random angles as per some spread function, and thus has a lower value.
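The rigidity-based filtering described above may be sketched as follows, scoring each grown boundary by its total orientation sum and retaining only those within 90% of the maximum score; the representation of a boundary as a plain sequence of orientation values is an illustrative simplification.

```python
# Sketch of rigidity-based false positive removal: a true positive's
# orientations are spatially correlated and cluster around a mean, so
# its total orientation sum dominates; false positives sum random
# angles and fall below 90% of the maximum total.

def filter_by_rigidity(boundaries, threshold_ratio=0.9):
    """boundaries: list of per-boundary orientation sequences (radians).
    Returns the boundaries whose total orientation sum meets the
    threshold of threshold_ratio times the maximum total."""
    totals = [sum(orientations) for orientations in boundaries]
    threshold = threshold_ratio * max(totals)
    return [b for b, t in zip(boundaries, totals) if t >= threshold]
```

The same score also rejects a boundary whose tracking strayed mid-way, since the strayed portion contributes near-random orientations and drags the total below the threshold.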
Often, this feature is also capable of removing a linear feature whose boundary tracking strays away from the actual boundary during iterations until an advanced stage (up to almost complete tracking) is reached. This occurs because once the boundary tracking has strayed from the actual boundary, the gradient orientation of the remaining part of the tracked boundary becomes random in nature, and hence the total sum falls below the expected threshold in most cases.
FIG. 3a and FIG. 3b illustrate exemplary images depicting detection of linear features in sample frames, according to the embodiments as disclosed herein. In an embodiment, the image processing system 100 can also be enabled for semi-automated processing of fault detection if required. For example, consider that a flight is flown across a long power grid corridor and a video/image of the power grid is captured by the camera; the image processing system 100 then closely monitors for any faults or anomalies across the power grid. The image processing system 100 can detect such anomalies based on certain fault models or reference data pre-loaded in the database. As and when certain characteristics of the pre-loaded data match with any detected anomaly, the image processing system 100 classifies it as a fault or an anomaly.
Overall, the image processing system 100 goes through five stages of processing, wherein the first stage applies mean shift filtering to find the peak of a confidence map using a color histogram of an image. The second stage generates a gradient image that retains its linear features. In the third stage, a medial strip is considered based on the position of the camera, and seed point pairs are selected for contour growing based on gradient magnitude and the pixel values as features. The fourth stage pertains to detection of contours of infrastructural linear features in image space using the contour growing approach. Finally, false positives are removed using the rigidity feature, as represented by a total sum of gradient orientations.
It is, however to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/computer system in accordance with the embodiments herein. The system herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via system bus to various devices such as a random access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
The system further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
The preceding description has been presented with reference to various embodiments. Persons having ordinary skill in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope.
It is, however, to be understood that the scope of the protection extends to such a program and, in addition, to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method when the program runs on a server, a mobile device, or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including, e.g., any kind of computer such as a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be, e.g., hardware means such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
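The abstract's seed-point test keeps only pixels that are local maxima of gradient magnitude in both the right 3-neighborhood {(y, x+1), (y+1, x+1), (y-1, x+1)} and the mirrored left 3-neighborhood. The following is a minimal sketch of that check on a toy gradient-magnitude grid; `is_directional_local_max` is a hypothetical helper name, and the grid values are illustrative, not the patented implementation.

```python
def is_directional_local_max(g, y, x):
    """Return True if pixel (y, x) strictly dominates both the right
    3-neighborhood {(y, x+1), (y+1, x+1), (y-1, x+1)} and the mirrored
    left 3-neighborhood, using gradient magnitudes in the 2-D list g."""
    h, w = len(g), len(g[0])
    if not (1 <= y < h - 1 and 1 <= x < w - 1):
        return False  # neighborhood would fall outside the frame
    right = [g[y][x + 1], g[y + 1][x + 1], g[y - 1][x + 1]]
    left = [g[y][x - 1], g[y + 1][x - 1], g[y - 1][x - 1]]
    return all(g[y][x] > v for v in right + left)

# Toy gradient-magnitude map with a vertical ridge in column 2,
# mimicking the strong edge of a thin linear feature.
grid = [
    [1, 2, 9, 2, 1],
    [1, 3, 9, 3, 1],
    [1, 2, 8, 2, 1],
]

# Candidate seed points: pixels passing the two-sided local-maxima test.
seeds = [(y, x) for y in range(3) for x in range(5)
         if is_directional_local_max(grid, y, x)]
```

In this toy frame only the ridge pixel (1, 2) survives the test; in the described method such pixels along the middle vertical cut would then be paired by gradient direction and intensity before contour growing.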
| # | Name | Date |
|---|---|---|
| 1 | 2711-MUM-2015-IntimationOfGrant22-07-2022.pdf | 2022-07-22 |
| 2 | 2711-MUM-2015-PatentCertificate22-07-2022.pdf | 2022-07-22 |
| 3 | 2711-MUM-2015-Written submissions and relevant documents [22-07-2022(online)].pdf | 2022-07-22 |
| 4 | 2711-MUM-2015-Correspondence to notify the Controller [12-07-2022(online)].pdf | 2022-07-12 |
| 5 | 2711-MUM-2015-Response to office action [11-07-2022(online)].pdf | 2022-07-11 |
| 6 | 2711-MUM-2015-US(14)-ExtendedHearingNotice-(HearingDate-14-07-2022).pdf | 2022-07-04 |
| 7 | 2711-MUM-2015-Correspondence to notify the Controller [30-06-2022(online)].pdf | 2022-06-30 |
| 8 | 2711-MUM-2015-FORM-26 [30-06-2022(online)]-1.pdf | 2022-06-30 |
| 9 | 2711-MUM-2015-FORM-26 [30-06-2022(online)].pdf | 2022-06-30 |
| 10 | 2711-MUM-2015-US(14)-HearingNotice-(HearingDate-11-07-2022).pdf | 2022-06-23 |
| 11 | 2711-MUM-2015-CLAIMS [27-07-2020(online)].pdf | 2020-07-27 |
| 12 | 2711-MUM-2015-COMPLETE SPECIFICATION [27-07-2020(online)].pdf | 2020-07-27 |
| 13 | 2711-MUM-2015-FER_SER_REPLY [27-07-2020(online)].pdf | 2020-07-27 |
| 14 | 2711-MUM-2015-OTHERS [27-07-2020(online)].pdf | 2020-07-27 |
| 15 | 2711-MUM-2015-FER.pdf | 2020-01-27 |
| 16 | Request For Certified Copy-Online.pdf | 2018-08-11 |
| 17 | Form 3.pdf | 2018-08-11 |
| 18 | Form 2.pdf | 2018-08-11 |
| 19 | Figure of Abstract.jpg | 2018-08-11 |
| 20 | Drawings.pdf | 2018-08-11 |
| 21 | ABSTRACT1.jpg | 2018-08-11 |
| 22 | 2711-MUM-2015-Power of Attorney-201015.pdf | 2018-08-11 |
| 23 | 2711-MUM-2015-Form 1-190815.pdf | 2018-08-11 |
| 24 | 2711-MUM-2015-Correspondence-201015.pdf | 2018-08-11 |
| 25 | 2711-MUM-2015-Correspondence-190815.pdf | 2018-08-11 |
| 26 | Form 3 [20-08-2016(online)].pdf | 2016-08-20 |
| 27 | REQUEST FOR CERTIFIED COPY [18-07-2016(online)].pdf | 2016-07-18 |
| 28 | SearchStrategy_26-12-2019.pdf | |