
Method And System For Dynamic Generation Of Image Hotspots

Abstract: Image hotspots are points in a picture that pop up a window when clicked. An image hotspot enables you to transform an image into an interactive or content exploration activity. In conventional approaches, hotspots are created manually by drawing a bounding box for each product present in the main image. The present disclosure provides a dynamic hotspot generation technique. The dynamic hotspot generation technique initially segments an input image into sub-images. Product images are identified from each sub-image using a novel color intensity based matching technique and bounding boxes are generated around the matching product images. Further, the product images identified from the sub-images are stitched back to the input image using a normalization technique. After generating center coordinates of each stitched-back bounding box, colliding bounding boxes are identified and a displacement is applied to the center coordinates to avoid collision. Finally, the generated image hotspots are displayed in the input image. [To be published with FIG. 2]


Patent Information

Application #
202321016976
Filing Date
14 March 2023
Publication Number
38/2024
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th floor, Nariman point, Mumbai 400021, Maharashtra, India

Inventors

1. AGASTIN, George Sureshkumar
Tata Consultancy Services Limited, 379 Thornall Street, 4th & 11th Floor, Edison, New Jersey 08837, USA
2. REDDY, Kondreddy Venkataramana
Tata Consultancy Services Limited, 379 Thornall Street, 4th & 11th Floor, Edison, New Jersey 08837, USA

Specification

Description: FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
METHOD AND SYSTEM FOR DYNAMIC GENERATION OF IMAGE HOTSPOTS

Applicant

Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

Preamble to the description:

The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to the field of image processing and, more particularly, to a method and system for dynamic generation of image hotspots.
BACKGROUND
[002] Image hotspots are points or areas in a picture that pop up a window when clicked. Image hotspots enable you to transform an image into an interactive or content exploration activity. For example, the popup can contain text and can be made highly interactive with sound, images, videos, a website, or a combination of all of these. Hotspots are mainly used in retail, the automobile industry, the fashion industry, apparel on e-commerce sites, and the like. For example, e-commerce sites often have attractive room scenes with all the products depicted in the image. One of the effective ways to attract customers and show them all the products associated with the room scene is by tagging all the products with hotspots. This boosts customer engagement, eases product discovery, and increases sales. However, tagging all product hotspots is a time-consuming and tedious process.
[003] In conventional approaches, hotspots are created manually by drawing a bounding box for each product present in the main image. Some conventional approaches can be used to customize the hotspot; however, the creation of hotspots is done manually. Further, there are no effective approaches to accurately identify products present in the bigger image.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for dynamic generation of image hotspots is provided. The method includes receiving, via one or more interfaces, an input image and a plurality of product images to be mapped with the input image, wherein the input image comprises a plurality of objects. Further, the method includes generating, by one or more hardware processors, a plurality of sub-images based on the input image using a segmentation technique, wherein the segmentation technique splits the input image into a predefined number of sub-images. Furthermore, the method includes generating, by the one or more hardware processors, a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates. Furthermore, the method includes identifying, by the one or more hardware processors, a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique. Furthermore, the method includes stitching, by the one or more hardware processors, the plurality of matching product images identified from the plurality of sub-images into the input image by: (i) computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique and (ii) generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique. Furthermore, the method includes computing, by the one or more hardware processors, a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates. Finally, the method includes generating, by the one or more hardware processors, a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.
[005] In another aspect, a system for dynamic generation of image hotspots is provided. The system includes at least one memory storing programmed instructions, one or more Input /Output (I/O) interfaces, and one or more hardware processors operatively coupled to the at least one memory, wherein the one or more hardware processors are configured by the programmed instructions to receive, via one or more interfaces, an input image and a plurality of product images to be mapped with the input image, wherein the input image comprises a plurality of objects. Further, the one or more hardware processors are configured by the programmed instructions to generate a plurality of sub-images based on the input image using a segmentation technique, wherein the segmentation technique splits the input image into a predefined number of sub-images. Furthermore, the one or more hardware processors are configured by the programmed instructions to generate a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates. Furthermore, the one or more hardware processors are configured by the programmed instructions to identify a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique. Furthermore, the one or more hardware processors are configured by the programmed instructions to stitch the plurality of matching product images identified from the plurality of sub-images into the input image by: (i) computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique and (ii) generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique. Furthermore, the one or more hardware processors are configured by the programmed instructions to compute a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates. Finally, the one or more hardware processors are configured by the programmed instructions to generate a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.
[006] In yet another aspect, a computer program product including a non-transitory computer-readable medium having embodied therein a computer program for dynamic generation of image hotspots is provided. The computer readable program, when executed on a computing device, causes the computing device to receive an input image and a plurality of product images to be mapped with the input image, wherein the input image comprises a plurality of objects. Further, the computer readable program, when executed on a computing device, causes the computing device to generate a plurality of sub-images based on the input image using a segmentation technique, wherein the segmentation technique splits the input image into a predefined number of sub-images. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to generate a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to identify a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to stitch the plurality of matching product images identified from the plurality of sub-images into the input image by: (i) computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique and (ii) generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to compute a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates. Finally, the computer readable program, when executed on a computing device, causes the computing device to generate a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.
[007] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS

[008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[009] FIG. 1 is a functional block diagram of a system for dynamic generation of image hotspots, in accordance with some embodiments of the present disclosure.
[0010] FIG. 2 illustrates a functional architecture of the system of FIG. 1, for dynamic generation of image hotspots, in accordance with some embodiments of the present disclosure.
[0011] FIGS. 3A and 3B (collectively referred to as FIG. 3) are an exemplary flow diagram illustrating a processor implemented method 300 for dynamic generation of image hotspots implemented by the system of FIG. 1, according to some embodiments of the present disclosure.
[0012] FIGS. 4A and 4B illustrate sub-image generation for the processor implemented method for dynamic generation of image hotspots implemented by the system of FIG. 1, according to some embodiments of the present disclosure.
[0013] FIGS. 5A and 5B illustrate hotspot displacement in colliding bounding boxes for the processor implemented method for dynamic generation of image hotspots implemented by the system of FIG. 1, according to some embodiments of the present disclosure.
[0014] FIG. 6 illustrates generated dynamic hotspots in the input image for the processor implemented method for dynamic generation of image hotspots implemented by the system of FIG. 1 according to some embodiments of the present disclosure.


DETAILED DESCRIPTION OF EMBODIMENTS
[0015] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments.
[0016] Image hotspots are points or areas in a picture that pop up a window when clicked. Image hotspots enable you to transform an image into an interactive or content exploration activity. For example, the popup can contain text and can be made highly interactive with sound, images, videos, a website, or a combination of all of these. Hotspots are mainly used in retail, the automobile industry, the fashion industry, apparel on e-commerce sites, and the like. For example, e-commerce sites often have attractive room scenes with all the products depicted in the image. One of the effective ways to attract customers and show them all the products associated with the room scene is by tagging all the products with hotspots. This boosts customer engagement, eases product discovery, and increases sales. However, tagging all product hotspots is a time-consuming and tedious process. In conventional approaches, hotspots are created manually by drawing a bounding box for each product present in the main image. Some conventional approaches can be used to customize the hotspot; however, there are no effective approaches to accurately identify products present in the bigger image.
[0017] To overcome the challenges in the conventional approaches, the present disclosure provides a dynamic hotspot generation technique. The dynamic hotspot generation technique initially segments an input image into sub-images. Product images are identified from each sub-image using a color intensity based matching technique and bounding boxes are generated around the matching product images. Further, the product images identified from the sub-images are stitched back to the input image using a normalization technique. After generating center coordinates of each stitched back bounding box, colliding bounding boxes are identified and displacement applied to the center coordinates to avoid collision. Finally, the generated image hotspots are displayed in the input image. The dynamic image hotspot generation reduces many hours of manual efforts. This present disclosure can be used for multiple industries such as home improvement, fashion industry, automotive industry, safety industry and the like. The pixel level color intensity based matching helps in obtaining image flipping and image rotation free accurate matching of product images.
[0018] Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[0019] FIG. 1 is a functional block diagram of a system 100 for dynamic generation of image hotspots, in accordance with some embodiments of the present disclosure. The system 100 includes or is otherwise in communication with hardware processors 102, at least one memory such as a memory 104, and an I/O interface 112. The hardware processors 102, memory 104, and the Input /Output (I/O) interface 112 may be coupled by a system bus such as a system bus 108 or a similar mechanism. In an embodiment, the hardware processors 102 can be one or more hardware processors.
[0020] The I/O interface 112 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 112 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a printer and the like. Further, the I/O interface 112 may enable the system 100 to communicate with other devices, such as web servers, and external databases.
[0021] The I/O interface 112 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For the purpose, the I/O interface 112 may include one or more ports for connecting several computing systems with one another or to another server computer. The I/O interface 112 may include one or more ports for connecting several devices to one another or to another server.
[0022] The one or more hardware processors 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, node machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 102 is configured to fetch and execute computer-readable instructions stored in the memory 104.
[0023] The memory 104 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 104 includes a plurality of modules 106. The memory 104 also includes a data repository (or repository) 110 for storing data processed, received, and generated by the plurality of modules 106.
[0024] The plurality of modules 106 include programs or coded instructions that supplement applications or functions performed by the system 100 for dynamic generation of image hotspots. The plurality of modules 106, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 106 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 106 can be implemented by hardware, by computer-readable instructions executed by the one or more hardware processors 102, or by a combination thereof. The plurality of modules 106 can include various sub-modules (not shown). The plurality of modules 106 may include computer-readable instructions that supplement applications or functions performed by the system 100 for creating dynamic image hotspots. In an embodiment, the modules 106 include a sub-images generation module (shown in FIG. 2), a first bounding box generation module (shown in FIG. 2), a matching product images identification module (shown in FIG. 2), an image stitching module (shown in FIG. 2), a center coordinate computation module (shown in FIG. 2) and a dynamic hotspot generation module (shown in FIG. 2). In an embodiment, FIG. 2 illustrates a functional architecture of the system of FIG. 1, for dynamic generation of image hotspots, in accordance with some embodiments of the present disclosure.
[0025] The data repository (or repository) 110 may include a plurality of abstracted pieces of code for refinement and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s) 106.
[0026] Although the data repository 110 is shown internal to the system 100, it will be noted that, in alternate embodiments, the data repository 110 can also be implemented external to the system 100, where the data repository 110 may be stored within a database (repository 110) communicatively coupled to the system 100. The data contained within such external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). Working of the components of the system 100 is explained with reference to the method steps depicted in FIG. 3 through FIG. 6.
[0027] FIG. 3 is an exemplary flow diagram illustrating a method 300 for dynamic generation of image hotspots implemented by the system of FIG. 1, according to some embodiments of the present disclosure. In an embodiment, the system 100 includes one or more data storage devices or the memory 104 operatively coupled to the one or more hardware processor(s) 102 and is configured to store instructions for execution of steps of the method 300 by the one or more hardware processors 102. The steps of the method 300 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1 and the steps of flow diagram as depicted in FIG. 3. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300, or an alternative method. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0028] At step 302 of the method 300, the one or more hardware processors 102 are configured by the programmed instructions to receive an input image and a plurality of product images to be mapped with the input image. For example, the input image can be an image of a room environment, or it can be an image of an apparel, or it can be an image of a car, and the like. The input image includes a plurality of objects.
[0029] At step 304 of the method 300, the sub-images generation module 202 executed by the one or more hardware processors 102 is configured by the programmed instructions to generate a plurality of sub-images based on the input image using a segmentation technique. The segmentation technique is a customized technique which splits the input image into a predefined number of sub-images. For example, the input image can be divided into 5 blocks as shown in FIG. 4A. Now referring to FIG. 4A, the blocks B1, B2, B3 and B4 are of equal size. The block B1 is the bottom left dashed line area, B2 is the top left dashed line area, B3 is the top right dashed line area and B4 is the bottom right dashed line area. The block B5 (which overlaps with B1, B2, B3 and B4) is a comparatively bigger block than the other blocks and is shown between solid lines. In an embodiment, the segmentation is performed as follows. Now referring to FIG. 4B, initially, the input image 402 is divided vertically into two halves by width (for example, a Left-Image and a Right-Image shown in 404). The Left-Image and Right-Image 404 are rotated clockwise by 90 degrees and then each image is divided into two halves by width as shown in 406. Each half returns two equal images, a total of 4 images (B1, B2, B3, B4). To obtain the block B5, the input image is cropped from the middle using the values (0, h//4, w, h-h//4).
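The five-block split described above can be sketched in Python as follows. This is a minimal illustration rather than the claimed implementation: it assumes Pillow for image handling, crops the four equal quadrants directly (which yields the same blocks B1-B4 as the rotate-and-split procedure of FIG. 4B), and uses the crop values (0, h//4, w, h-h//4) given above for block B5. All function and variable names are illustrative.
Illustrative sketch (Python):
from PIL import Image  # assumption: Pillow is used only for illustration

def segment_into_five_blocks(input_image: Image.Image) -> dict:
    """Split the input image into the five overlapping blocks of FIG. 4A."""
    w, h = input_image.size
    return {
        "B1": input_image.crop((0, h // 2, w // 2, h)),      # bottom-left quadrant
        "B2": input_image.crop((0, 0, w // 2, h // 2)),      # top-left quadrant
        "B3": input_image.crop((w // 2, 0, w, h // 2)),      # top-right quadrant
        "B4": input_image.crop((w // 2, h // 2, w, h)),      # bottom-right quadrant
        # B5: the input image cropped from the middle using (0, h//4, w, h - h//4)
        "B5": input_image.crop((0, h // 4, w, h - h // 4)),
    }

# Illustrative usage:
# sub_images = segment_into_five_blocks(Image.open("room_scene.jpg"))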
[0030] In another embodiment, the image is segmented into 9 sub-images as explained below: The first block (sub-image) is obtained by “Left 50% of width of original image & Top 50% of height of original image”. The second block is obtained by “Right 50% of width of original image & Top 50% of height of original image”. The third block is obtained by “Left 50% of width of original image & Bottom 50% of height of original image”. The fourth block is obtained by “Right 50% of width of original image & Bottom 50% of height of original image”. The fifth block is obtained by “50% height, 100% width (0, 25% of height, width, 75% of height)”. The sixth block is obtained by “25% height, 100% width (0, 0, width, 25% of height)”. The seventh block is obtained by “25% height, 100% width (0, 25% of height, width, 50% of height)”. The eighth block is obtained by “25% height, 100% width (0, 50% of height, width, 75% of height)”. The ninth block is obtained by “25% height, 100% width (0, 75% of height, width, 100% of height)”.
[0031] At step 306 of the method 300, the first bounding box generation module 204 executed by the one or more hardware processors 102 is configured by the programmed instructions to generate a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates (for example, left top (x,y), right top (x,y), left bottom (x,y) and right bottom (x,y)).
[0032] At step 308 of the method 300, the matching product images identification module 206 executed by the one or more hardware processors 102 is configured by the programmed instructions to identify a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique.
[0033] In an embodiment, the method of identifying the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using the color intensity based image matching technique includes receiving the plurality of objects associated with each of the plurality of sub-images. Further, a 1-Dimensional (1D) color intensity array is generated corresponding to each of the plurality of objects associated with each of the plurality of sub-images using a color intensity value generation technique. A color intensity value is assigned to each pixel based on a comparison among the red, green and blue color values of the pixel. Further, the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images are identified based on a corresponding 1D color intensity array using a Mean Square Deviation (MSD) technique. For example, the mean square deviation between the 1D color intensity array of each object and the 1D color intensity array of each product image is computed. The images having an MSD less than a predefined threshold are identified as matching product images.
[0034] In an embodiment, the method of generating the 1-Dimensional (1D) color intensity array corresponding to each of the plurality of objects is performed using the following conditions: If R=G=B => 0; If R>G,B => 1; If B>R,G => 2; If G>R,B => 3; If R=G > B => 4; If B=G > R => 5; and If R=B > G => 6.
[0035] For example, a 2D RGB color value matrix (5x5) is given in Table I and the corresponding 1D color intensity array is shown in Table II. Now referring to Table I and Table II, the RGB color value of the first pixel is (5,5,5), which satisfies the first condition (If R=G=B => 0). Hence, the corresponding color intensity value in the 1D color intensity array is ‘0’. Similarly, 25 entries are made in the 1D color intensity array based on the RGB color values of each pixel and the corresponding conditions. This pixel level color intensity based matching helps in obtaining image flipping and image rotation free accurate matching.
Table I – 2D RGB color value matrix
5,5,5 4,3,9 6,6,3 5,3,5 1,2,2
4,6,99 44,67,88 34,25,78 34,56,45 78,89,98
23,67,54 23,24,21 34,26,72 3,56,45 78,8,98
4,6,99 44,67,88 34,25,78 34,56,45 78,89,98
4,6,99 44,67,88 34,25,78 34,56,45 78,98,98

Table II – 1D color intensity array
0 1 4 6 5 2 2 3 ……. 5
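The condition mapping of paragraph [0034] and the MSD comparison of paragraph [0033] can be illustrated with the following Python sketch. It is a hedged illustration only: NumPy, the row-major flattening order of the 2D pixel matrix, the resizing of the object crop and product image to a common resolution before comparison, and the threshold value are all assumptions not stated in the specification.
Illustrative sketch (Python):
import numpy as np  # assumption: NumPy is used only for illustration

def color_intensity_value(r: int, g: int, b: int) -> int:
    """Map one RGB pixel to a color intensity value per the conditions in [0034]."""
    if r == g == b:
        return 0
    if r > g and r > b:
        return 1  # R strictly greatest
    if b > r and b > g:
        return 2  # B strictly greatest
    if g > r and g > b:
        return 3  # G strictly greatest
    if r == g and r > b:
        return 4
    if b == g and b > r:
        return 5
    if r == b and r > g:
        return 6
    return 0  # unreachable for well-formed RGB triples

def to_1d_color_intensity_array(rgb_pixels: np.ndarray) -> np.ndarray:
    """Flatten an H x W x 3 RGB array into a 1D color intensity array (row-major)."""
    flat = rgb_pixels.reshape(-1, 3)
    return np.array([color_intensity_value(int(r), int(g), int(b)) for r, g, b in flat])

def is_matching(object_array: np.ndarray, product_array: np.ndarray,
                threshold: float = 1.0) -> bool:
    """Mean Square Deviation between two equal-length 1D color intensity arrays.

    Both arrays are assumed to come from images resized to the same resolution;
    the threshold is an illustrative value, not one given in the specification."""
    msd = float(np.mean((object_array.astype(float) - product_array.astype(float)) ** 2))
    return msd < threshold

# e.g. the first pixel of Table I, (5, 5, 5), maps to 0 because R = G = B.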

[0036] At step 310 of the method 300, the image stitching module 208 executed by the one or more hardware processors 102 is configured by the programmed instructions to stitch the plurality of matching product images identified from the plurality of sub-images into the input image by (i) computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique and (ii) generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique.
[0037] In an embodiment, the normalization formulae for each of the plurality of sub-images (blocks) are given in Table III.
Table III – Normalization formulae (the same formula is applied to each of the four pixel points of the first bounding box: top left, top right, right bottom and left bottom)
Block B1: (normalized_x * width, normalized_y * height + height)
Block B2: (normalized_x * width, normalized_y * height)
Block B3: (normalized_x * width + width, normalized_y * height + height)
Block B4: (normalized_x * width + width, normalized_y * height)
Block B5: (normalized_x * width, normalized_y * height + height//2)
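A hedged sketch of this stitch-back normalization is given below. It expresses the general rule that Table III instantiates: a corner point normalized to [0, 1] within a sub-image is scaled by the sub-image dimensions and shifted by that sub-image's offset inside the input image. Treating width and height as the dimensions of the corresponding sub-image, the B1 row of Table III corresponds to an offset of (0, height); the function and parameter names below are illustrative, not from the specification.
Illustrative sketch (Python):
from typing import List, Tuple

Point = Tuple[float, float]

def to_input_image_coords(normalized_x: float, normalized_y: float,
                          sub_width: int, sub_height: int,
                          offset_x: int, offset_y: int) -> Point:
    """Map one normalized corner point of a first bounding box into the
    coordinate system of the input image (scale by sub-image size, add offset)."""
    return normalized_x * sub_width + offset_x, normalized_y * sub_height + offset_y

def stitch_bounding_box(first_box_corners: List[Point],
                        sub_width: int, sub_height: int,
                        offset_x: int, offset_y: int) -> List[Point]:
    """Apply the same mapping to all four corner points (top left, top right,
    right bottom, left bottom) to obtain the second set of bounding box coordinates."""
    return [to_input_image_coords(nx, ny, sub_width, sub_height, offset_x, offset_y)
            for nx, ny in first_box_corners]

# Illustrative use for block B1 of Table III (offset taken as (0, height), an assumption):
# second_box = stitch_bounding_box(first_box, sub_width, sub_height, 0, sub_height)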
[0038] At step 312 of the method 300, the center coordinate computation module 210 executed by the one or more hardware processors 102 is configured by the programmed instructions to compute a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates. For example, the center coordinate is computed using the following formulae.
XCen = (Bottom Left.normalized_x + Top Right.normalized_x)/2
YCen = (Bottom Left.normalized_y + Top Right.normalized_y)/2
[0039] At step 314 of the method 300, the dynamic hotspot generation module 212 executed by the one or more hardware processors 102 is configured by the programmed instructions to generate a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.
[0040] In an embodiment, the method of generating the dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using the collision free dynamic hotspot generation technique includes the following steps. Initially, the second bounding box associated with each of the plurality of matching product images and the corresponding center coordinates are received as input. Further, a plurality of colliding second bounding boxes are identified from among a plurality of second bounding boxes associated with the input image by computing a Euclidean distance between center coordinates of each of the plurality of second bounding boxes. The center coordinates with the Euclidean distance less than a predefined distance threshold are identified as colliding bounding boxes. Further, a plurality of collision free hotspots are obtained by rearranging the center coordinate associated with each of the plurality of colliding second bounding boxes based on a plurality of predefined displacement coordinates as shown in FIGS. 5A and 5B. Finally, the dynamic hotspot for each of the plurality of matching product images are generated based on the corresponding center coordinates and the plurality of collision free hotspots. The dynamic hotspot corresponding to each of the plurality of matching product images are displayed in the input image as shown in FIG. 6.
[0041] Now referring to FIG. 5A, the center coordinates 502A and 504A of the two pillows are very close, and hence the center points are adjusted to 502B and 504B as shown in FIG. 5B. Now referring to FIG. 6, 602 is the hotspot for the window, 604 is the hotspot for one pillow, 606 is the hotspot for the cupboard, 608 is the hotspot for the teapoy, 610 is the hotspot for the sofa and 612 is the hotspot for the black pillow.
[0042] In an embodiment, the pseudocode for adjusting the colliding center points is described below:
Pseudocode 1: Collision avoidance
1) Iterate through all the objects in the final-hashmap.
2) Grab the center point of the object and compute the distance to the center points of the remaining objects (Euclidean 2D distance between two points).
3) When the distance between a pair of objects is less than a fixed threshold value (distance),
4) the center point values are re-adjusted for both objects such that:
5) XVal for Object1 = Org_CenterPos - (BBX2-BBX1)/4
6) YVal for Object1 = Org_CenterPos - (BBY2-BBY1)/4
7) XVal for Object2 = Org_CenterPos + (BBX2-BBX1)/4
8) YVal for Object2 = Org_CenterPos + (BBY2-BBY1)/4
The adjustment happens diagonally such that Object1 center point moves downwards diagonally and Object2 center point moves upwards diagonally.
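A runnable Python sketch of Pseudocode 1, combined with the center coordinate formulae of paragraph [0038], is given below. The dictionary layout of the final-hashmap, the threshold value, and the choice of Object1's bounding box for the displacement terms (the pseudocode does not say whose box BBX1, BBX2, BBY1 and BBY2 refer to) are assumptions, and the names are illustrative.
Illustrative sketch (Python):
import math

def center_of(box: dict) -> tuple:
    """Center coordinate per [0038]: midpoint of the bottom-left and top-right corners."""
    (x1, y1), (x2, y2) = box["bottom_left"], box["top_right"]
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def avoid_collisions(final_hashmap: dict, threshold: float = 30.0) -> dict:
    """Re-adjust colliding hotspot centers per Pseudocode 1.

    final_hashmap maps an object id to {"bottom_left": (x, y), "top_right": (x, y)}
    (an assumed layout); threshold is the fixed distance below which two hotspots
    are treated as colliding (an illustrative value)."""
    centers = {obj_id: center_of(box) for obj_id, box in final_hashmap.items()}
    ids = list(centers)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            c1, c2 = centers[ids[i]], centers[ids[j]]
            if math.dist(c1, c2) < threshold:
                # Displacement of a quarter of the bounding box span (steps 5-8),
                # applied in opposite directions so the two centers separate diagonally.
                box1 = final_hashmap[ids[i]]
                dx = (box1["top_right"][0] - box1["bottom_left"][0]) / 4.0
                dy = (box1["top_right"][1] - box1["bottom_left"][1]) / 4.0
                centers[ids[i]] = (c1[0] - dx, c1[1] - dy)
                centers[ids[j]] = (c2[0] + dx, c2[1] + dy)
    return centers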
Experimentation details:
[0043] In an embodiment, the present disclosure is experimented with various scenes and environments, and it is identified that the present disclosure is capable of generating image hotspots dynamically in an accurate manner. In an embodiment, the computation time of the present disclosure for complex scenes including a large number of objects is 5.64 seconds, and for simple images such as apparel it is less than 5 seconds. In a batch mode, the present disclosure took 19 seconds to generate bounding boxes for 6 scenes.
[0044] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0045] The embodiments of the present disclosure herein address the unresolved problem of creating dynamic image hotspots. The present disclosure provides a dynamic and automated method for generating image hotspots, which is crucial in e-commerce and similar applications. The pixel level color intensity based matching helps in obtaining image flipping and image rotation free accurate matching. Further, the present disclosure provides collision free dynamic image hotspot generation.
[0046] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein such computer-readable storage means contain program-code means for implementation of one or more steps of the method when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs, GPUs and edge computing devices.
[0047] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e. non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0048] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
, Claims:WE CLAIM:

1. A processor implemented method (300), the method comprises:
receiving (302), via one or more interfaces, an input image and a plurality of product images to be mapped with the input image, wherein the input image comprises a plurality of objects;
generating (304), by one or more hardware processors, a plurality of sub-images based on the input image using a segmentation technique, wherein the segmentation technique splits the input image into a predefined number of sub-images;
generating (306), by the one or more hardware processors, a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates;
identifying (308), by the one or more hardware processors, a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique;
stitching (310), by the one or more hardware processors, the plurality of matching product images identified from the plurality of sub-images into the input image by:
computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique; and
generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique;
computing (312), by the one or more hardware processors, a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates; and
generating (314), by the one or more hardware processors, a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.

2. The method as claimed in claim 1, wherein the method of identifying the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using the color intensity based image matching technique comprises:
receiving the plurality of objects associated with each of the plurality of sub-images;
generating a 1-Dimensional (1D) color intensity array corresponding to each of the plurality of objects associated with each of the plurality of sub-images using a color intensity value generation technique, wherein a color intensity value is assigned to each pixel based on a comparison among red, green and blue color values of the pixel; and
identifying the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images based on a corresponding 1D color intensity array using a Mean Square Deviation (MSD) technique.

3. The method as claimed in claim 1, wherein the method of generating the dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using the collision free dynamic hotspot generation technique comprises:
receiving the second bounding box associated with each of the plurality of matching product images and the corresponding center coordinates;
identifying a plurality of colliding second bounding boxes from among a plurality of second bounding boxes associated with the input image by computing a Euclidean distance between center coordinates of each of the plurality of second bounding boxes, wherein the center coordinates with the Euclidean distance less than a predefined distance threshold are identified as colliding bounding boxes;
obtaining a plurality of collision free hotspots by rearranging the center coordinate associated with each of the plurality of colliding second bounding boxes based on a plurality of predefined displacement coordinates; and
generating the dynamic hotspot for each of the plurality of matching product images based on the corresponding center coordinates and the plurality of collision free hotspots, wherein the dynamic hotspot corresponding to each of the plurality of matching product images are displayed in the input image.

4. A system (100) comprising:
at least one memory (104) storing programmed instructions; one or more Input /Output (I/O) interfaces (112); and one or more hardware processors (102) operatively coupled to the at least one memory (104), wherein the one or more hardware processors (102) are configured by the programmed instructions to:
receive an input image and a plurality of product images to be mapped with the input image, wherein the input image comprises a plurality of objects;
generate a plurality of sub-images based on the input image using a segmentation technique, wherein the segmentation technique splits the input image into a predefined number of sub-images;
generate a first bounding box corresponding to each of the plurality of objects associated with each of the plurality of sub-images using an Object Relational Mapping (ORM) technique, wherein each first bounding box is associated with a first set of bounding box coordinates;
identify a plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using a color intensity based image matching technique;
stitch the plurality of matching product images identified from the plurality of sub-images into the input image by:
computing a second set of bounding box coordinates for each of the plurality of matching product images based on a corresponding first set of bounding box coordinates and an image coordinate system associated with the input image using a normalization technique; and
generating a second bounding box for each of the plurality of matching product images in the input image based on an associated second set of bounding box coordinates using the ORM technique;
compute a center coordinate for each second bounding box associated with each of the plurality of matching product images based on an associated second set of bounding box coordinates; and
generate a dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using a collision free dynamic hotspot generation technique.

5. The system of claim 4, wherein the method of identifying the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images using the color intensity based image matching technique comprises:
receiving the plurality of objects associated with each of the plurality of sub-images;
generating a 1-Dimensional (1D) color intensity array corresponding to each of the plurality of objects associated with each of the plurality of sub-images using a color intensity value generation technique, wherein a color intensity value is assigned to each pixel based on a comparison among red, green and blue color values of the pixel; and
identifying the plurality of matching product images from among the plurality of objects associated with each of the plurality of sub-images based on a corresponding 1D color intensity array using a Mean Square Deviation (MSD) technique.

6. The system of claim 4, wherein the method of generating the dynamic hotspot for each of the plurality of matching product images in the input image based on the corresponding center coordinates using the collision free dynamic hotspot generation technique comprises:
receiving the second bounding box associated with each of the plurality of matching product images and the corresponding center coordinates;
identifying a plurality of colliding second bounding boxes from among a plurality of second bounding boxes associated with the input image by computing a Euclidean distance between center coordinates of each of the plurality of second bounding boxes, wherein the center coordinates with the Euclidean distance less than a predefined distance threshold are identified as colliding bounding boxes;
obtaining a plurality of collision free hotspots by rearranging the center coordinate associated with each of the plurality of colliding second bounding boxes based on a plurality of predefined displacement coordinates; and
generating the dynamic hotspot for each of the plurality of matching product images based on the corresponding center coordinates and the plurality of collision free hotspots, wherein the dynamic hotspot corresponding to each of the plurality of matching product images are displayed in the input image.

Dated this 14th Day of March 2023

Tata Consultancy Services Limited
By their Agent & Attorney

(Adheesh Nargolkar)
of Khaitan & Co
Reg No IN-PA-1086

Documents

Application Documents

# Name Date
1 202321016976-STATEMENT OF UNDERTAKING (FORM 3) [14-03-2023(online)].pdf 2023-03-14
2 202321016976-REQUEST FOR EXAMINATION (FORM-18) [14-03-2023(online)].pdf 2023-03-14
3 202321016976-PROOF OF RIGHT [14-03-2023(online)].pdf 2023-03-14
4 202321016976-FORM 18 [14-03-2023(online)].pdf 2023-03-14
5 202321016976-FORM 1 [14-03-2023(online)].pdf 2023-03-14
6 202321016976-FIGURE OF ABSTRACT [14-03-2023(online)].pdf 2023-03-14
7 202321016976-DRAWINGS [14-03-2023(online)].pdf 2023-03-14
8 202321016976-DECLARATION OF INVENTORSHIP (FORM 5) [14-03-2023(online)].pdf 2023-03-14
9 202321016976-COMPLETE SPECIFICATION [14-03-2023(online)].pdf 2023-03-14
10 202321016976-FORM-26 [27-04-2023(online)].pdf 2023-04-27
11 Abstract1.jpg 2023-05-23
12 202321016976-FER.pdf 2025-09-22

Search Strategy

1 202321016976_SearchStrategyNew_E_202314026728searchstratgy(1)E_22-09-2025.pdf