Abstract: The present disclosure relates to a method of modifying a transformable image. The method includes displaying at least one transformable image on a user device. The at least one transformable image comprises one or more dynamic components. The method also includes sensing, by the user device, at least one user gesture. The method further includes determining, by the user device, at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture. The method also includes modifying, by the user device, the one or more dynamic components of the at least one transformable image according to the at least one determined modification. The method further includes rendering on the user device, a modified image.
[0001] The present disclosure relates to the field of instant messaging.
BACKGROUND
[0002] The information in this section merely provides background information related to the present disclosure and may not constitute prior art.
[0003] Rapid growth in the field of digital portable devices has provided significant motivation for development in instant messaging. Generally, users use instant messaging to communicate by publishing a post, making a comment, or sending a text or an image. Messaging applications are an increasingly popular form of communication. Electronic message-based communication increasingly dominates other modes of communication.
[0004] While conventional messaging applications provide text-based communication, one aspect of messaging applications that has grown recently is sticker-based communication. A sticker illustrates an image used to convey the intended message of the sender to the receiver. Stickers also convey the intended message quickly and in a simple manner to the receiver. However, the sticker-based communication provided by the conventional messaging applications is unable to provide an interactive communication system to the receiver. Particularly, the conventional messaging applications/systems do not have any means by which the receiver can represent his reaction and/or emotion to the received sticker and/or an image.
[0005] Therefore, there is a need for a system which provides an interactive messaging experience and enables a receiver to express a reaction to a received sticker and/or image.
SUMMARY OF THE INVENTION
[0006] One or more shortcomings of the prior art are overcome, and additional advantages are provided by the present disclosure. Additional features and advantages
are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the disclosure.
[0007] In a main aspect, the present disclosure provides a method of modifying a transformable image. The method includes displaying at least one transformable image on a user device. The at least one transformable image comprises one or more dynamic components. The method also includes sensing, by the user device, at least one user gesture. Further, the method includes determining, by the user device, at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture. The method also includes modifying, by the user device, the one or more dynamic components of the at least one transformable image according to the at least one determined modification. Moreover, the method includes rendering on the user device, a modified image.
[0008] According to another aspect, the at least one transformable image comprises at least one of a 2-Dimensional (2D) or a 3-Dimensional (3D) graphical illustration.
[0009] According to yet another aspect, the at least one transformable image comprises metadata corresponding to the one or more dynamic components. The metadata comprises gestures and corresponding modification associated with the one or more dynamic components. The method of modifying a transformable image further comprises identifying the one or more dynamic components associated with the at least one transformable image from the metadata.
[0010] According to yet another aspect, the user gesture comprises at least one of an audio gesture, a hand gesture, a face gesture or a touch gesture.
[0011] According to yet another aspect, in case the user device is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the method further comprises modifying, by
the user device, the one or more dynamic components of the at least one transformable image using one or more machine learning techniques.
[0012] According to yet another aspect, modifying the one or more dynamic components comprises at least one topological modification comprising modifying the one or more dynamic components by at least one of scale, position and shear transformation.
[0013] According to yet another aspect, modifying the one or more dynamic components comprises modifying each of the one or more dynamic components independently.
[0014] According to yet another aspect, the modified image is an animated image comprising a sequence of intermediate interpolated images.
[0015] According to yet another aspect, the method further comprises receiving, at the user device, the at least one transformable image from another device and transmitting, by the user device to another device, the modified image.
[0016] In another main aspect, the present disclosure provides a device for modifying a transformable image. The device comprises one or more memory units and at least one processing unit operatively coupled to the one or more memory units. The at least one processing unit is configured to display at least one transformable image. The at least one transformable image comprises one or more dynamic components. The at least one processing unit is further configured to sense at least one user gesture. The at least one processing unit is also configured to determine at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture. Further, the at least one processing unit is configured to modify the one or more dynamic components of the at least one transformable image according to the at least one determined modification. Moreover, the at least one processing unit is configured to render the modified image.
[0017] According to yet another aspect, the at least one processing unit is further configured to identify the one or more dynamic components associated with the at least one transformable image from metadata included in the at least one transformable image.
[0018] According to yet another aspect, in case the at least one processing unit is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the at least one processing unit is configured to modify the one or more dynamic components of the at least one transformable image using one or more machine learning techniques.
[0019] According to yet another aspect, the at least one processing unit is also configured to receive the at least one transformable image from another device and transmit the modified image to another device.
[0020] In yet another aspect, the at least one processing unit is configured to perform at least one topological modification comprising modifying the one or more dynamic components by at least one of scale, position, and shear transformation. Further, the at least one processing unit is configured to modify each of the one or more dynamic components independently.
[0021] In the above paragraphs, the most important features of the invention have been outlined so that the detailed description that follows may be better understood and so that the present contribution to the art may be better appreciated. There are, of course, additional features of the invention that will be described hereinafter and which will form the subject of the claims appended hereto. Those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the design of other structures for carrying out the several purposes of the invention. It is important, therefore, that the claims be regarded as including such equivalent constructions as do not depart from the spirit and scope of the invention.
OBJECT OF THE INVENTION
[0022] The object of the present disclosure is to provide a system that enables a receiver to interact with a sticker and/or image received over an instant messaging system.
BRIEF DESCRIPTION OF DRAWINGS
[0023] Further aspects and advantages of the present invention will be readily understood from the following detailed description with reference to the accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views. The figures together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate the aspects and explain various principles and advantages, in accordance with the present invention wherein:
[0024] Fig. 1 illustrates a system for modifying a transformable image in accordance with an embodiment of the present disclosure.
[0025] Fig. 2 is a block diagram illustrating a user device in accordance with an embodiment of the present disclosure.
[0026] Fig. 3 illustrates two different aspects of a sticker in accordance with an embodiment of the present disclosure.
[0027] Fig. 4 illustrates a flow chart of method of modifying a transformable image in accordance with an embodiment of the present disclosure.
[0028] A person skilled in the art will appreciate that elements in the drawings are illustrated for simplicity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the drawings may be exaggerated relative to other elements to help improve understanding of aspects of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0029] In the present document, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0030] While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0031] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup or device that comprises a list of components does not include only those components but may include other components not expressly listed or inherent to such setup or device. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus or device.
[0032] The terms "sticker", "image", and "animated sticker" represent similar forms of digital media and are used interchangeably throughout the disclosure.
[0033] Disclosed herein is a technique for modifying a transformable image. The technique includes displaying at least one transformable image with one or more dynamic components to a user. The technique further includes sensing at least one user gesture and determining at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture. The technique also includes modifying the one or more dynamic components of the at least one transformable image according to the at least one determined modification. Moreover, the technique includes rendering and/or transmitting the modified image.
Therefore, the technique disclosed in present disclosure provides an interactive and entertaining messaging system for users.
[0034] Fig. 1 illustrates a system 100 for modifying a transformable image in accordance with an embodiment of the present disclosure. The system 100 includes a first user device 102a and a second user device 102b and a server 106 communicably coupled to each other via a network 104. An example of the network 104 may include, but not limited to, the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof.
[0035] Each of the first user device 102a and the second user device 102b may be configured to provide a user interface to the users to enable them to interact within the system 100 using a transformable image. Each of the first user device 102a and the second user device 102b may include any mobile computing or communication device, such as, but not limited to, a notebook computer, a personal digital assistant (PDA), a mobile phone, a smartphone, a laptop, a tablet or any similar class of mobile computing device with sufficient processing, communication, and audio/video recording and playback capabilities.
[0036] In an exemplary embodiment, the first user device 102a may be configured to transmit a transformable image to a second user device 102b. In an embodiment, the first user device 102a may provide a user interface to a first user to select and/or generate a transformable image and transmit it to the second user device 102b for a second user. The transformable image may represent a form of digital media which includes one or more dynamic components which can be modified based on requirements. The transformable image may be a 2-Dimensional (2D) or a 3-Dimensional (3D) graphical illustration. In some embodiments, the first user device 102a may enable a first user to select at least one transformable image from one or more transformable images stored at the first user device 102a. In some other embodiments, the first user device 102a may generate at least one transformable image based on one or more user inputs and/or user parameters. The one or more user
parameters may include, but not limited to, user chat history, user chatting patterns, user likes/dislikes and so forth. In yet another embodiment, the first user device 102a may access transformable images stored on a cloud server over the network 104. In another embodiment, the first user device 102a may implement one or more machine learning techniques to generate at least one transformable image. In some embodiments, the one or more dynamic components may be included as metadata associated with an image. The first user device 102a may be configured to transmit the at least one transformable image to the second user device 102b. The first user device 102a may include any number of suitable hardware and corresponding software components which may be required to carry out one or more functionalities of the first user device 102a. However, description of said components has been avoided for the sake of brevity.
[0037] The second user device 102b may be configured to receive the at least one transformable image from the first user device 102a. The second user device 102b may display the received at least one transformable image to the second user. In an exemplary embodiment, the second user device 102b may receive the at least one transformable image along with the associated metadata. In an embodiment, the metadata may include, but not limited to, one or more dynamic components, gestures and corresponding modifications associated with the one or more dynamic components. In an exemplary embodiment, the user gesture may be encoded in the metadata by any suitable encoding technique. In some embodiments, the second user device 102b may determine that the received image is a transformable image by utilizing metadata and may indicate it to the second user accordingly.
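By way of illustration only, and not as part of the claimed subject matter, the gesture-to-modification metadata described above can be pictured as a simple lookup table from dynamic component and gesture to a modification. The following minimal Python sketch uses hypothetical field names, gestures, and component labels; the actual encoding of the metadata is left open by the disclosure:

```python
# Illustrative sketch of transformable-image metadata: each dynamic
# component lists the gestures it responds to and the modification
# each gesture triggers. All names below are hypothetical.
STICKER_METADATA = {
    "dynamic_components": {
        "door": {
            "knock": {"type": "position", "params": {"dx": 0, "dy": -5}},
        },
        "candle_flame": {
            "blow": {"type": "scale", "params": {"sx": 0.0, "sy": 0.0}},
        },
    }
}

def lookup_modification(metadata, component, gesture):
    """Return the modification mapped to (component, gesture), or None
    if the metadata assigns no modification to that pair."""
    comps = metadata.get("dynamic_components", {})
    return comps.get(component, {}).get(gesture)
```

A receiving device could use such a lookup to decide whether a sensed gesture has a defined effect; a `None` result corresponds to the fallback case described below in which no modification can be determined from the metadata.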
[0038] Upon displaying the at least one transformable image to the second user, the second user device 102b may receive at least one user gesture from the second user. Particularly, the second user device 102b may sense the at least one user gesture from the second user using one or more sensor units of the second user device 102b. The user gesture may include at least one of an audio gesture, a hand gesture, a face gesture or a touch gesture. The second user device 102b may determine at least one
modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture using the metadata associated with the at least one transformable image. In another embodiment, where the second user device 102b is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the second user device 102b may modify the one or more dynamic components of the at least one transformable image using one or more machine learning techniques. In some embodiments, the machine learning techniques may intelligently identify components which have a dynamic property, together with a corresponding user gesture best suited to said dynamic property. The machine learning techniques may assign the identified user gesture to the identified components. Thereafter, the machine learning techniques may match the sensed user gesture with the assigned user gesture and perform the modification of the dynamic components based on the sensed user gesture.
[0039] In an embodiment, the modification of the one or more dynamic components of the at least one transformable image may include, but is not limited to, at least one topological modification comprising modifying the one or more dynamic components by at least one of a scale, position or shear transformation. In such a transformation, each of the one or more dynamic components may be modified independently. For example, in an image with a door and a window, only the door may be transformed when the user performs a knock gesture, thereby modifying the position of the door. In another example, the second user may change the size of an object in a transformable image by pinching with two fingers on that specific object. Similarly, other topological transformations may be performed. In an alternative embodiment, the second user device 102b may perform a non-topological modification of the one or more dynamic components, where a single gesture may be mapped to two or more dynamic components and the modification of said two or more dynamic components may be performed simultaneously. For example, in an image with ten lighted candles, each candle having corresponding dynamic components, all the candles may be blown out by a single user face gesture of blowing out the candles.
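The scale, position and shear transformations named above are standard affine operations on the vertices of a component. Purely as a non-limiting sketch (parameter names are assumptions, and real implementations would typically use a 2D transformation matrix), one way to apply them to a component's vertex list is:

```python
def apply_topological_modification(points, kind, **p):
    """Apply one topological modification (scale, position/translate,
    or shear) to a list of (x, y) vertices of a single dynamic
    component. Parameter names (sx, sy, dx, dy, kx, ky) are
    illustrative, not taken from the disclosure."""
    out = []
    for x, y in points:
        if kind == "scale":
            # Scale about the origin.
            out.append((x * p.get("sx", 1.0), y * p.get("sy", 1.0)))
        elif kind == "position":
            # Translate by (dx, dy).
            out.append((x + p.get("dx", 0.0), y + p.get("dy", 0.0)))
        elif kind == "shear":
            # Shear: x' = x + kx*y, y' = y + ky*x.
            out.append((x + p.get("kx", 0.0) * y,
                        y + p.get("ky", 0.0) * x))
        else:
            raise ValueError(f"unknown modification: {kind}")
    return out
```

Because each call operates on one component's vertices only, components can be modified independently, matching the independent per-component modification described above.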
[0040] The second user device 102b may render the modified image to the second user. In some embodiments, the modified image is an animated image comprising a sequence of intermediate interpolated images. Particularly, each of the steps performed during modification may be taken as a frame and then combined to represent the animation of the modification to illustrate the animated modified image. The second user device 102b may include any number of suitable hardware and corresponding software components which may be required to carry out one or more functionalities of the second user device 102b. However, description of said components has been avoided for the sake of brevity.
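The "sequence of intermediate interpolated images" described above can be produced by linearly interpolating each vertex from its original position to its modified position and treating each interpolation step as a frame. A minimal sketch under that assumption (the disclosure does not prescribe linear interpolation specifically):

```python
def interpolate_frames(start, end, num_intermediate):
    """Build an animated modification as a list of frames, each frame
    a list of (x, y) vertices, linearly interpolated between the
    start and end vertex positions of a dynamic component. Returns
    num_intermediate + 2 frames, including both endpoints."""
    frames = []
    steps = num_intermediate + 1
    for i in range(steps + 1):
        t = i / steps  # interpolation parameter in [0, 1]
        frame = [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
                 for (x0, y0), (x1, y1) in zip(start, end)]
        frames.append(frame)
    return frames
```

Combining the resulting frames in order yields the animated modified image; rendering only the final frame yields a static modified image.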
[0041] In some embodiments, each of the first user device 102a and the second user device 102b may be communicably coupled to each other via the server 106. The server 106 may be configured to provide each of the first user device 102a and the second user device 102b corresponding user interface required to achieve the desired objective of the present disclosure. In alternative embodiments, the server 106 may be configured to perform one or more steps of any of the first user device 102a and the second user device 102b based on the requirements.
[0042] The embodiment illustrated above is exemplary in nature, and the system 100 and/or any element of the system 100 may include any number of additional components required to perform the desired operation of the system 100.
[0043] Fig. 2 is a block diagram illustrating a user device 200 in accordance with an embodiment of the present disclosure. In an exemplary embodiment, said user device 200 may represent any of the first user device 102a and the second user device 102b, as illustrated in figure 1. The user device 200 may include any mobile computing or communication device, such as, but not limited to, a notebook computer, a personal digital assistant (PDA), a mobile phone, a smartphone, a laptop, a tablet or any similar class of mobile computing device with sufficient processing, communication, and audio/video recording and playback capabilities. In an exemplary embodiment, the user device 200 may include a transceiver 202, a memory unit 204, a processing unit 206, a
sensor unit 208, a determination unit 210, a modification unit 212 and a rendering unit 214. However, embodiments of the present disclosure cover, or are intended to cover, any additional components of the user device 200 that may be required for carrying out one or more functionalities of the user device 200.
[0044] The transceiver 202 may be configured to enable transmission and reception of at least one transformable image and associated metadata. Examples of the transformable image may include, but are not limited to, an image of balloons where a balloon can be burst using one or more user inputs, an image of a cake with lighted candles where the candles can be extinguished, and so forth. In an embodiment, the transceiver 202 at the first user device 102a may be configured to transmit at least one transformable image and associated metadata to the transceiver 202 at the second user device 102b.
[0045] The user device 200 may also include a processing unit 206 configured to process the at least one transformable image and/or one or more user inputs received from a user. The user device 200 may also include the memory unit 204 configured to store data and/or instruction required for processing of the processing unit 206. In some embodiments, the memory unit 204 may store the at least one transformable image and the associated metadata and/or generated modified image. In some embodiments, the memory unit 204 may include memory storage devices such as, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), flash disk, and so forth. In some embodiments, the processing unit 206 may execute a set of instructions stored in the memory unit 204 to provide a user interface to the user. The user interface may allow the user to interact within the system 100 (shown in Fig. 1).
[0046] The user device 200 may also include the sensor unit 208 configured to sense a user gesture. The sensor unit 208 may include one or more sensing devices such as, but not limited to, a microphone, a camera, an accelerometer, a gyroscope and so forth. The sensor unit 208 may be configured to identify a user gesture and transmit the resulting information to the processing unit 206 for further processing. In an alternative embodiment, the one or more sensing devices of the sensor unit 208 may transmit one
or more items of captured information to the processing unit 206 to identify the user gesture. In an embodiment, the sensor unit 208 may communicate the sensed user gesture to the determination unit 210. In an alternative embodiment, the processing unit 206 may transmit the sensed user gesture to the determination unit 210. Examples of the user gesture may include a particular sound produced by a user, a movement of the user's hand, a facial expression, a touch input via the user device and so forth.
[0047] The determination unit 210 may be configured to determine the at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed gesture. The determination unit 210 may inspect the metadata associated with the at least one transformable image to determine the at least one modification. In some embodiments, the determination unit 210 may employ any suitable decoding technique to identify the modification from the metadata. In an exemplary embodiment, the determination unit 210 may compare the sensed user gesture with a user gesture included in the metadata and may identify the corresponding modification assigned to that specific user gesture. In an embodiment, the determined at least one modification is directly conveyed by the determination unit 210 to the modification unit 212. In an alternative embodiment, the determination unit 210 may convey the determined at least one modification to the processing unit 206 and the processing unit 206 may accordingly transmit the determined at least one modification to the modification unit 212.
[0048] The modification unit 212 may be configured to modify the one or more dynamic components of the at least one transformable image according to the at least one determined modification. In an embodiment, the modification unit 212 may perform either a topological or a non-topological modification on the one or more dynamic components. The modification unit 212 may implement any suitable image or video processing technique which may be required to perform the desired function of the modification unit 212. In case the determination unit 210 is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the modification unit 212 may
be configured to modify the one or more dynamic components of the at least one transformable image using one or more machine learning techniques. The modification unit 212 may be configured to implement the machine learning techniques required to perform the functionality of the modification unit 212. An example of machine learning techniques performing modification of an image may include identifying a ball as a dynamic component with a dynamic property of bouncing and assigning to it a user hand gesture representing dribbling the ball. Thus, the machine learning techniques may automatically make the ball bounce when the user performs the dribbling hand gesture. In an embodiment, the modification unit 212 may transmit the modified image to the rendering unit 214. In an alternative embodiment, the modification unit 212 may transmit the modified image to the processing unit 206 and the processing unit 206 may accordingly transmit the modified image to the rendering unit 214.
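The ball/dribbling example above can be sketched in code. The sketch below is a deliberately simplified rule-based stand-in for the machine learning fallback, not a trained model: it assumes a recognizer has already labelled the components, and the label-to-behaviour table is entirely hypothetical:

```python
# Hypothetical table a learned model might effectively encode: a
# component label maps to a plausible dynamic property and the user
# gesture best suited to trigger it.
DEFAULT_BEHAVIOURS = {
    "ball": {"gesture": "dribble", "modification": "bounce"},
    "candle": {"gesture": "blow", "modification": "extinguish"},
}

def fallback_modification(component_label, sensed_gesture):
    """When the metadata yields no modification, infer one: return
    the assigned modification if the sensed gesture matches the
    behaviour assigned to the component, else None."""
    behaviour = DEFAULT_BEHAVIOURS.get(component_label)
    if behaviour and behaviour["gesture"] == sensed_gesture:
        return behaviour["modification"]
    return None
```

In the disclosure's example, a sensed dribbling hand gesture on a component identified as a ball would thus resolve to a bouncing modification.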
[0049] The rendering unit 214 may be configured to render the modified image to the user. The rendering unit 214 may be configured to implement any image or video encoding/decoding technique which may be required to perform the functionality of the rendering unit 214. In an embodiment, the rendering unit 214 may render a static modified image. In an alternative embodiment, the rendering unit 214 may render a dynamic modified image such as an animated image, a Graphics Interchange Format (GIF) and so forth. In another embodiment, the rendering may take place through audio components such as a speaker.
[0050] Embodiments illustrated above are exemplary in nature and the user device 200 may include any other additional components required to perform the desired functionality of the user device 200.
[0051] Fig. 3 illustrates two different aspects of a sticker in accordance with an embodiment of the present disclosure. The sticker may be represented by a front rendering 302a and an inner representation 302b. In an exemplary embodiment, the sticker represents a face of a user. The front rendering 302a illustrates various components 304a present on the face. The components 304a include eyebrows, nose, lips, hair and so forth. In an exemplary embodiment, the components 304a are dynamic in nature and
may include one or more bone structures 304b, as illustrated by the inner representation 302b. The bone structures 304b may also be referred to as one or more dynamic components 304b of the sticker. In some embodiments, said one or more dynamic components 304b may be modified based on one or more user gestures to represent a motion of the corresponding component of the sticker.
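One common way such bone structures produce motion is by rotating a bone about its joint, as in conventional skeletal animation. The following sketch assumes a deliberately simplified two-point bone model (joint and tip); the disclosure does not specify the bone representation:

```python
import math

def rotate_bone(joint, tip, angle_deg):
    """Rotate the tip of a single bone of the sticker's inner
    representation about its joint by angle_deg degrees, one possible
    way a dynamic component 304b could move in response to a gesture
    (e.g. raising an eyebrow). Returns the new tip position."""
    a = math.radians(angle_deg)
    jx, jy = joint
    # Translate so the joint is at the origin, rotate, translate back.
    x, y = tip[0] - jx, tip[1] - jy
    return (jx + x * math.cos(a) - y * math.sin(a),
            jy + x * math.sin(a) + y * math.cos(a))
```

The mesh vertices of the corresponding face component 304a would then be deformed to follow the rotated bone, yielding the visible motion in the front rendering 302a.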
[0052] Fig. 4 is a flowchart of an exemplary method 400 for modifying a transformable image, in accordance with an embodiment of the present disclosure. This flowchart is provided for illustration purposes, and embodiments are intended to include or otherwise cover any methods or procedures for modifying a transformable image. Fig. 4 is described with reference to Figs. 1-2.
[0053] At step 402, the second user device 102b displays at least one transformable image to a second user. In an exemplary embodiment, the at least one transformable image is received from the first user device 102a. The at least one transformable image comprises one or more dynamic components.
[0054] At step 404, the second user device 102b senses at least one user gesture of the second user. In an embodiment, the sensor unit 208 may be used for sensing at least one user gesture of the second user. Next, at step 406, the second user device 102b determines at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture. In an exemplary embodiment, the determination unit 210 may determine the at least one modification of the one or more dynamic components.
[0055] At step 408, the second user device 102b modifies the one or more dynamic components of the at least one transformable image according to the at least one determined modification. In an exemplary embodiment, the operation of modification may be performed by the modification unit 212.
[0056] At step 410, the rendering unit 214 may render the modified image at the second user device 102b. In some embodiments, the modified image may be transmitted to the
first user device 102a. The first user may then further interact with the modified image in a manner similar to the second user's interaction with the initial transformable image.
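The determine-modify-render pipeline of steps 402-410 can be condensed into a short sketch. All data structures below are hypothetical simplifications: the image is reduced to a dictionary of component states, and the metadata to a per-component gesture table:

```python
# Compact sketch of steps 406-410: determine the modification from
# the metadata, apply it to the dynamic component, and return the
# modified image for rendering. Names are illustrative only.
def modify_transformable_image(image, metadata, sensed_gesture, component):
    """Run determine (step 406) and modify (step 408) for one dynamic
    component; returns the image unchanged if no modification maps to
    the sensed gesture."""
    mapping = metadata.get(component, {})
    modification = mapping.get(sensed_gesture)   # step 406: determine
    if modification is None:
        return image                             # nothing to apply
    modified = dict(image)                       # step 408: modify a copy
    modified[component] = modification
    return modified                              # ready for step 410: render

image = {"candles": "lit"}
meta = {"candles": {"blow": "extinguished"}}
result = modify_transformable_image(image, meta, "blow", "candles")
```

Here a sensed blow gesture turns the lit candles component to an extinguished state, while an unmapped gesture leaves the image untouched, mirroring the metadata-lookup behaviour described for the second user device 102b.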
[0057] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, unit, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
[0058] The foregoing description of the various embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the above disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, and instead the claims should be accorded the widest scope consistent with the principles and novel features disclosed herein.
[0059] While the disclosure has been described with reference to a preferred embodiment, it is apparent that variations and modifications will occur without departing from the spirit and scope of the disclosure. It is therefore contemplated that the
present disclosure covers any and all modifications, variations or equivalents that fall within the scope of the basic underlying principles disclosed above.
WE CLAIM
1. A method of modifying a transformable image, the method comprising:
displaying at least one transformable image on a user device, wherein the at least one transformable image comprises one or more dynamic components;
sensing, by the user device, at least one user gesture;
determining, by the user device, at least one modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture;
modifying, by the user device, the one or more dynamic components of the at least one transformable image according to the at least one determined modification; and
rendering, on the user device, a modified image.
2. The method as claimed in claim 1, wherein the at least one transformable image comprises at least one of a 2-Dimensional (2D) or a 3-Dimensional (3D) graphical illustration.
3. The method as claimed in claim 1, wherein the at least one transformable image comprises metadata corresponding to the one or more dynamic components, wherein the metadata comprises gestures and corresponding modifications associated with the one or more dynamic components.
4. The method as claimed in claim 3, further comprising identifying the one or more dynamic components associated with the at least one transformable image from the metadata.
5. The method as claimed in claim 1, wherein the user gesture comprises at least one of an audio gesture, a hand gesture, a face gesture or a touch gesture.
6. The method as claimed in claim 1, wherein, in case the user device is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the method comprises:
modifying, by the user device, the one or more dynamic components of the at least one transformable image using one or more machine learning techniques.
7. The method as claimed in claim 1, wherein modifying the one or more dynamic components comprises at least one topological modification comprising modifying the one or more dynamic components by at least one of a scale, position, or shear transformation.
8. The method as claimed in any one of claims 1 to 7, wherein modifying the one or more dynamic components comprises modifying each of the one or more dynamic components independently.
9. The method as claimed in claim 1, wherein the modified image is an animated image comprising a sequence of intermediate interpolated images.
10. The method as claimed in any one of claims 1 to 9, further comprising:
receiving, at the user device, the at least one transformable image from another device; and
transmitting, by the user device to another device, the modified image.
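Claim 9 describes the modified image as an animated image comprising a sequence of intermediate interpolated images. One way to picture this is a linear interpolation of a component parameter (such as scale) between its initial and modified values; the helper below is a hypothetical sketch, not the method prescribed by the disclosure.

```python
def interpolate(start, end, steps):
    """Generate the intermediate values of a component parameter (e.g. scale)
    between its initial and modified states, one per animation frame.
    Illustrative only; linear interpolation is an assumption here."""
    return [start + (end - start) * i / (steps + 1) for i in range(1, steps + 1)]

# Four intermediate interpolated scales between 1.0 and 1.5.
frames = interpolate(1.0, 1.5, 4)
print(frames)
```

Each intermediate value would correspond to one intermediate interpolated image in the rendered animation.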
11. A device for modifying a transformable image, the device comprising:
one or more memory units; and
at least one processing unit operatively coupled to the one or more memory units, the at least one processing unit is configured to:
display at least one transformable image, wherein the at least one transformable image comprises one or more dynamic components;
sense at least one user gesture;
determine at least one modification of the one or more dynamic
components of the at least one transformable image based on the sensed user gesture;
modify the one or more dynamic components of the at least one transformable image according to the at least one determined modification; and
render a modified image.
12. The device as claimed in claim 11, wherein the at least one transformable image comprises metadata corresponding to the one or more dynamic components, wherein the metadata comprises gestures and corresponding modifications associated with the one or more dynamic components.
13. The device as claimed in claim 12, wherein the at least one processing unit is further configured to identify the one or more dynamic components associated with the at least one transformable image from the metadata.
14. The device as claimed in claim 11, wherein, in case the at least one processing unit is unable to determine any modification of the one or more dynamic components of the at least one transformable image based on the sensed user gesture, the at least one processing unit is configured to:
modify the one or more dynamic components of the at least one transformable image using one or more machine learning techniques.
15. The device as claimed in any one of claims 11 to 14, wherein the at least one processing unit is configured to:
receive the at least one transformable image from another device; and
transmit the modified image to another device.
16. The device as claimed in claim 11, wherein the at least one processing unit is configured to:
perform at least one topological modification comprising modifying the one or more dynamic components by at least one of a scale, position, or shear transformation.
17. The device as claimed in any one of claims 11 to 16, wherein the at least one processing unit is configured to modify each of the one or more dynamic components independently.
| # | Name | Date |
|---|---|---|
| 1 | 202011037308-FORM 18 [01-07-2024(online)].pdf | 2024-07-01 |
| 2 | 202011037308-STATEMENT OF UNDERTAKING (FORM 3) [29-08-2020(online)].pdf | 2020-08-29 |
| 3 | 202011037308-POWER OF AUTHORITY [29-08-2020(online)].pdf | 2020-08-29 |
| 4 | 202011037308-Proof of Right [01-12-2020(online)].pdf | 2020-12-01 |
| 5 | 202011037308-COMPLETE SPECIFICATION [29-08-2020(online)].pdf | 2020-08-29 |
| 6 | 202011037308-FORM 1 [29-08-2020(online)].pdf | 2020-08-29 |
| 7 | 202011037308-DECLARATION OF INVENTORSHIP (FORM 5) [29-08-2020(online)].pdf | 2020-08-29 |
| 8 | 202011037308-DRAWINGS [29-08-2020(online)].pdf | 2020-08-29 |