Abstract: The present invention discloses a system (102) and method for providing rewards for personalized user engagements using an intelligent interactive display. The system (102) receives at least one data packet from one or more users (110) by scanning at least one unique identifier to enable a first validation. The system (102) generates and displays at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display (302) upon the first successful validation, and performs Dynamic Digital Reflection (DDR) scanning to enable a second validation. The system (102) synchronizes, based on the Dynamic Digital Reflection (DDR), the display of the one or more computing devices and the at least one reserved screen zone, and executes one or more DDR commands to enable a third validation. The system (102) verifies an identity of the users (110), authenticates the proximity time and space of the users, and provides rewards for personalized user engagements using the intelligent interactive display (302).
Description:
TECHNICAL FIELD
[01] The present invention relates to the field of e-commerce. In particular, it relates to a system and method for providing rewards for personalized user engagements using an intelligent interactive display, by evaluating physical validation in real-time to enhance user experience.
BACKGROUND
[02] Background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.
[03] E-commerce, digital transactions, and digital business refer to the buying and selling of products or services over the internet. It has become increasingly popular in recent years as more and more people have access to the internet and feel comfortable shopping online. Even as the world becomes increasingly technology-driven, people's outdoor activity has not reduced, for various reasons. The experience gained from physically visiting shops, dining, entertainment, and travelling is unmatched by any digital experience. Most companies provide rewards for performance and engagement through various digital and non-digital systems.
[04] Rewards schemes are often used by e-commerce businesses to incentivize customer loyalty and repeat purchases. These schemes typically offer customers points or other rewards for making purchases, which can then be redeemed for discounts, free products, or other perks. Rewards schemes can be an effective way to encourage customers to keep coming back to an e-commerce site, and can also help businesses collect data about their customers' preferences and buying habits. Some examples of e-commerce businesses with popular rewards schemes include Amazon's Prime program, which offers free shipping and other benefits for a yearly fee, and Sephora's Beauty Insider program, which offers points for purchases that can be redeemed for free products and other rewards.
[05] Further, customer and people engagement is becoming increasingly vital to sustaining entertainment and commerce. Combining engagement and entertainment with rewards is an effective way to bring people to a physical location, drive shopping, improve relationships, encourage purchases, get them involved and interested, and help them network.
[06] However, in current digital rewards schemes, digital rewards may not feel as valuable or tangible to some customers. This can make it harder for businesses to create a strong emotional connection with their customers. Some customers may be hesitant to provide personal information in order to participate in a digital rewards program. In addition, developing and maintaining digital rewards schemes can be expensive and may not be feasible for smaller businesses with limited resources.
[07] One of the existing patent applications, US7706838, discloses a physical presence digital authentication system that allows a user to interact with merchants and other entities via an electronic device. Another existing patent application, US20220301251, discloses an Artificial Intelligence (AI) avatar-based interaction service performed in a system including an unmanned information terminal and an interaction service device. Yet another existing patent application, WO2015186116A1, discloses a platform which consolidates a merchant interface, a customer interface, and point of interest information in one place and provides a simple computer-implemented solution to create a hassle-free environment for both merchants and customers. Most existing mechanisms check for physical presence by monitoring and recording user data such as portraits, gestures, and motion, which still presents issues with respect to user privacy. Thus, there is a need for a reliable solution that secures user privacy and complies with the law.
[08] Therefore, there is a need for a reliable and robust system and method for providing rewards for personalized user engagements using an intelligent interactive display, by evaluating physical validation in real-time to enhance user experience.
OBJECTS OF THE PRESENT DISCLOSURE
[09] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[10] It is an object of the present disclosure to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display.
[11] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, by securing the resources and assets for the user of the profile, and conducting proximity validation.
[12] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, to significantly improve the user's product shopping experience, convenience, and delivery time.
[13] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, a self-provisioning e-commerce system, and on-demand supplies management.
[14] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, which supports time-based, demand-based, programmable, behavioural, personalised, and AI-based advertisements.
[15] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, which enables a contactless authentication, transaction, gift, and product dispatching system.
[16] It is another object of the present invention to provide a system and method for providing rewards for personalized user engagements using an intelligent interactive display, which provides customized rewards that fit the preferences of the user.
SUMMARY
[17] The present invention relates to the field of e-commerce. In particular, it relates to a system and method providing rewards for personalized user engagements using intelligent interactive display, by evaluating physical validation in real-time to enhance user experience.
[18] An aspect of the present disclosure pertains to providing rewards for personalized user engagements using an intelligent interactive display. The system comprises one or more processors and a memory coupled to the one or more processors. The memory comprises processor-executable instructions to cause the one or more processors to: receive at least one data packet from at least one of one or more users and one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display to enable a first validation. The at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image. Further, the system can be configured to generate and display at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display upon the first successful validation, and perform a Dynamic Digital Reflection (DDR) scanning by the one or more users to enable a second validation. Furthermore, the system can be configured to synchronize, based on the Dynamic Digital Reflection (DDR), a display of the one or more computing devices and the at least one reserved screen zone, and execute one or more commands to enable a third validation. The one or more commands (DDR commands) comprise at least one of a zoom-in, a zoom-out, and a rotate screen. Finally, the system can be configured to verify an identity of the one or more users and authenticate the proximity time and space of the one or more users based on the third validation, and provide rewards for personalized user engagements using the intelligent interactive display.
[19] In an aspect, the one or more photorealistic objects can comprise at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
[20] In an aspect, the system can be configured to automatically create a zone separator in the at least one reserved screen zone based on the one or more users simultaneously operating the intelligent interactive display to connect and execute the DDR.
[21] In an aspect, the zone separator can be configured to assign, specify, and monitor the at least one reserved screen zone based on the termination of the DDR of the respective one or more users. The zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
[22] In an aspect, the system can be configured to validate, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users, wherein the one or more parameters comprise at least one of a face detection, a user location, user eye winks, and an Eye Aspect Ratio (EAR).
[23] In an aspect, a method provides rewards for personalized user engagements by using an intelligent interactive display of a system. The method includes steps for receiving, by the system, at least one data packet from at least one of one or more users and one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display for enabling a first validation. The at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image. The method includes steps for generating and displaying at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display of the system upon the first successful validation, and performing a Dynamic Digital Reflection (DDR) scanning by the one or more users for enabling a second validation. The method includes steps for synchronizing, by the system, a display of the one or more computing devices and the at least one reserved screen zone based on the Dynamic Digital Reflection (DDR), and executing one or more commands to enable a third validation, wherein the one or more commands comprise at least one of a zoom-in, a zoom-out, and a rotate screen. Finally, the method includes verifying an identity of the one or more users, authenticating the proximity time and space of the one or more users based on the third validation, and providing rewards for personalized user engagements using the intelligent interactive display.
[24] In an aspect, the one or more photorealistic objects comprises at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
[25] In an aspect, the method includes the step of automatically creating a zone separator in the at least one reserved screen zone based on the one or more users simultaneously operating the intelligent interactive display to connect and execute the DDR.
[26] In an aspect, the method includes the step of assigning, specifying, and monitoring the at least one reserved screen zone based on the termination of the DDR of the respective one or more users. The zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
[27] In an aspect, the method includes the step of validating, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users, wherein the one or more parameters comprise at least one of a face detection, a user location, user eye winks, and an Eye Aspect Ratio (EAR).
BRIEF DESCRIPTION OF DRAWINGS
[28] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in, and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure, and together with the description, serve to explain the principles of the present disclosure.
[29] In the figures, similar components, and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[30] FIG. 1 illustrates an exemplary network architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[31] FIG. 2 illustrates architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[32] FIG. 3 illustrates an architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with some embodiments of the present disclosure.
[33] FIGs. 4A-4D illustrate a schematic representation of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[34] FIG. 5 illustrates an exemplary view of a flow diagram of the proposed method providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[35] FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
DETAILED DESCRIPTION
[36] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[37] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
[38] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[39] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[40] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[41] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[42] The following terms/acronyms have been used in the disclosure:
[43] Hosted Events and Non-Hosted Events: The Hosted Event is referred to as Event in various places of this document. Hosted events are those that are managed by a person or system, whereas non-hosted events are those that are triggered by rules, configurations, or outcomes; they can be digital or non-digital and may be short-lived.
[44] Online and Offline Training/Workshops: These are sessions where participants learn a skill or complete a course or certification. The trainer can have one-on-one interaction with the attendees, solve queries via live Q&A, share content via screen, and more.
[45] Virtual Summits/Conferences: Best suited for small groups of participants. Multiple speakers can be invited, several tracks and sessions can be hosted, breakout sessions can be held in between, and live polls and surveys can be conducted. Speakers and attendees can have two-way communication over audio/video in real-time.
[46] Seminar/Webinars: A seminar is a form of academic instruction, either at an academic institution or offered by a commercial or professional organization. It has the function of bringing together small groups for recurring meetings, focusing each time on some subject, in which everyone present is requested to participate. A webinar is an online event that is hosted by an organization/company and broadcast to a select group of individuals through their computers via the Internet. (A webinar is sometimes also referred to as a “webcast”, “online event” or “web seminar”.)
[47] Virtual Meetups: In-person gatherings have their own benefits, but, just like social media, online meetups are an effective medium for connecting people who share common interests, in real-time. The privacy of the participants is maintained by allowing access only to the invitees.
[48] Ask Me Anything (AMA): A session with an expert or guest speaker. It is a unique way of interviewing where the guest takes questions from the audience in real-time, as well as some pre-submitted questions.
[49] Podcasts: Podcasts are a series of episodes consisting only of audio files, which can be downloaded by the listener. Typically, they are pre-recorded, high-quality audio.
[50] Virtual Keynotes: A webcast or keynote address by a speaker which is broadcast live over the internet on a source website. Keynotes can help promote a virtual event, as they amplify the message of the event through storytelling and branding.
[51] Fireside Chats: Hugely popular among the startup community, a fireside chat is an informal discussion between a guest and a moderator. Its comfortable setting makes it engaging for the audience, as it feels like listening to a close friend sharing deep insights.
[52] Trade Shows and Expos: A trade show is an event held to bring together members of a particular industry to display, demonstrate, and discuss their latest products and services.
[53] Board Meetings/Shareholder Meetings: A board meeting is a formal meeting of the board of directors of an organization and any invited guests, held at definite intervals and as needed to review performance, consider policy issues, address major problems, and perform the legal business of the board. A shareholders meeting is a meeting of the stockholders of the corporation wherein resolutions are placed before the shareholders to discuss corporate matters and other matters required by the bylaws of the company (such as reviewing and approving the company's performance over the relevant statutory period, appointing the Board of Directors (BOD), and taking decisions regarding increases in share capital, major acquisitions, mergers, etc.), and may be conducted at regular intervals (such as annually, six-monthly, or quarterly) or in exceptional circumstances.
[54] Product or Service Feedback: Customer feedback is the information, insights, issues, and input shared by a community about their experiences with a company, product, or service. This feedback guides improvements of the customer experience and can empower positive change in any business, even (and especially) when it is negative.
[55] The system can support events such as product sales/promotions/launches/reveals, trial shows/demo shows, open houses/orientations, festive shows, sport events, summits, roadshows, street shows, street exhibitions, and club meet-ups.
[56] Brainstorming/Scrum/Retrospective Meetings: A brainstorming session is a tool for generating as many ideas or solutions as possible to a problem or issue; it is not a tool for determining the best solution to a problem or issue. Before beginning any effective brainstorming session, ground rules must be set. Scrum is an agile framework that teams use to produce products faster by breaking large development projects into smaller pieces that can be completed in short timeframes. A Scrum meeting is a catch-all term that can describe different types of meetings held by Scrum teams. A retrospective is a meeting held after a product ships to discuss what happened during the product development and release process, with the goal of improving things in the future based on those learnings and conversations.
[57] Digital Events: Digital events are interactive, staged, experience-oriented events realized in digital space (online) with a wide variety of platforms and tools. They provide benefits such as competition, gamification, entertainment, shopping, fun, cost-effectiveness, and engagement.
[58] Non-Digital Events: Physical, in-person hosted and non-hosted meetings.
[59] Digital Participant: The term digital participant refers to active involvement in a digital platform or digital society through the use of modern information and communication technology (ICT), such as the Internet. This participation includes access not only to the Internet but also to various online services and content.
[60] Physical Participant: A physical participant is a person participating in an event in person, who may or may not connect with digital tools to engage.
[61] Rewards: Rewards are referred to in various places of this document. Rewards can take any of these forms: physical (tangible items), virtual, or digital. Rewards also include tangible items, instructions on how to collect them, eventual rewards, digital points, messages of thanks, messages of appreciation, invitations or access to other hosted or non-hosted events, access to gamification events, shopping points, redeemable points, application feature access points, e-commerce points, and trade points.
[62] DDR: Dynamic Digital Reflection scanning, which synchronizes a display of the one or more computing devices with the at least one reserved screen zone and executes one or more DDR commands to enable a third validation.
[63] DDR Command: one or more commands including, but not limited to, a zoom-in, a zoom-out, a rotate screen, a directional movement, a swipe, a click, gamification, and the like; any instruction that can be converted into a DDR command. A few examples: the user shakes the phone, and that movement is captured and reflected on the computing device, identically or differently, by the DDR to validate; the user device displays a circle and instructs the user to swipe along the circle's edge, and the same is reflected on the computing device, identically or differently, by the DDR to validate; a robot or drone can make a directional move based on the instruction, and the same is reflected on the computing device, identically or differently, by the DDR to validate.
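By way of a non-limiting illustration, the following Python sketch shows one way DDR commands and their reflection check could be represented; the names DDRCommand, DDREvent, and reflect_and_validate, and the tolerance value, are assumptions introduced for illustration only and are not the disclosed implementation.

```python
# Illustrative sketch only: a hypothetical representation of DDR commands and a
# reflection check between the user device and the reserved screen zone.
from dataclasses import dataclass
from enum import Enum, auto


class DDRCommand(Enum):
    ZOOM_IN = auto()
    ZOOM_OUT = auto()
    ROTATE_SCREEN = auto()
    SWIPE = auto()
    DIRECTIONAL_MOVE = auto()


@dataclass
class DDREvent:
    command: DDRCommand
    payload: dict  # e.g. swipe angle, rotation degrees, move vector


def reflect_and_validate(user_event: DDREvent, zone_event: DDREvent,
                         tolerance: float = 0.1) -> bool:
    """Return True when the gesture captured on the user device is reflected
    (identically or within a tolerance) in the reserved screen zone."""
    if user_event.command is not zone_event.command:
        return False
    # Compare numeric payload values within the configured tolerance.
    for key, value in user_event.payload.items():
        mirrored = zone_event.payload.get(key)
        if mirrored is None or abs(mirrored - value) > tolerance * max(abs(value), 1.0):
            return False
    return True
```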
[64] Various aspects of the present disclosure are described with respect to FIGs. 1-6.
[65] The present invention relates to the field of e-commerce. In particular, it relates to a system and method providing rewards for personalized user engagements using intelligent interactive display, by evaluating physical validation in real-time to enhance user experience.
[66] An aspect of the present disclosure pertains to providing rewards for personalized user engagements using an intelligent interactive display. The system comprises one or more processors and a memory coupled to the one or more processors. The memory comprises processor-executable instructions to cause the one or more processors to: receive at least one data packet from at least one of one or more users and one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display to enable a first validation. The at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image. Further, the system can be configured to generate and display at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display upon the first successful validation, and perform a Dynamic Digital Reflection (DDR) scanning by the one or more users to enable a second validation. Furthermore, the system can be configured to synchronize, based on the Dynamic Digital Reflection (DDR), a display of the one or more computing devices and the at least one reserved screen zone, and execute one or more DDR commands to enable a third validation. The one or more DDR commands (Dynamic Digital Reflection commands) comprise at least one of a zoom-in, a zoom-out, and a rotate screen. Finally, the system can be configured to verify an identity of the one or more users and authenticate the proximity time and space of the one or more users based on the third validation, and provide rewards for personalized user engagements using the intelligent interactive display.
[67] In an aspect, the one or more photorealistic objects can comprise at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
[68] In an aspect, the system can be configured to automatically create a zone separator in the at least one reserved screen zone based on the one or more users simultaneously operating the intelligent interactive display to connect and execute the DDR.
[69] In an aspect, the zone separator can be configured to assign, specify, and monitor the at least one reserved screen zone based on the termination of the DDR of the respective one or more users. The zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
[70] In an aspect, the system can be configured to validate, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users, wherein the one or more parameters comprise at least one of a face detection, a user location, user eye winks, and an Eye Aspect Ratio (EAR).
[71] In an aspect, a method provides rewards for personalized user engagements by using an intelligent interactive display of a system. The method includes steps for receiving, by the system, at least one data packet from at least one of one or more users and one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display for enabling a first validation. The at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image. The method includes steps for generating and displaying at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display of the system upon the first successful validation, and performing a Dynamic Digital Reflection (DDR) scanning by the one or more users for enabling a second validation. The method includes steps for synchronizing, by the system, a display of the one or more computing devices and the at least one reserved screen zone based on the Dynamic Digital Reflection (DDR), and executing one or more DDR commands to enable a third validation, wherein the one or more DDR commands comprise at least one of a zoom-in, a zoom-out, and a rotate screen. Finally, the method includes verifying an identity of the one or more users, authenticating the proximity time and space of the one or more users based on the third validation, and providing rewards for personalized user engagements using the intelligent interactive display.
[72] In an aspect, the one or more photorealistic objects comprises at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
[73] In an aspect, the method includes the step of automatically creating a zone separator in the at least one reserved screen zone based on the one or more users simultaneously operating the intelligent interactive display to connect and execute the DDR.
[74] In an aspect, the method includes the step of assigning, specifying, and monitoring the at least one reserved screen zone based on the termination of the DDR of the respective one or more users. The zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
[75] In an aspect, the method includes the step of validating, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users, wherein the one or more parameters comprise at least one of a face detection, a user location, user eye winks, and an Eye Aspect Ratio (EAR).
[76] FIG. 1 illustrates an exemplary network architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[77] In an embodiment, the system 102 is connected to a network 106, which is further connected to at least one computing device 108-1, 108-2, … 108-N (collectively referred to as computing devices 108 herein) associated with one or more users 110-1, 110-2, … 110-N (collectively referred to as users 110 herein). The computing devices 108 may be personal computers, laptops, tablets, wristwatches, or any custom-built computing device integrated within a modern diagnostic machine that can connect to a network as an IoT (Internet of Things) device. Further, the system 102 comprises an Artificial Intelligence (AI) engine 104 which can enhance the effectiveness and personalization of the digital rewards. Furthermore, the network 106 can be configured with a centralized server 112 that stores compiled data from all the digital transactions. The network architecture 100 allows various facilities in the e-commerce platform to synchronize their data in one central database which is easily accessible via the network 106.
[78] In an embodiment, the system 102 may receive at least one input data from the at least one computing device 108. A person of ordinary skill in the art will understand that the at least one computing device 108 may be individually referred to as a computing device 108 and collectively referred to as computing devices 108. In an embodiment, the computing device 110 may also be referred to as User Equipment (UE). Accordingly, the terms “computing device” and “User Equipment” may be used interchangeably throughout the disclosure.
[79] In an embodiment, the computing device 108 may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network 106 to the system 102.
[80] In an embodiment, the computing device 108 may involve collection, analysis, and sharing of data received from the system 102 via the communication network 106.
[81] In an embodiment, the system 102 may execute one or more instructions, through the computing device 110, using the AI engine 104, to learn correlations in the received data for providing rewards for personalized user engagements using an intelligent interactive display, and then store the results.
[82] In an exemplary embodiment, the system 102 may include, but not be limited to, a computer-enabled device, a mobile phone, a tablet, a display device, a display projector, an AR/VR/MR device, a camera, sensors, an NFC module, a network (wired or wireless), an apparatus to dispatch gifts, prints, e-commerce items, or instructions, Remote Detection Service (Detection Device) enabled devices such as iBeacon technologies, NFC, IR/RF services, and Bluetooth to detect nearby devices, connected signage objects, an apparatus, a vending machine, a gift claw machine, a combination of the vending machine and the gift claw machine, a drone, a robot, advertisement displays, or some combination thereof.
[83] In an exemplary embodiment, the communication network 106 may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an exemplary embodiment, the communication network 106 may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[84] In an embodiment, the one or more computing devices 110 may communicate with the system 102 via a set of executable instructions residing on any operating system. In an embodiment, the one or more computing devices 110 may include, but not be limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as a mobile phone, a smartphone, Virtual Reality (VR) devices, Augmented Reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the one or more computing devices 110 may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, input devices such as a touch pad, a touch-enabled screen, or an electronic pen, receiving devices for receiving any audio or visual signal in any range of frequencies, and transmitting devices that can transmit any audio or visual signal in any range of frequencies. It may be appreciated that the one or more computing devices 110 may not be restricted to the mentioned devices and various other devices may be used.
[85] In an embodiment, the network 106 is further configured with a centralized server 112 including a database, where all rewards for personalized user engagements can be stored. The stored rewards can be retrieved based on requirements.
[86] In an embodiment, the system 102 can be configured to receive at least one data packet from one or more users 110 associated with one or more computing devices 108 by scanning at least one unique identifier displayed on the intelligent interactive display to enable a first validation. Thus, a first connection can be established between the system 102 and the user 110 associated with the computing device 108. The at least one unique identifier can include, but is not limited to, a Quick Response (QR) code, a graphical image, a 3D image, and the like. In addition, the Artificial Intelligence (AI) engine 104 can be configured to validate the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users. The one or more parameters can include, but are not limited to, a face detection, a user location, user eye winks, an Eye Aspect Ratio (EAR), and the like.
[87] In another embodiment, the first connection can be established using Remote Detection Service (also known as a Detection Device) enabled devices, such as iBeacon technologies, NFC, IR/RF services, and Bluetooth, to detect nearby devices. Once the user 110 connects to the system 102 through the unique identifier, either by camera scanning, tapping, or any Remote Detection Service, the first connection and the first validation are confirmed. The first connection facilitates establishing secured communication between the system 102 and the user 110 to transmit a message to the source system to initiate the second validation.
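As a non-limiting illustration, the following Python sketch shows one way the first validation could pair a scanned, short-lived unique identifier with the received data packet; the HMAC token format, shared secret, and validity window are assumptions introduced for illustration only.

```python
# Minimal sketch, not the claimed implementation: one possible way to issue and
# verify the payload encoded in the displayed unique identifier (e.g. a QR code).
import hashlib
import hmac
import json
import time

SECRET_KEY = b"display-secret"          # assumed shared secret of the display
TOKEN_VALIDITY_SECONDS = 120            # assumed validity window


def issue_identifier(display_id: str) -> str:
    """Create the payload encoded in the QR code / graphical identifier."""
    issued_at = int(time.time())
    message = f"{display_id}:{issued_at}".encode()
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return json.dumps({"display_id": display_id, "issued_at": issued_at,
                       "signature": signature})


def first_validation(scanned_payload: str, device_id: str) -> bool:
    """Validate the data packet received after the user scans the identifier."""
    data = json.loads(scanned_payload)
    message = f"{data['display_id']}:{data['issued_at']}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    fresh = time.time() - data["issued_at"] <= TOKEN_VALIDITY_SECONDS
    authentic = hmac.compare_digest(expected, data["signature"])
    return fresh and authentic and bool(device_id)
```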
[88] In an embodiment, the system 102 can be configured to generate and display at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display upon the first successful validation, and perform a Dynamic Digital Reflection (DDR) scanning by the one or more users to enable a second validation. The one or more photorealistic objects can include, but are not limited to, a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), a hologram, and the like. Further, the system 102 can be configured to automatically create a zone separator in the at least one reserved screen zone based on the one or more users simultaneously operating the intelligent interactive display to connect and execute the DDR. The zone separator is configured to assign, specify, and monitor the at least one reserved screen zone based on the termination of the DDR of the respective one or more users. The zone separator can include, but is not limited to, a grid line, a color separation, a fixed dimension, transformation patterns, and the like.
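The following non-limiting Python sketch illustrates one possible automatic zone-separator allocation when several users operate the display simultaneously; the ZoneAllocator class, the vertical-strip layout, and the maximum zone count are assumptions for illustration rather than the disclosed implementation.

```python
# Illustrative sketch of automatic zone-separator creation for concurrent users.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class ReservedZone:
    user_id: str
    bounds: Tuple[int, int, int, int]   # x, y, width, height on the display
    separator: str = "grid_line"        # or "color_separation", "fixed_dimension"


class ZoneAllocator:
    def __init__(self, display_width: int, display_height: int, max_zones: int = 4):
        self.display_width = display_width
        self.display_height = display_height
        self.max_zones = max_zones
        self.zones: Dict[str, ReservedZone] = {}

    def allocate(self, user_id: str) -> Optional[ReservedZone]:
        """Reserve one vertical strip per connected user, separated by grid lines."""
        if len(self.zones) >= self.max_zones:
            return None
        index = len(self.zones)
        width = self.display_width // self.max_zones
        zone = ReservedZone(user_id, (index * width, 0, width, self.display_height))
        self.zones[user_id] = zone
        return zone

    def release(self, user_id: str) -> None:
        """Free the reserved zone when the user's DDR session terminates."""
        self.zones.pop(user_id, None)
```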
[89] In an embodiment, the system 102 can be configured to synchronize, based on the Dynamic Digital Reflection (DDR), a display of the one or more computing devices 108 and the at least one reserved screen zone, and execute one or more DDR commands to enable a third validation. The one or more DDR commands can include, but are not limited to, a zoom-in, a zoom-out, a rotate screen, and the like. Finally, the system 102 can be configured to verify an identity of the one or more users 110 and authenticate the proximity time and space of the one or more users 110 based on the third validation, and provide rewards for personalized user engagements using the intelligent interactive display.
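A non-limiting Python sketch of how the third validation could compare the DDR command sequence executed on the user device with the sequence rendered in the reserved screen zone is given below; the log format and the time-skew threshold are assumptions for illustration.

```python
# Minimal sketch: the third validation passes when both ends saw the same DDR
# commands in the same order, at nearly the same time.
from typing import List, Tuple

# Each log entry: (command name, timestamp in seconds)
CommandLog = List[Tuple[str, float]]


def third_validation(device_log: CommandLog, zone_log: CommandLog,
                     max_skew_seconds: float = 1.5) -> bool:
    """Confirm the command sequences match and are near-simultaneous."""
    if len(device_log) != len(zone_log) or not device_log:
        return False
    for (device_cmd, t_device), (zone_cmd, t_zone) in zip(device_log, zone_log):
        if device_cmd != zone_cmd:
            return False
        if abs(t_device - t_zone) > max_skew_seconds:
            return False
    return True
```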
[90] In an embodiment, the reward distribution may include, but is not limited to, a digital mode, a physical mode, distribution at the user device, distribution via an associated dispatching unit from a computing device, distribution via a network device unit, distribution through the instructions provided by the computing device or user device, and the like.
[91] Although FIG. 1 shows exemplary components of the network architecture 100, in other embodiments, the network architecture 100 may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture 100 may perform functions described as being performed by one or more other components of the network architecture 100.
[92] FIG. 2 illustrates architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[93] In an aspect, the system 102 may comprise one or more processor(s) 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 102. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may comprise any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[94] Referring to FIG. 2, the system 102 may include an interface(s) 206. The interface(s) 206 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication to/from the system 102. The interface(s) 206 may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, processing unit/engine(s) 208 and a local database 218.
[95] In an embodiment, the processing unit/engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the system 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry.
[96] In an embodiment, the local database 218 may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor 202 or the processing engines 208. In an embodiment, the local database 218 may be separate from the system 102.
[97] In an exemplary embodiment, the processing engine 208 may include one or more engines selected from any of a data acquisition module 210, a validation module 212, a reward dispatching module 214, and other modules 216 having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation, an audio unit for alerts, and the like.
[98] In an embodiment, the data acquisition module 210 may include means for receiving at least one data packet from one or more users associated with one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display.
[99] In an embodiment, the validation module 212 may be configured to authenticate the user 110 at different stages by providing various validations which include the first validation, the second validation, and the third validation.
[100] In an embodiment, the reward dispatching module 214 may be configured to dispatch, disburse, print a message, or execute the instruction shared by the system 102 based on the connection with the user 110. The system 102 includes resources such as physical products, soft copies, hard copies, or information to access a resource.
[101] FIG. 3 illustrates an architecture of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with some embodiments of the present disclosure.
[102] Referring to FIG. 3, the proposed system 102 can be static, installed, or movable from one location to another. The proposed system 102 can comprise the intelligent interactive display 302, a reward dispatching unit 304, and a repository 306. The intelligent interactive display 302 can be configured to establish a connection between the system 102 and at least one of the users 110 and the computing device 108 (a robot or a drone) to enable a first validation, perform a Dynamic Digital Reflection (DDR) scanning by the one or more users to enable a second validation, and verify an identity of the one or more users and authenticate the proximity time and space of the one or more users based on the third validation. The reward dispatching unit 304 can be configured to dispatch, disburse, print a message, or execute the instruction shared by the system 102 based on the connection with the user 110. The system 102 includes resources such as physical products, soft copies, hard copies, or information to access a resource. The repository 306 can be a storage medium which includes, but is not limited to, instructions, messages, event details (hosted and non-hosted), social media engagements, reward programs, rewards (coupons, offers, gifts), commands, subscriptions, e-commerce, advertisements, games, VR/AR/mixed reality, scheduled events, event crowd-source engagement, published/subscribed rewards, gifts, incentives, offers, customer continuity, customer loyalty programs, authorised product purchases, personalised e-commerce, virtual, physical, and virtual immersive experiences at physical locations, access to portable gift boxes such as chocolates, and on-demand or programmed tickets or access cards.
[103] In an embodiment, the reward distribution may include, but is not limited to, a digital mode, a physical mode, distribution at the user device, distribution via an associated dispatching unit from a computing device, distribution via a network device unit, distribution through the instructions provided by the computing device or user device, and the like.
[104] In an embodiment, the system 102 can include the AI engine 104 to perform validation of the one or more users 110 by using liveness detection, i.e., validating whether a person is physically present at a particular location.
[105] In an embodiment, during initial registration, the system 102 can capture an image of the user's face and the unique identifying objects. The live feed from the computing device 108 is fed into the AI validation module (not shown in the figure). Firstly, a face detection AI module can be used to identify the user's identity. Further, an object detection module, transfer-learnt on the captured unique object, can locate the object and its position in the video feed. For instance, the user may be directed to stand to the right-hand side of a statue (the unique object). Thus, the object detection module, in combination with the face detection module, can be used to validate the physical presence of a person at a physical location. Further, in order to enable liveliness detection, mathematical modelling is used to detect eye winks; the number of white pixels increases when the eyes are open and decreases when they are closed. In addition, deep learning can be used, but a numerical solution is better suited for edge devices such as mobile phones.
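As a non-limiting illustration, the following Python sketch combines an off-the-shelf face detector with a simple relative-position check against the unique object; the Haar cascade detector and the "stand to the right of the object" rule are assumptions for illustration, and the transfer-learnt object detection module of the disclosure is not reproduced here.

```python
# Illustrative sketch only: face detection plus a relative-position check for
# validating physical presence next to a unique object.
import cv2


def physically_present(frame, object_bbox) -> bool:
    """frame: BGR image from the live feed; object_bbox: (x, y, w, h) of the
    unique object as located by the object detection module."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                        # no face detected: presence not validated
    obj_x, _, obj_w, _ = object_bbox
    # Require at least one detected face to the right of the unique object,
    # matching the instruction given to the user.
    return any(fx > obj_x + obj_w for (fx, fy, fw, fh) in faces)
```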
[106] In an embodiment, winks are detected numerically from signals by calculating the frequency of pixel intensities in the range 0–255 (a histogram) and computing the spread of non-zero pixels in the histogram. When the eye is closed, the spread takes a sudden dip. An inverse sigmoid curve is fitted at the tail end of this signal; if a successful fit is found, a 'wink' event is declared. The curve takes the shape of an 'S' when the eye is opened for a few seconds, which can be parameterized using a sigmoid function as in equation (1) below. Hence, an eye wink takes the form of an inverse sigmoid function. A parametric curve-fit algorithm can solve the resulting nonlinear least-squares problem.
S(t) = L / (1 + e^(-k(t - t0)))   ------- Equation (1)
where L is the maximum value of the curve (the fully open-eye spread), k is the steepness of the curve, and t0 is the time of the curve's midpoint; an eye wink corresponds to the time-reversed (inverse) form of this curve.
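A non-limiting Python sketch of the numerical wink detector described above is given below: the spread of non-zero histogram bins is tracked over time and an inverse form of equation (1) is fitted to the tail of the signal; the window length, initial parameters, and drop threshold are assumptions for illustration.

```python
# Hedged sketch of the histogram-spread wink detector; parameter values are
# assumptions, not the disclosed configuration.
import numpy as np
from scipy.optimize import curve_fit


def inverse_sigmoid(t, L, k, t0):
    # Mirror of equation (1): value falls from L towards 0 as the eye closes.
    return L / (1.0 + np.exp(k * (t - t0)))


def histogram_spread(eye_region: np.ndarray) -> float:
    """Spread of non-zero bins in the 0-255 intensity histogram of the eye crop."""
    hist, _ = np.histogram(eye_region, bins=256, range=(0, 255))
    nonzero = np.nonzero(hist)[0]
    return float(nonzero[-1] - nonzero[0]) if nonzero.size else 0.0


def detect_wink(spread_signal: np.ndarray, min_drop: float = 0.4) -> bool:
    """Fit the inverse of equation (1) to the tail of the spread signal."""
    tail = spread_signal[-30:]                       # assumed ~1 s at 30 fps
    t = np.arange(tail.size, dtype=float)
    try:
        popt, _ = curve_fit(inverse_sigmoid, t, tail,
                            p0=[max(tail.max(), 1.0), 1.0, tail.size / 2.0],
                            maxfev=2000)
    except RuntimeError:
        return False                                 # nonlinear least squares failed
    L_fit, k_fit, _t0 = popt
    # A wink is a sudden dip: positive steepness and a sufficiently large drop.
    return k_fit > 0 and L_fit >= min_drop * max(tail.max(), 1.0)
```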
[107] In an embodiment, an Eye Aspect Ratio (EAR) is computed as given in equation (2) below. The EAR and the points plotted around the eye of the user are known in the art. Equation (2) can also be used to compute and determine eye blinks to confirm the liveliness of the user 110.
EAR = (||p2 - p6|| + ||p3 - p5||) / (2 ||p1 - p4||)   ------- Equation (2)
where p1, …, p6 are the landmark points around the eye; p2, p3, p5, and p6 measure the vertical eye opening, and p1 and p4 span the horizontal eye corners.
The landmark points, such as p1, p2, …, pn, can be obtained from a deep learning model. Object tracking and video tracking algorithms in computer vision, such as MIL, KCF, CSRT, MedianFlow, and optical flow, or methods such as DeepSORT, can be utilized to find the correlation between the tracking information of the unique object and that of the entire video. If the flow of the video is not correlated with the flow of the unique object, then it is a potential fraud.
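The EAR check of equation (2) can be sketched, in a non-limiting manner, as follows, assuming the six standard eye landmarks p1 to p6 are supplied by a facial-landmark model; the blink threshold and frame count are assumptions for illustration.

```python
# Minimal sketch of the EAR blink check from equation (2).
import numpy as np


def eye_aspect_ratio(p: np.ndarray) -> float:
    """p is a (6, 2) array of eye landmarks ordered p1..p6 as in equation (2)."""
    vertical_1 = np.linalg.norm(p[1] - p[5])   # ||p2 - p6||
    vertical_2 = np.linalg.norm(p[2] - p[4])   # ||p3 - p5||
    horizontal = np.linalg.norm(p[0] - p[3])   # ||p1 - p4||
    return (vertical_1 + vertical_2) / (2.0 * horizontal)


def is_blink(ear_values, threshold: float = 0.21, consecutive_frames: int = 2) -> bool:
    """Declare a blink when the EAR stays below the threshold for a few frames."""
    below = 0
    for ear in ear_values:
        below = below + 1 if ear < threshold else 0
        if below >= consecutive_frames:
            return True
    return False
```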
[108] In an embodiment, the AI engine 104 can be configured to enable a goal-driven program for reward recommendation and disbursal based on configured goals, programmatic goals, or AI-recommended goals. The AI engine 104 can be configured to enable AI engagement recommendation and discovery based on the various user 110 and system 102 connection patterns. The system 102 can be connected to a network 106 where user data is learned by the system 102 to research user behaviours, users' buying habits, and brand engagement, to deliver AI-customised engagements that bring more contextually relevant rewards, advertising, and shopping experiences.
[109] In an embodiment, the AI engine 104 can be configured to enable AI-driven quizzes and games using hand gestures, voice emotions, or combinations thereof. Sign language symbols can be classified using a Convolutional Neural Network (CNN), as known in the art. After successful training of the CNN model, the corresponding alphabet of a sign language symbol is predicted. The classification performance of the model can be evaluated using non-normalized and normalized confusion matrices, and the classification accuracy score of the CNN model can be obtained for this task.
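A non-limiting sketch of the kind of CNN classifier referred to above is given below; the architecture, the 28x28 grayscale input size, and the number of classes are assumptions for illustration rather than the disclosed model.

```python
# Illustrative CNN sketch for sign-language symbol classification.
from tensorflow.keras import layers, models


def build_sign_classifier(num_classes: int = 26):
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one output per alphabet
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# After model.fit(...) on labelled hand-gesture images, a confusion matrix and
# accuracy score (e.g. sklearn.metrics.confusion_matrix / accuracy_score) can be
# used to evaluate the classifier as described above.
```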
[110] In an embodiment, the AI engine 104 can be configured to enable AI-based prediction of user engagement or resource engagement, or both, at a location, to create on-demand traffic for advertisements, resources, and sales. The system 102 can be connected to a network where the system 102 can learn about the users' data and accessible resources to analyse their involvement. Analytics for crowds, sales, and stock are used to drive digital supply chain management and produce quick, personalised e-commerce sales while minimising costs and time.
[111] In an embodiment, the AI engine 104 can be configured to enable a human-less shop, where the system 102 can be linked to a network where it has access to an electronic commerce repository that can be physically or virtually tracked, watched, and browsed through another digital device, the same device, or another connected device. The system 102 learns from the data to analyse user engagement, crowd analytics, and sale analytics to drive the supply chain management of e-commerce products and generate the best tailored resource availability while optimizing time and cost.
[112] In an embodiment, the AI engine 104 can be configured to enable a virtual shop, an augmented reality shop, or a metaverse-shop-like platform, with booking from anywhere and disbursement at the checkout. The system 102 can be linked to a network where it has access to an electronic commerce repository that may be viewed or browsed through an external digital system, the same system, or another connected system. The data from one or more systems/apparatus allows the user or system at a remote location to explore and replicate a virtual system. Using AR/VR/hologram technology and display-generating equipment, the user can visit a virtual store when they are at home, on the go, or in a nearby area.
[113] In an embodiment, the AI engine 104 can be configured to enable AI-based targeted advertisements, using customer data such as demographics, age, etc., to show advertisements of interest. Machine learning techniques are used to identify the potential KPIs that can be used to select advertisements the user 110 might like, and to segment and identify the needs of the customers, which helps determine how an advertisement might meet those needs, thereby optimizing relevance for the user 110.
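As a non-limiting illustration, customer segmentation for targeted advertisements could be sketched as follows; the feature set, the number of clusters, and the segment-to-advertisement mapping are assumptions for illustration only.

```python
# Illustrative sketch: segment customers from demographic features with k-means
# and map each segment to an advertisement category.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: [age, visits_per_month, average_spend]
customers = np.array([
    [22, 8, 35.0],
    [45, 2, 120.0],
    [31, 5, 60.0],
    [58, 1, 200.0],
])

scaler = StandardScaler()
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    scaler.fit_transform(customers))

# Each segment is then mapped to the advertisement category most likely to match
# its needs (the mapping itself would come from campaign analytics).
segment_to_ad = {0: "value_offers", 1: "premium_products"}
personalised_ads = [segment_to_ad[s] for s in segments]
```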
[114] In an embodiment, the AI engine 104 can be configured to enable AI-based personalized suggestions for offers or gifts that suit the customer's needs and likes. Individual customer needs are identified using machine learning algorithms to provide personalized offers or gifts that benefit the respective customer.
[115] In an embodiment, the AI engine 104 can be configured to enable generative AI to display advertisement messages that are personalized for the customer, i.e., personalized advertisement text that matches the customer's likes and interests. This is achieved using state-of-the-art natural language processing algorithms.
[116] In an embodiment, the AI engine 104 can be configured to enable AI-based contextual advertisements, using artificial intelligence algorithms to identify the advertisement that suits the current scenario or context. For example, if an advertisement is to be shown in a restaurant, the artificial intelligence algorithm ensures that it matches that context.
[117] In an embodiment, the AI engine 104 can be configured to enable reinforcement-learning-based reward analysis and training for the generated advertisements. The user responses to all advertisements shown by the artificial intelligence algorithm are analysed, and reinforcement learning algorithms are used to retrain the other algorithms mentioned above.
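A non-limiting sketch of reward-driven selection and retraining, modelled here as a simple epsilon-greedy bandit rather than the full reinforcement learning pipeline, is given below; the reward definition and exploration rate are assumptions for illustration.

```python
# Hedged sketch: user responses to displayed advertisements update per-ad value
# estimates that the selector then exploits.
import random


class AdBandit:
    def __init__(self, ad_ids, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {ad: 0 for ad in ad_ids}
        self.values = {ad: 0.0 for ad in ad_ids}   # running mean reward per ad

    def select(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.values))          # explore
        return max(self.values, key=self.values.get)         # exploit

    def update(self, ad_id: str, reward: float) -> None:
        """Reward could be a click, a scan, or a completed DDR engagement."""
        self.counts[ad_id] += 1
        n = self.counts[ad_id]
        self.values[ad_id] += (reward - self.values[ad_id]) / n
```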
[118] FIGs. 4A-4D illustrate a schematic representation of the proposed system providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[119] In an embodiment, FIG. 4A depicts the intelligent interactive display 302 including a video and the unique identifier (QR code) at the bottom. The user 110 can scan the QR code to connect to the system 102, establish the first connection, and enable the first validation.
[120] In an embodiment, FIG. 4B depicts the intelligent interactive display 302 with the user 110 connected to the system 102, with or without a confirmation. When a link is made, the system 102 creates the at least one reserved screen zone to enable the DDR on the receiving end and enable the second validation.
[121] In an embodiment, FIG 4C depicts the intelligent interactive display 302 including the user 110 receiving a code regarding the available screen zone and the screen zone changes to display a DDR validation display element. The user 110 will find allocated screen zone to complete the DDR validation and enable the third validation.
[122] In an embodiment, FIG 4D depicts the intelligent interactive display 302 including the user 110 performing DDR on the at least one reserved screen zone to perform final authentication on physical and proximity validation. Once the authentication is completed then allow to access the repository 306.
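As a non-limiting, illustrative sketch, the three-step validation flow depicted in FIGs. 4A-4D can be summarized as a simple state machine; the class, stage, and method names below are hypothetical, and the actual DDR checks are stubbed out for brevity.

```python
# Illustrative state machine for the three-step validation of FIGs. 4A-4D.
from enum import Enum, auto

class Stage(Enum):
    AWAITING_QR = auto()        # FIG. 4A: QR code shown at the bottom of the display
    AWAITING_DDR = auto()       # FIG. 4B: reserved screen zone created for DDR
    AWAITING_COMMANDS = auto()  # FIG. 4C: user completes DDR in the allocated zone
    AUTHENTICATED = auto()      # FIG. 4D: proximity confirmed, repository accessible

class ValidationSession:
    def __init__(self):
        self.stage = Stage.AWAITING_QR

    def first_validation(self, qr_payload: str) -> None:
        if qr_payload:                       # stub: real system verifies the data packet
            self.stage = Stage.AWAITING_DDR

    def second_validation(self, ddr_scan_ok: bool) -> None:
        if self.stage is Stage.AWAITING_DDR and ddr_scan_ok:
            self.stage = Stage.AWAITING_COMMANDS

    def third_validation(self, commands_ok: bool) -> None:
        if self.stage is Stage.AWAITING_COMMANDS and commands_ok:
            self.stage = Stage.AUTHENTICATED

session = ValidationSession()
session.first_validation("user-110-data-packet")
session.second_validation(ddr_scan_ok=True)
session.third_validation(commands_ok=True)
print(session.stage)    # Stage.AUTHENTICATED -> repository 306 may be accessed
```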
[123] FIG. 5 illustrates an exemplary flow diagram of the proposed method providing rewards for personalized user engagements using an intelligent interactive display, in accordance with an embodiment of the present disclosure.
[124] In an embodiment, the proposed method 500 provides rewards for personalized user engagements using an intelligent interactive display. At step 502, the method comprises receiving, by the system, at least one data packet from one or more users associated with one or more computing devices by scanning at least one unique identifier displayed on the intelligent interactive display for enabling a first validation, wherein the at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image. At step 504, the method comprises generating and displaying at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display of the system upon the first successful validation as in step 502, and performing a Dynamic Digital Reflection (DDR) scanning by the one or more users for enabling a second validation.
[125] Further, at step 506, the method comprises synchronizing, by the system, a display of the one or more computing devices and the at least one reserved screen zone based on the Dynamic Digital Reflection (DDR), and executing one or more commands to enable a third validation, wherein the one or more commands comprise at least one of a zoom-in, a zoom-out, a rotate screen, a directional movement, and any instruction that can be converted into a DDR command. At step 508, the method comprises verifying an identity of the one or more users, authenticating the proximity time and space of the one or more users based on the third validation as in step 506, and providing rewards, instructions, or messages for personalized user engagements using the intelligent interactive display.
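As a further non-limiting, illustrative sketch, step 506 can be pictured as replaying the DDR commands issued on the user's computing device onto the reserved screen zone; the class and method names below are hypothetical, and the rendering is represented only by console output.

```python
# Illustrative sketch of step 506: mirroring DDR commands (zoom-in, zoom-out,
# rotate) from the user's device onto the reserved screen zone.
class ReservedScreenZone:
    def __init__(self, zone_id: str):
        self.zone_id = zone_id
        self.zoom = 1.0
        self.rotation = 0

    def apply(self, command: str) -> None:
        if command == "zoom-in":
            self.zoom *= 1.25
        elif command == "zoom-out":
            self.zoom *= 0.8
        elif command == "rotate":
            self.rotation = (self.rotation + 90) % 360
        print(f"zone {self.zone_id}: zoom={self.zoom:.2f}, rotation={self.rotation}")

def synchronize(device_commands, zone: ReservedScreenZone) -> bool:
    """Replay the device's DDR commands on the zone; success enables the third validation."""
    for cmd in device_commands:
        zone.apply(cmd)
    return True

zone = ReservedScreenZone("zone-1")
third_validation_enabled = synchronize(["zoom-in", "rotate", "zoom-out"], zone)
```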
[126] FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present invention can be utilized in accordance with embodiments of the present disclosure.
[127] Referring to FIG. 6, the computer system includes an external storage device 610, a bus 620, a main memory 630, a read only memory 640, a mass storage device 650, a communication port 660, and a processor 670. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 670 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor 670 may include various modules associated with embodiments of the present invention. Communication port 660 can be any of an RS-232 port for use with a modem-based dial-up connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 660 may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
[128] In an embodiment, the memory 630 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read only memory 640 can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor 670. Mass storage device 650 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g., those available from Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[129] In an embodiment, the bus 620 communicatively couples processor(s) 670 with the other memory, storage, and communication blocks. Bus 620 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor 670 to the software system.
[130] In another embodiment, operator and administrative interfaces, e.g., a display, a keyboard, and a cursor control device, may also be coupled to bus 620 to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 660. External storage device 610 can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), or Digital Video Disk - Read Only Memory (DVD-ROM). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[131] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[132] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[133] It is to be appreciated by a person skilled in the art that while various embodiments of the present disclosure have been elaborated for a system and method for providing rewards for personalized user engagements using an intelligent interactive display, incorporating an effective mechanism to dispatch digital rewards, the teachings of the present disclosure are equally applicable to other types of applications and to other industries as well, and all such embodiments are well within the scope of the present disclosure without any limitation.
[134] Accordingly, the present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display.
[135] Moreover, in interpreting the specification, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, ..., and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[136] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof. The scope of the disclosure is determined by the claims that follow. The disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[137] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display.
[138] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display, securing the resources and assets for the user of the profile and conducting proximity validation.
[139] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display, a self-provisioning ecommerce system, and on-demand supplies management.
[140] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display, which supports time-based, demand-based, programmable, behavioural, personalised, and AI-based advertisements.
[141] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display, which enables a contactless authentication, transaction, gift, and product dispatching system.
[142] The present disclosure provides a system and method providing rewards for personalized user engagements using an intelligent interactive display, which provides customized rewards that fit the preferences of the user.
Claims:
1. A system (102) providing rewards for personalized user engagements using an intelligent interactive display (302), the system (102) comprises:
one or more processors (202); and
a memory (204) coupled to the one or more processors (202), wherein said memory (204) stores instructions which when executed by the one or more processors (202) cause the system (102) to:
receive at least one data packet from at least one of one or more users (110) and one or more computing devices (108) by scanning at least one unique identifier displayed on the intelligent interactive display (302) to enable a first validation, wherein the at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image;
generate and display at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display (302) upon the first successful validation, and perform a Dynamic Digital Reflection (DDR) scanning by the one or more users (110) to enable a second validation;
synchronize, based on the Dynamic Digital Reflection (DDR), a display of the one or more computing devices and the at least one reserved screen zone, and execute one or more DDR commands to enable a third validation, wherein the one or more DDR commands comprises at least one of a zoom-in, a zoom-out, and a rotate screen; and
verify an identity of the one or more users (110) and authenticate proximity time and space of the one or more users based on the third validation, and provide rewards for personalized user engagements using the intelligent interactive display (302).
2. The system (102) as claimed in claim 1, wherein the one or more photorealistic objects comprises at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
3. The system (102) as claimed in claim 1, wherein the system (102) is configured to:
automatically create a zone separator in the at least one reserved screen zone based on the one or more users (110) simultaneously operating the intelligent interactive display (302) to connect and execute the DDR.
4. The system (102) as claimed in claim 3, wherein the zone separator is configured to:
assign, specify, and monitor the at least one reserved screen zone based on the termination of the DDR of the respective one or more users (110), wherein the zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
5. The system (102) as claimed in claim 1, wherein the system (102) is configured to:
validate, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users (110), wherein the one or more parameters comprises at least one of a face detection, a user location, a user eye wink, and an Eye Aspect Ratio (EAR).
6. A method providing rewards for personalized user engagements by using intelligent interactive display (302) of a system (102), the method comprises:
receiving, by the system (102), at least one data packet from at least one of one or more users (110) and one or more computing devices (108) by scanning at least one unique identifier displayed on the intelligent interactive display (302) for enabling a first validation, wherein the at least one unique identifier comprises at least one of a Quick Response (QR) code, a graphical image, and a 3D image;
generating and displaying at least one reserved screen zone along with one or more photorealistic objects on the intelligent interactive display (302) of the system (102) upon the first successful validation, and performing a Dynamic Digital Reflection (DDR) scanning by the one or more users (110) for enabling a second validation;
synchronizing, by the system (102), a display of the one or more computing devices and the at least one reserved screen zone based on the Dynamic Digital Reflection (DDR), and executing one or more DDR commands to enable a third validation, wherein the one or more DDR commands comprises at least one of a zoom-in, a zoom-out, and a rotate screen; and
verifying an identity of the one or more users (110), authenticating proximity time and space of the one or more users based on the third validation, and providing rewards for personalized user engagements using the intelligent interactive display (302).
7. The method as claimed in claim 6, wherein the one or more photorealistic objects comprises at least one of a Quick Response (QR) code, a Virtual Reality (VR), an Augmented Reality (AR), and a hologram.
8. The method as claimed in claim 6, wherein the method comprises:
automatically creating a zone separator in the at least one reserved screen zone based on the one or more users (110) simultaneously operating the intelligent interactive display (302) by connecting to execute the DDR.
9. The method as claimed in claim 8, wherein the zone separator comprises:
assigning, specifying, and monitoring the at least one reserved screen zone based on the termination of the DDR of the respective one or more users (110), wherein the zone separator comprises at least one of a grid line, a color separation, a fixed dimension, and a transformation pattern.
10. The method as claimed in claim 6, wherein the method comprises:
validating, by an Artificial Intelligence (AI) engine, the presence of the one or more users based on one or more parameters pertaining to liveliness detection of the one or more users (110), wherein the one or more parameters comprises at least one of a face detection, a user location, a user eye wink, and an Eye Aspect Ratio (EAR).
| # | Name | Date |
|---|---|---|
| 1 | 202341017877-STATEMENT OF UNDERTAKING (FORM 3) [16-03-2023(online)].pdf | 2023-03-16 |
| 2 | 202341017877-POWER OF AUTHORITY [16-03-2023(online)].pdf | 2023-03-16 |
| 3 | 202341017877-FORM FOR STARTUP [16-03-2023(online)].pdf | 2023-03-16 |
| 4 | 202341017877-FORM FOR SMALL ENTITY(FORM-28) [16-03-2023(online)].pdf | 2023-03-16 |
| 5 | 202341017877-FORM 1 [16-03-2023(online)].pdf | 2023-03-16 |
| 6 | 202341017877-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [16-03-2023(online)].pdf | 2023-03-16 |
| 7 | 202341017877-EVIDENCE FOR REGISTRATION UNDER SSI [16-03-2023(online)].pdf | 2023-03-16 |
| 8 | 202341017877-DRAWINGS [16-03-2023(online)].pdf | 2023-03-16 |
| 9 | 202341017877-DECLARATION OF INVENTORSHIP (FORM 5) [16-03-2023(online)].pdf | 2023-03-16 |
| 10 | 202341017877-COMPLETE SPECIFICATION [16-03-2023(online)].pdf | 2023-03-16 |
| 11 | 202341017877-ENDORSEMENT BY INVENTORS [17-03-2023(online)].pdf | 2023-03-17 |
| 12 | 202341017877-Correspondence_SIPP Scheme_03-04-2023.pdf | 2023-04-03 |
| 13 | 202341017877-PA [22-12-2023(online)].pdf | 2023-12-22 |
| 14 | 202341017877-FORM FOR STARTUP [22-12-2023(online)].pdf | 2023-12-22 |
| 15 | 202341017877-EVIDENCE FOR REGISTRATION UNDER SSI [22-12-2023(online)].pdf | 2023-12-22 |
| 16 | 202341017877-ASSIGNMENT DOCUMENTS [22-12-2023(online)].pdf | 2023-12-22 |
| 17 | 202341017877-8(i)-Substitution-Change Of Applicant - Form 6 [22-12-2023(online)].pdf | 2023-12-22 |