
A System And A Method To Provide An Interactive User Experience During A Theatre Session

Abstract: A system (100) to provide an interactive user experience during a theatre session is provided. A login module (120) enables a plurality of users to register for the theatre session automatically by allowing the plurality of users to scan a quick response code. An interactivity module (130) allows the plurality of users to cast votes for a plurality of options corresponding to a plurality of characters displayed on a user device. An intensity-based interactivity module (140) enables the plurality of users to jolt the user device or stroke the screen of the user device constantly. A sound-based interactivity module (150) enables the plurality of users to generate a vocal response via a microphone embedded in the user device. An artificial intelligence-based interactivity module (160) utilizes a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users. FIG. 1


Patent Information

Application #
Filing Date
29 November 2023
Publication Number
22/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

VIREZA REVOLUTION PRIVATE LIMITED
49/1, S.KARIAPPA ROAD, BASAVANAGUDI, BANGALORE, 560004, KARNATAKA, INDIA

Inventors

1. ARJUN N R
VIREZA REVOLUTION PRIVATE LIMITED; 49/1 S.KARIAPPA ROAD, BASAVANAGUDI BANGALORE KARNATAKA INDIA 560004

Specification

EARLIEST PRIORITY DATE:
This Application claims priority from a provisional patent application filed in India having Patent Application No. 202341037081, filed on November 29, 2023, and titled “SYSTEM AND METHOD FOR REAL-TIME AUDIENCE INTERACTION WITH A MEDIA”.
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to the field of media and, more particularly, to a system and a method to provide an interactive user experience during a theatre session.
BACKGROUND
[0002] Films have continuously evolved since their origin. Magic lantern shows may be considered a predecessor of films. In a magic lantern show, a picture painted on glass is projected by a lantern (just a candle and a lens) onto a wall. The lantern is considered an early version of today’s projectors. The introduction of cinematography enabled the recording and projection of black-and-white images without sound. Film companies started sprouting throughout the world after the introduction of cinematography, understanding the economic potential of the motion picture industry. Further, the arrival of panning cameras enabled the shooting of panoramas. Film editing techniques, animation techniques, and color films evolved as time progressed. A breakthrough in the evolution of films came with the introduction of film with synchronized sound recorded on a disc. Advancements in film technology enabled the recording of sound on the film itself, paving the way for talkies. The desire for wartime propaganda against the opposition created a renaissance in the film industry during the Second World War. Television and the video cassette recorder (VCR) made films very popular. The world later witnessed the development of commercially independent films. The seamless connectivity offered by the internet paved the way for many streaming platforms and over-the-top (OTT) releases of films.
[0003] The cost of watching a film in a theatre is considerably higher than the subscription charge of an over-the-top (OTT) platform. The advertisements popping up during shows are equally irritating in theatres and on OTT platforms. Pre-scripted programs lack the ability to interact with users, making them less engaging.
[0004] Further, the historical evolution of film, from magic lantern shows to black-and-white moving pictures with sound, is essential and should remain the foundation. However, the contemporary context must be included to address the current entertainment landscape. In previous generations, users had limited options for fresh, cutting-edge video entertainment. Theatres were the primary destination for new content, as television channels in many countries often aired old movies, serials, or cartoons for younger audiences. This made theatres the epicenter of entertainment for people across demographics.
[0005] Today, the scenario has changed dramatically. With the proliferation of social media, YouTube, and OTT platforms, audiences spend 3-4 hours daily consuming readily available content. Films often premiere on streaming platforms within weeks of their theatrical release. This abundance of video entertainment has led to overexposure, making traditional viewing experiences less engaging and impactful than before.
[0006] Hence, there is a need for an improved system to provide an interactive user experience which addresses the aforementioned issue(s).
OBJECTIVE OF THE INVENTION
[0007] An objective of the present invention is to enable users to interact with an on-screen event in a theatre session, after scanning a quick response code placed on the corresponding seat, by casting a vote, jolting or tapping the user device, providing a vocal response, and submitting personal information, within a predetermined time prior to a scene, thereby reflecting the preferences of the users in the on-screen event.
[0008] Another objective of the invention is to provide an advertising platform to offer creative possibilities for branding professionals to build brand awareness and affinity without interfering with the user’s experience.
BRIEF DESCRIPTION
[0009] In accordance with an embodiment of the present disclosure, a system to provide an interactive user experience during a theatre session is provided. The system includes a processing subsystem hosted on a server, wherein the processing subsystem is configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes a login module configured to enable a plurality of users to register for the theatre session automatically by allowing the plurality of users to scan a quick response code placed on a plurality of corresponding seats booked for the theatre session. The processing subsystem also includes an interactivity module operatively coupled to the login module, wherein the interactivity module is configured to allow the plurality of users to cast votes for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users. The interactivity module is configured to enable the plurality of users to view an interactive segment corresponding to the option that received the maximum votes. Further, the processing subsystem includes an intensity-based interactivity module operatively coupled to the interactivity module, wherein the intensity-based interactivity module is configured to enable the plurality of users to jolt the user device or stroke the screen of the user device constantly for a predetermined time to detect the intensity of engagement of the plurality of users. The intensity-based interactivity module is configured to count the jolts detected by an accelerometer and equate the jolts to strokes to determine the intensity of engagement of the plurality of users. The intensity-based interactivity module is also configured to enable the plurality of users to view the effect of their engagement on an on-screen event based on the intensity of the actions performed on the user device.
Further, the processing subsystem includes a sound-based interactivity module operatively coupled to the intensity-based interactivity module, wherein the sound-based interactivity module is configured to enable the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time, capturing the vocal response in real-time to influence the content displayed in the on-screen event. The sound-based interactivity module is configured to measure the decibel level of the vocal response. Further, the sound-based interactivity module is configured to enable the plurality of users to view the interactive segment in the on-screen event if the decibel level reaches a predetermined threshold value. Furthermore, the processing subsystem includes an artificial intelligence-based interactivity module operatively coupled to the sound-based interactivity module, wherein the artificial intelligence-based interactivity module is configured to utilize a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences.
[0010] In accordance with another embodiment of the present disclosure, a method to provide an interactive user experience during a theatre session is provided. The method includes enabling, by a login module, a plurality of users to register for the theatre session automatically by allowing the plurality of users to scan a quick response code placed on a plurality of corresponding seats booked for the theatre session. The method also includes allowing, by an interactivity module, the plurality of users to cast votes for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users. Further, the method includes enabling, by the interactivity module, the plurality of users to view an interactive segment corresponding to the option that received the maximum votes. Further, the method also includes enabling, by an intensity-based interactivity module, the plurality of users to jolt the user device or stroke the screen of the user device constantly for a predetermined time to detect the intensity of engagement of the plurality of users. Furthermore, the method includes counting, by the intensity-based interactivity module, the jolts detected by an accelerometer and equating the jolts to strokes to determine the intensity of engagement of the plurality of users. Moreover, the method includes enabling, by the intensity-based interactivity module, the plurality of users to view the effect of their engagement on an on-screen event based on the intensity of the actions performed on the user device. Additionally, the method includes enabling, by a sound-based interactivity module, the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time, capturing the vocal response in real-time to influence the content displayed in the on-screen event.
The method includes measuring, by the sound-based interactivity module, the decibel level of the vocal response. The method also includes enabling, by the sound-based interactivity module, the plurality of users to view the interactive segment in the on-screen event if the decibel level reaches a predetermined threshold value. The method also includes utilizing, by an artificial intelligence-based interactivity module, a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences.
[0011] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0013] FIG. 1 is a block diagram representation of a system to provide an interactive user experience during a theatre session in accordance with an embodiment of the present disclosure;
[0014] FIG. 2 is a block diagram of an exemplary embodiment of system to provide an interactive user experience during a theatre session of FIG. 1 in accordance with an embodiment of the present disclosure;
[0015] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure;
[0016] FIG. 4 (a) illustrates a flow chart representing the steps involved in a method to provide an interactive user experience during a theatre session in accordance with an embodiment of the present disclosure; and
[0017] FIG. 4 (b) illustrates continued steps of the method of FIG. 4 (a) in accordance with an embodiment of the present disclosure.
[0018] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0019] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0020] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or subsystems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0021] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0022] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0023] Embodiments of the present disclosure relate to a system to provide an interactive user experience during a theatre session. The system includes a processing subsystem hosted on a server, wherein the processing subsystem is configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes a login module configured to enable a plurality of users to register for the theatre session automatically by allowing the plurality of users to scan a quick response code placed on a plurality of corresponding seats booked for the theatre session. The processing subsystem also includes an interactivity module operatively coupled to the login module, wherein the interactivity module is configured to allow the plurality of users to cast votes for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users. The interactivity module is configured to enable the plurality of users to view an interactive segment corresponding to the option that received the maximum votes. Further, the processing subsystem includes an intensity-based interactivity module operatively coupled to the interactivity module, wherein the intensity-based interactivity module is configured to enable the plurality of users to jolt the user device or stroke the screen of the user device constantly for a predetermined time to detect the intensity of engagement of the plurality of users. The intensity-based interactivity module is configured to count the jolts detected by an accelerometer and equate the jolts to strokes to determine the intensity of engagement of the plurality of users. The intensity-based interactivity module is also configured to enable the plurality of users to view the effect of their engagement on an on-screen event based on the intensity of the actions performed on the user device.
Further, the processing subsystem includes a sound-based interactivity module operatively coupled to the intensity-based interactivity module, wherein the sound-based interactivity module is configured to enable the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time, capturing the vocal response in real-time to influence the content displayed in the on-screen event. The sound-based interactivity module is configured to measure the decibel level of the vocal response. Further, the sound-based interactivity module is configured to enable the plurality of users to view the interactive segment in the on-screen event if the decibel level reaches a predetermined threshold value. Furthermore, the processing subsystem includes an artificial intelligence-based interactivity module operatively coupled to the sound-based interactivity module, wherein the artificial intelligence-based interactivity module is configured to utilize a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences.
[0024] FIG. 1 is a block diagram representation of a system to provide an interactive user experience during a theatre session in accordance with an embodiment of the present disclosure. The system (100) includes a processing subsystem (105) hosted on a server (108). In one embodiment, the server (108) may include a cloud-based server. In another embodiment, parts of the server (108) may be a local server coupled to a first user device. The processing subsystem (105) is configured to execute on a network (115) to control bidirectional communications among a plurality of modules. In one example, the network (115) may be a private or public local area network (LAN) or wide area network (WAN), such as the Internet. In another embodiment, the network (115) may include both wired and wireless communications according to one or more standards and/or via one or more transport mediums. In one example, the network (115) may include wireless communications according to one of the 802.11 or Bluetooth specification sets, or another standard or proprietary wireless communication protocol. In yet another embodiment, the network (115) may also include communications over a terrestrial cellular network, including a global system for mobile communications (GSM), code division multiple access (CDMA), and/or enhanced data for global evolution (EDGE) network.
[0025] The processing subsystem (105) includes a login module (120), an interactivity module (130), an intensity-based interactivity module (140), a sound-based interactivity module (150), and an artificial intelligence-based interactivity module (160).
[0026] The login module (120) is configured to enable a plurality of users to register for the theatre session automatically by allowing the plurality of users to scan a quick response code placed on a plurality of corresponding seats booked for the theatre session, thereby admitting the plurality of users to a particular theatre session, such as a specific auditorium in a movie chain, before the film begins.
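The seat-scan registration described above can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation: the QR payload format ("theatre/session/seat") and the in-memory registry are assumptions, since the specification does not state how the quick response code is encoded.

```python
# Hypothetical sketch of QR-based session registration.
# Assumed payload format: "theatre/session/seat".

sessions = {}  # session_id -> set of registered seat numbers

def register_via_qr(qr_payload: str) -> dict:
    """Parse a hypothetical QR payload and register the seat to its session."""
    theatre, session_id, seat = qr_payload.split("/")
    sessions.setdefault(session_id, set()).add(seat)
    return {"theatre": theatre, "session": session_id, "seat": seat}

info = register_via_qr("AUD3/SHOW-1900/A12")
print(info["seat"])  # prints the registered seat: A12
```

In practice the payload would also carry authentication so a user cannot join a session they have not booked, but that detail is outside the disclosure.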
[0027] The interactivity module (130) is operatively coupled to the login module (120). The interactivity module (130) is configured to allow the plurality of users to cast votes for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users. Examples of the user device include, but are not limited to, a personal computer (PC), a mobile phone, a tablet device, a personal digital assistant (PDA), a smartphone, a laptop, and a pager. As used herein, the plurality of options refers to the kinds of activity the plurality of characters performs within the storyline. Typically, the plurality of options indicates a path, action, or behavior that the plurality of characters chooses based on an input provided by the plurality of users. The input includes, but is not limited to, the cast vote, a jolt or tap on the user device, the vocal response, the personal information, and the plurality of preferences.
Further, the plurality of users is also allowed to cast the vote even when a screen lock is activated on the user device.
[0029] The interactivity module (130) is also configured to enable the plurality of users to view an interactive segment corresponding to the option that received the maximum votes. For example, in the film, the storyline unfolds based on decisions made by the plurality of users. If the hero skips college, his friends invite him out for drinks, while the heroine calls him to join her for shopping. At this point, the maximum votes decide which option the hero chooses, directly influencing the storyline. Throughout the film, the various decisions made by the plurality of users make each screening unique.
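The branch selection in the example above reduces to picking the option with the maximum votes. The sketch below is illustrative only; the option labels are hypothetical, and ties (not addressed in the disclosure) fall to whichever option `max` encounters first.

```python
# Hypothetical sketch: the interactive segment shown corresponds to the
# option that received the maximum votes.

def select_branch(vote_counts: dict) -> str:
    """Return the storyline option with the most votes."""
    return max(vote_counts, key=vote_counts.get)

print(select_branch({"drinks with friends": 57, "shopping": 43}))
# -> drinks with friends
```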
[0030] The intensity-based interactivity module (140) is operatively coupled to the interactivity module (130). The intensity-based interactivity module (140) is configured to enable the plurality of users to jolt the user device or stroke the screen of the user device constantly for a predetermined time to detect the intensity of engagement of the plurality of users. The intensity-based interactivity module (140) is also configured to count the jolts detected by an accelerometer and equate the jolts to strokes to determine the intensity of engagement of the plurality of users. Further, the intensity-based interactivity module (140) is configured to enable the plurality of users to view the effect of their engagement on an on-screen event based on the intensity of the actions performed on the user device. For example, consider a dramatic battle scene in which the hero and a villain engage in a furious conflict, pushing each other back and forth. Prior to this scene, the user device vibrates or increases in brightness, indirectly prompting the plurality of users to tap on the user device for the predetermined time. Each tap increases the potency of the hero's assaults; thus, the more taps performed by the plurality of users, the more powerful and impactful the hero's punches become. As the plurality of users taps more, the hero's combat style improves, becoming more spectacular and dynamic.
[0031] For example, consider a predetermined time of 5 seconds, during which the user device starts vibrating prior to a scene. In an audience of 100 users, 700 to 1,000 taps or jolts trigger a light punch effect, 1,000 to 1,200 taps or jolts trigger a medium punch effect, and above 1,200 taps or jolts trigger a high-impact punch effect. Alternatively, the intensity of the plurality of users is measured through a cumulative average of taps per second during the interaction window instead of absolute thresholds. These thresholds and measurement methods ensure dynamic, real-time audience interaction.
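The tiered mapping above can be sketched directly from the stated figures. Note the source ranges overlap at exactly 1,000 taps; treating 1,000 as "medium" is an assumption made here, and the alternative taps-per-second measure is shown as a plain cumulative average.

```python
# Sketch of the tap/jolt tiers from the example (audience of ~100 users).
# Boundary at exactly 1,000 taps is ambiguous in the source; resolved to
# "medium" here as an assumption.

def punch_effect(total_taps: int) -> str:
    """Map the audience's aggregate tap/jolt count to a punch effect."""
    if total_taps > 1200:
        return "high-impact punch"
    if total_taps >= 1000:
        return "medium punch"
    if total_taps >= 700:
        return "light punch"
    return "no effect"

def taps_per_second(total_taps: int, window_seconds: float) -> float:
    """Alternative intensity measure: cumulative average over the window."""
    return total_taps / window_seconds

print(punch_effect(850))           # light punch
print(punch_effect(1500))          # high-impact punch
print(taps_per_second(1100, 5.0))  # 220.0
```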
[0032] The sound-based interactivity module (150) is operatively coupled to the intensity-based interactivity module (140). The sound-based interactivity module (150) is configured to enable the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time, capturing the vocal response in real-time to influence the content displayed in the on-screen event. The vocal response from the plurality of users includes at least one of a whistle or a shout.
[0033] The sound-based interactivity module (150) is also configured to measure the decibel level of the vocal response. The sound-based interactivity module (150) is configured to enable the plurality of users to view the interactive segment in the on-screen event if the decibel level reaches a predetermined threshold value. For example, if the decibel level rises above the predetermined threshold, the hero performs an outstanding, high-energy dance routine. If the decibel level remains below the predetermined threshold, indicating a quieter vocal response from the plurality of users, the hero performs a more conventional dance move.
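The decibel gate described above is a simple threshold comparison. In the sketch below the threshold value (85 dB) is an assumption, as is the RMS-to-decibel helper: real phone microphones report uncalibrated amplitudes, so a deployed system would need per-device calibration the disclosure does not specify.

```python
import math

THRESHOLD_DB = 85.0  # hypothetical predetermined threshold value

def rms_to_db(rms_amplitude: float, ref: float = 2e-5) -> float:
    """Convert an RMS amplitude to decibels relative to a reference.
    2e-5 Pa is the conventional dB SPL reference; actual microphone
    calibration is device-specific and outside this sketch."""
    return 20.0 * math.log10(max(rms_amplitude, 1e-12) / ref)

def select_segment(decibel_level: float) -> str:
    """Gate the interactive segment on the measured decibel level."""
    if decibel_level >= THRESHOLD_DB:
        return "high-energy dance routine"
    return "conventional dance move"

print(select_segment(92.0))  # high-energy dance routine
print(select_segment(60.0))  # conventional dance move
```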
[0034] It must be noted that the predetermined time is provided to the plurality of users prior to the scene, allowing the plurality of users to cast a vote, jolt or tap the user device, or provide a vocal response.
[0035] In one embodiment, the user device vibrates or increases its brightness, prompting the plurality of users to cast a vote, jolt or tap the user device, or provide a vocal response.
[0036] The artificial intelligence-based interactivity module (160) is operatively coupled to the sound-based interactivity module (150). The artificial intelligence-based interactivity module (160) is configured to utilize a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users. The personal information and the plurality of preferences include a favorite food, movies, a vacation spot, a drink, a name, an age, and a photo. The plurality of users is allowed to input the personal information and the plurality of preferences. For example, consider a scenario where a user named Suresh Kumar is watching the film. Before the film starts, the user is prompted to input the personal information and the plurality of preferences. For example, Suresh Kumar may input pizza as his favorite food, Bangkok as his favorite vacation spot, and beer as his favorite drink. Other users may have entered similar details about their preferences. Now, as the film begins, the hero enters a restaurant and the waiter asks what he’d like to order. The hero glances at the menu, then says with his trademark style, “Maybe I’ll have a pizza, just the way Suresh Kumar likes it.”
[0037] Typically, the plurality of preferences and the personal information is transmitted from the user device to the server (108). The server (108) randomly selects one or two users for each theatre session, and the selected details are sent via an application programming interface to a partner platform that specializes in real-time AI-generated video content. The partner platform generates a personalized video (e.g., the hero saying “Suresh Kumar” and “pizza”) and sends the personalized video back to the server (108). The personalized video is then downloaded and prepared in the backend system (a desktop application or a digital cinema server), thereby supporting real-time generation of thousands of unique videos for simultaneous playback across multiple theaters.
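The first half of this flow, randomly selecting one or two users per session and packaging their details for the partner API, can be sketched as follows. The payload field names and the profile schema are assumptions; the disclosure names no specific partner platform or request format.

```python
import json
import random

# Hedged sketch of the personalization flow: pick one or two registered
# users at random and serialize their details for a (hypothetical)
# partner video-generation API.

def build_personalization_request(session_id: str, profiles: list, k: int = 2) -> str:
    """Pick up to k users at random and serialize their details as JSON."""
    chosen = random.sample(profiles, k=min(k, len(profiles)))
    return json.dumps({
        "session": session_id,
        "users": [
            {"name": p["name"], "favorite_food": p["favorite_food"]}
            for p in chosen
        ],
    })

profiles = [
    {"name": "Suresh Kumar", "favorite_food": "pizza"},
    {"name": "Asha", "favorite_food": "dosa"},
    {"name": "Ravi", "favorite_food": "biryani"},
]
payload = build_personalization_request("show-42", profiles)
print(json.loads(payload)["session"])  # show-42
```

The returned JSON string stands in for whatever transport the real API uses; downloading and staging the generated video on the digital cinema server would happen downstream.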
[0038] FIG. 2 is a block diagram of an exemplary embodiment of a system to provide an interactive user experience during a theatre session in accordance with an embodiment of the present disclosure. The processing subsystem (105) includes a computation module (170) operatively coupled to the login module (120), the interactivity module (130), the intensity-based interactivity module (140), the sound-based interactivity module (150), and the artificial intelligence-based interactivity module (160). The computation module (170) is configured to perform computations based on the input received from the user device. The input includes the cast vote, the jolt or tap on the user device, the vocal response, the personal information, and the plurality of preferences. Typically, the computation module (170) performs the following:
• Vote aggregation – tallying the votes received from the plurality of users, calculating percentages, and determining the most popular option.
• Intensity measurement – based on the jolts or taps detected by the accelerometer, calculating the magnitude, frequency, or force of the movements to assess each user's level of interaction or involvement.
• Preference computation – computing the plurality of preferences of the plurality of users.
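The vote-aggregation step listed above (tally, percentages, most popular option) can be sketched with a standard counter; the vote labels are illustrative, not from the disclosure.

```python
from collections import Counter

# Sketch of the computation module's vote aggregation: tally the votes,
# calculate percentages, and determine the most popular option.

def vote_breakdown(votes: list):
    """Return (tally, percentages, winner) for a list of cast votes."""
    tally = Counter(votes)
    total = sum(tally.values())
    percentages = {opt: 100.0 * n / total for opt, n in tally.items()}
    winner = tally.most_common(1)[0][0]
    return tally, percentages, winner

tally, pct, winner = vote_breakdown(["party", "friends", "party", "party"])
print(winner)        # party
print(pct["party"])  # 75.0
```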
[0039] Further, the login module (120) is configured to communicate the input to a desktop application that functions as a video player capable of receiving the input from the computation module (170). The video player is configured to play local files in ultra-high definition, high definition, and Digital Cinema Package formats, and to shift between a plurality of scenes in the on-screen event based on the input received from the user device and the computations performed by the computation module (170).
[0040] Further, the interactive video platform serves as an advertising platform configured to offer creative possibilities for branding professionals to build brand awareness and affinity without interfering with the experience of the plurality of users.
[0041] In one embodiment, the interactive video platform is utilized in one or more applications. The applications include, but are not limited to, digital devices for remote interactivity, brand campaigns in physical or hybrid events, and corporate videos and events, allowing real-time voting and data collection for engagement analytics.
[0042] In an example, consider a scenario where the plurality of users (180) are watching a film in a theatre session (190). The plurality of users (180) is allowed to register through the login module (120) by scanning a QR code on the seat with a user device (475), thereby joining the current theatre session (190) and interacting with the current on-screen event. As the film progresses, the interactivity module (130) allows the plurality of users (180) to cast votes on a decision that the film’s character, say the hero, needs to make. For example, the hero might need to choose between two options, such as "go to the party" or "get together with friends." The plurality of users (180) casts votes on the user device (475), and all the votes are aggregated to determine which choice is most popular. Once the votes are counted, the film adapts to show the result of the option that received the maximum votes. Further, as the film progresses, the plurality of users (180) can engage through the intensity-based interactivity module (140), which uses the accelerometer in the user device (475) to detect how actively the plurality of users (180) are participating. For example, if the plurality of users (180) jolts the user device (475) or taps its screen enthusiastically for a predetermined time, the system (100) can measure the intensity and influence the on-screen event. Furthermore, through the sound-based interactivity module (150), the plurality of users (180) are allowed to respond with a vocal response, which is detected by the microphone of the user device (475) for the predetermined time. If the vocal response is loud enough, the system (100) might trigger an additional interactive element, such as a surprise event or a change in the storyline based on the sound's decibel level. Furthermore, the artificial intelligence-based interactivity module (160) personalizes the experience.
The user device (475) transmits the cast vote, the jolt or tap on the user device (475), the vocal response, the personal information and the plurality of preferences to a server (108). Moreover, the computation module (170) or the server (108) combines all these inputs together, and a result is sent to a back-end system (480) for execution of the inputs received from the user device (475). The back-end system (480) includes the following:
• Desktop computer (460) – The result from the server (108) is communicated to a desktop computer (460) running a proprietary desktop application. Typically, the desktop application works on both Windows and Mac platforms and determines the corresponding media clip to play.
• Digital Cinema Server (470) – The digital cinema server (470) receives the result from the desktop application and manages media content for playback.
• Theatre Projection System or the theatre session (190) – The digital cinema server (470) communicates with the theatre’s projection system to play the corresponding clip without interrupting the storyline.
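The back-end flow above (server result → desktop application → digital cinema server → projection) could be sketched as follows. This is a hypothetical stand-in: the class names, the clip table and the `.mxf` filenames are illustrative assumptions, not the proprietary implementation.

```python
class DesktopApp:
    """Stand-in for the proprietary desktop application: maps the
    aggregated result from the server (108) to a media clip."""
    CLIP_TABLE = {  # hypothetical mapping of results to clips
        "go to the party": "clip_party.mxf",
        "get-together with friends": "clip_friends.mxf",
    }

    def select_clip(self, result):
        return self.CLIP_TABLE[result]

class DigitalCinemaServer:
    """Stand-in for the digital cinema server (470): queues the clip
    for playback on the theatre projection system."""
    def __init__(self):
        self.playlist = []

    def queue(self, clip):
        self.playlist.append(clip)
        return f"queued {clip}"

def execute_result(result):
    # server (108) result -> desktop application -> cinema server
    clip = DesktopApp().select_clip(result)
    return DigitalCinemaServer().queue(clip)

print(execute_result("go to the party"))  # → queued clip_party.mxf
```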

[0043] Further, to address potential compatibility issues with a digital cinema server provider:
• Procure custom hardware from an original equipment manufacturer (OEM) and develop proprietary firmware.
• Transmit the result directly to the proprietary firmware of the digital cinema server (470).
• The firmware then communicates with the theater projection system or the theatre session (190) to play the corresponding clip.
[0044] FIG. 3 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure. The server (108) includes processor(s) (430), and memory (410) operatively coupled to the bus (420). The processor(s) (430), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0045] The memory (410) includes several subsystems stored in the form of an executable program which instructs the processor (430) to perform the method steps illustrated in FIG. 1. The memory (410) includes a processing subsystem (105) of FIG. 1. The processing subsystem (105) includes a plurality of modules. The plurality of modules includes a login module (120), an interactivity module (130), an intensity-based interactivity module (140), a sound-based interactivity module (150) and an artificial intelligence-based interactivity module (160).
[0046] The processing subsystem (105) includes a login module (120) configured to enable a plurality of users to register to the theatre session automatically by allowing the plurality of users to scan a quick response code placed in a plurality of corresponding seats booked for the theatre session. The processing subsystem (105) also includes an interactivity module (130) operatively coupled to the login module (120), wherein the interactivity module (130) is configured to allow the plurality of users to cast a vote for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users. The interactivity module (130) is configured to enable the plurality of users to view an interactive segment corresponding to the plurality of options that received a maximum number of votes. Further, the processing subsystem (105) includes an intensity-based interactivity module (140) operatively coupled to the interactivity module, wherein the intensity-based interactivity module (140) is configured to enable the plurality of users to jolt the user device or stroke on a screen of the user device constantly for a predetermined time to detect intensity of engagement of the plurality of users. The intensity-based interactivity module (140) is configured to count the jolt detected by an accelerometer and equate the jolt to the stroke to determine the intensity of engagement of the plurality of users. The intensity-based interactivity module (140) is also configured to enable the plurality of users to view the effect of the engagement of the plurality of users on an on-screen event based on the intensity of actions performed on the user device.
Further, the processing subsystem (105) includes a sound-based interactivity module (150) operatively coupled to the intensity-based interactivity module (140), wherein the sound-based interactivity module (150) is configured to enable the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time to capture the vocal response in real-time that influences content displayed on the on-screen event. The sound-based interactivity module (150) is configured to measure the decibel level of the vocal response. Further, the sound-based interactivity module (150) is configured to enable the plurality of users to view the interactive segment on the on-screen event if the decibel level reaches a predetermined threshold value. Furthermore, the processing subsystem (105) includes an artificial intelligence-based interactivity module (160) operatively coupled to the sound-based interactivity module (150), wherein the artificial intelligence-based interactivity module (160) is configured to utilize a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences.
[0047] The bus (420) as used herein refers to internal memory channels or a computer network used to connect computer components and transfer data between them. The bus (420) includes a serial bus or a parallel bus, wherein the serial bus transmits data in bit-serial format and the parallel bus transmits data across multiple wires. The bus (420), as used herein, may include, but is not limited to, a system bus, an internal bus, an external bus, an expansion bus, a frontside bus, a backside bus and the like.
[0048] FIG. 4 (a) illustrates a flow chart representing the steps involved in a method to provide an interactive user experience during a theatre session in accordance with an embodiment of the present disclosure. FIG. 4 (b) illustrates continued steps of the method of FIG. 4 (a) in accordance with an embodiment of the present disclosure. The method (400) includes enabling, by a login module, a plurality of users to register to the theatre session automatically by allowing the plurality of users to scan a quick response code placed in a plurality of corresponding seats booked for the theatre session in step 410. Typically, the quick response code is precisely linked to the seat reserved for the theatre session and acts as an identification. For example, consider three users, user A, user B and user C, who booked seats 1, 2 and 3 for the film. Each seat is assigned a unique QR code that is linked to the respective user.
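The seat-linked QR registration in step 410 could be sketched as below. The payload scheme (`theatre://join?...`), the session identifier and the roster structure are illustrative assumptions; the specification does not define the QR encoding.

```python
SESSION_ID = "show-2024-11-26-19h"  # hypothetical session identifier

def make_qr_payload(seat):
    """Encode the current session and seat into the QR code payload."""
    return f"theatre://join?session={SESSION_ID}&seat={seat}"

def register_user(payload, roster):
    """Parse a scanned QR payload and register the scanning device
    against the encoded seat, joining it to the current session."""
    query = payload.split("?", 1)[1]
    params = dict(p.split("=", 1) for p in query.split("&"))
    if params["session"] != SESSION_ID:
        raise ValueError("QR code belongs to a different session")
    roster[params["seat"]] = True  # seat is now registered
    return params["seat"]

roster = {}
print(register_user(make_qr_payload("A1"), roster))  # → A1
```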
[0049] In one embodiment, the plurality of users is allowed to cast the vote upon activating a screen lock on the user device.
[0050] Yet in another embodiment, the login module is configured to communicate the input with a desktop application that functions as a video player capable of connecting the input from the computation module. The video player is configured to play high-definition local files in ultra-high definition, high definition, and Digital Cinema Package formats, and to shift between a plurality of scenes in the on-screen event based on the input received from the user device and computations performed by the computation module. The interactive video platform serves as an advertising platform configured to offer creative possibilities for branding professionals to build brand awareness and affinity without interfering with the plurality of users’ experience.
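The scene-shifting behaviour of the video player could be modelled as a branch table, as sketched below. The scene identifiers and the table itself are hypothetical examples, not content from the specification.

```python
# Hypothetical branch table: current scene -> {winning option: next scene}
BRANCHES = {
    "scene_12": {
        "go to the party": "scene_13a",
        "get-together with friends": "scene_13b",
    },
}

def shift_scene(current_scene, winning_option, branches=BRANCHES):
    """Pick the next scene based on the computed result; fall back to
    the current scene if no branch is defined, so playback continues
    without interrupting the storyline."""
    return branches.get(current_scene, {}).get(winning_option, current_scene)

print(shift_scene("scene_12", "go to the party"))  # → scene_13a
```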
[0051] Further, the method (400) includes enabling, by the interactivity module, the plurality of users to view an interactive segment corresponding to the plurality of options that received a maximum number of votes in step 430. Typically, the interactive segment is the portion of the theatre session that actively engages the plurality of users to interact directly with the storyline.
[0052] In one embodiment, the interactivity module is configured to enable the plurality of users to sway the user device to left, right, or center to vote. Further, the interactivity module is also configured to detect direction of the sway by utilizing the accelerometer thereby registering the vote of the plurality of the users.
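A minimal sketch of the sway-based vote in paragraph [0052] follows. The axis convention and the threshold value are illustrative assumptions about the accelerometer reading; the specification does not fix them.

```python
def sway_direction(x_accel, threshold=2.0):
    """Classify a lateral accelerometer reading (device-frame x-axis,
    m/s^2) into a vote direction. The 2.0 m/s^2 threshold is an
    illustrative assumption."""
    if x_accel <= -threshold:
        return "left"
    if x_accel >= threshold:
        return "right"
    return "center"

print(sway_direction(-3.1))  # → left
```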
[0053] Furthermore, the method (400) includes enabling, by an intensity-based interactivity module, the plurality of users to jolt the user device or stroke on a screen of the user device constantly for a predetermined time to detect intensity of engagement of the plurality of users in step 440. Examples of the user device include, but are not limited to, a smartphone, a tablet, a smartwatch, and a laptop.
[0054] Moreover, the method (400) includes counting, by the intensity-based interactivity module, the jolt detected by an accelerometer and equating the jolt to the stroke to determine the intensity of engagement of the plurality of users in step 450. Typically, the accelerometer on the user device monitors changes in acceleration induced by the jolt. The jolt is detected when there is a rapid increase in acceleration.
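The jolt counting in step 450 could be sketched as a rising-threshold crossing over a window of accelerometer magnitudes, per the "rapid increase in acceleration" description above. The 12.0 m/s^2 threshold and the sample values are illustrative assumptions.

```python
def count_jolts(samples, jolt_threshold=12.0):
    """Count jolts in a window of accelerometer magnitudes (m/s^2).
    A jolt is registered on each rising crossing of the threshold,
    i.e. a rapid increase in acceleration."""
    jolts = 0
    above = False
    for a in samples:
        if a >= jolt_threshold and not above:
            jolts += 1       # new rapid increase detected
            above = True
        elif a < jolt_threshold:
            above = False    # reset once acceleration settles
    return jolts

# Resting near gravity (~9.8 m/s^2) with two distinct jolts.
readings = [9.8, 9.9, 15.2, 14.1, 9.7, 16.0, 9.8]
print(count_jolts(readings))  # → 2
```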
[0055] Additionally, the method (400) includes enabling, by the intensity-based interactivity module, the plurality of users to view the effect of the engagement of the plurality of users on an on-screen event based on the intensity of actions performed on the user device in step 460.
[0056] Further, the method (400) includes enabling, by a sound-based interactivity module, the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time to capture the vocal response in real-time that influences content displayed on the on-screen event in step 470. Typically, the vocal response from the plurality of users includes at least one of a whistle or a shout.
[0057] The method (400) also includes measuring, by the sound-based interactivity module, the decibel level of the vocal response in step 480.
[0058] Further, the method (400) includes enabling, by the sound-based interactivity module, the plurality of users to view the interactive segment on the on-screen event if the decibel level reaches a predetermined threshold value in step 490.
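Steps 480 and 490 (measuring the decibel level and checking it against a threshold) could be sketched as below. Computing dB from the RMS of normalized PCM samples, the reference level and the -6 dB threshold are all illustrative assumptions.

```python
import math

def decibel_level(samples, reference=1.0):
    """Estimate the sound level in dB from normalized PCM samples
    (floats in [-1, 1]) via the root-mean-square amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")  # silence
    return 20 * math.log10(rms / reference)

def triggers_segment(samples, threshold_db=-6.0):
    """True when the captured vocal response reaches the predetermined
    threshold value, so the interactive segment is shown."""
    return decibel_level(samples) >= threshold_db

loud = [0.9, -0.8, 0.95, -0.9]      # e.g. a shout
quiet = [0.01, -0.02, 0.015, -0.01]  # background murmur
print(triggers_segment(loud), triggers_segment(quiet))  # → True False
```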
[0059] Further, the method (400) includes utilizing, by an artificial intelligence-based interactivity module, a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences in step 500. Typically, the personal information and the plurality of preferences include favorite food, movies, vacation spot, drink, name, age, and photo.
[0060] Various embodiments of the system to provide an interactive user experience during a theatre session described above provide various benefits by allowing the plurality of users to interact with the on-screen event directly through input from the user device, which seamlessly integrates with the film, thereby minimizing disruptions in the storyline. Further, the system (100) provides a connection with the plurality of characters, creating a unique experience for each of the plurality of users. Also, the system (100) provides an advertising platform configured to offer creative possibilities for branding professionals to build brand awareness and affinity without interfering with the plurality of users’ experience.
[0061] The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing subsystem” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. An electronic control unit including hardware may also perform one or more of the techniques of this disclosure.
[0062] Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure. In addition, any of the described units, modules, or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, or software components, or integrated within common or separate hardware, firmware, or software components.
[0063] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
[0064] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0065] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
,CLAIMS:1. A system (100) to provide an interactive user experience during a theatre session comprising:
characterized in that,
a processing subsystem (105) hosted on a server (108) wherein the processing subsystem (105) is configured to execute on a network (115) to control bidirectional communications among a plurality of modules comprising:
a login module (120) configured to enable a plurality of users to register to the theatre session automatically by allowing the plurality of users to scan a quick response code placed in a plurality of corresponding seats booked for the theatre session;
an interactivity module (130) operatively coupled to the login module (120), wherein the interactivity module (130) is configured to:
allow the plurality of users to cast vote for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users; and
enable the plurality of users to view an interactive segment corresponding to the plurality of options that received a maximum number of votes;
an intensity-based interactivity module (140) operatively coupled to the interactivity module (130) wherein the intensity-based interactivity module (140) is configured to:
enable the plurality of users to jolt the user device or stroke on a screen of the user device constantly for a predetermined time to detect intensity of engagement of the plurality of users;
count the jolt detected by an accelerometer and equate the jolt to stroke to determine the intensity of engagement of the plurality of users; and
enable the plurality of users to view effect of the engagement of the plurality of users on an on-screen event based on the intensity of actions performed on the user device;
a sound-based interactivity module (150) operatively coupled to the intensity-based interactivity module (140), wherein the sound-based interactivity module (150) is configured to:
enable the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time to capture the vocal response in real-time that influences content displayed on the on-screen event;
measure decibel level of the vocal response; and
enable the plurality of users to view the interactive segment on the on-screen event if the decibel level reaches a predetermined threshold value; and
an artificial intelligence-based interactivity module (160) operatively coupled to the sound-based interactivity module (150), wherein the artificial intelligence-based interactivity module (160) is configured to utilize a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences.

2. The system (100) as claimed in claim 1, wherein the interactivity module (130) is configured to:
enable the plurality of users to sway the user device to left, right, or center to vote; and
detect direction of the sway by utilizing the accelerometer thereby registering the vote of the plurality of the users.
3. The system (100) as claimed in claim 1, wherein the plurality of users is allowed to cast the vote upon activating a screen lock on the user device.

4. The system (100) as claimed in claim 1, wherein the personal information and the plurality of preferences comprise favorite food, movies, vacation spot, drink, name, age, and photo.

5. The system (100) as claimed in claim 1, wherein the vocal response from the plurality of users comprises at least one of a whistle or a shout.

6. The system (100) as claimed in claim 1, wherein the processing subsystem (105) comprises a computation module (170) operatively coupled to the login module (120), interactivity module (130), the intensity-based interactivity module (140), the sound-based interactivity module (150), and the artificial intelligence-based interactivity module (160) wherein the computation module (170) is configured to perform computations based on the input received from the user device, wherein the input comprises the cast vote, jolt or tap the user device, vocal response, personal information and the plurality of preferences.

7. The system (100) as claimed in claim 1, wherein the plurality of users is allowed to input the cast vote, jolt or tap the user device, vocal response, personal information and the plurality of preferences within a predetermined time prior to a next scene in the theatre session.

8. The system (100) as claimed in claim 1, wherein the login module (120) is configured to communicate the input with a desktop application that functions as a video player capable of connecting the input from the computation module (170).

9. The system (100) as claimed in claim 8, wherein the video player is configured to play high-definition local files in ultra-high definition, high definition, and Digital Cinema Package formats, and to shift between a plurality of scenes in the on-screen event based on the input received from the user device and computations performed by the computation module (170).

10. The system (100) as claimed in claim 1, wherein the interactive segment serves as an advertising platform configured to offer creative possibilities for branding professionals to build brand awareness and affinity without interfering with the plurality of users’ experience.

11. The system (100) as claimed in claim 1, wherein the interactive segment is configured to integrate the input from the user device into a script of the on-screen event, thereby enhancing the user engagement without disruption of storyline.

12. A method (400) to provide an interactive user experience during a theatre session comprising:
characterized in that,
enabling, by a login module, a plurality of users to register to the theatre session automatically by allowing the plurality of users to scan a quick response code placed in a plurality of corresponding seats booked for the theatre session; (410)
allowing, by an interactivity module, the plurality of users to cast vote for a plurality of options corresponding to a plurality of characters displayed on a user device operated by the plurality of users; (420)
enabling, by the interactivity module, the plurality of users to view an interactive segment corresponding to the plurality of options that received a maximum number of votes; (430)
enabling, by an intensity-based interactivity module, the plurality of users to jolt the user device or stroke on a screen of the user device constantly for a predetermined time to detect intensity of engagement of the plurality of users; (440)
counting, by the intensity-based interactivity module, the jolt detected by an accelerometer and equate the jolt to stroke to determine the intensity of engagement of the plurality of users; (450)
enabling, by the intensity-based interactivity module, the plurality of users to view effect of the engagement of the plurality of users on an on-screen event based on the intensity of actions performed on the user device; (460)
enabling, by a sound-based interactivity module, the plurality of users to generate a vocal response via a microphone embedded in the user device for the predetermined time to capture the vocal response in real-time that influence content displayed on the on-screen event; (470)
measuring, by the sound-based interactivity module, decibel level of the vocal response; (480)
enabling, by the sound-based interactivity module, the plurality of users to view the interactive segment on the on-screen event if the decibel level reaches a predetermined threshold value; and (490)
utilizing, by an artificial intelligence-based interactivity module, a plurality of preferences and personal information in the on-screen event to create personalized experiences for the plurality of users, wherein the plurality of users are allowed to input the personal information and the plurality of preferences. (500)
Dated this 26th day of November 2024
Signature

Gokul Nataraj E
Patent Agent (IN/PA-5309)
Agent for the Applicant

Documents

Application Documents

# Name Date
1 202341037081-STATEMENT OF UNDERTAKING (FORM 3) [29-05-2023(online)].pdf 2023-05-29
2 202341037081-PROVISIONAL SPECIFICATION [29-05-2023(online)].pdf 2023-05-29
3 202341037081-PROOF OF RIGHT [29-05-2023(online)].pdf 2023-05-29
4 202341037081-FORM FOR STARTUP [29-05-2023(online)].pdf 2023-05-29
5 202341037081-FORM FOR SMALL ENTITY(FORM-28) [29-05-2023(online)].pdf 2023-05-29
6 202341037081-FORM 1 [29-05-2023(online)].pdf 2023-05-29
7 202341037081-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-05-2023(online)].pdf 2023-05-29
8 202341037081-EVIDENCE FOR REGISTRATION UNDER SSI [29-05-2023(online)].pdf 2023-05-29
9 202341037081-DRAWINGS [29-05-2023(online)].pdf 2023-05-29
10 202341037081-PostDating-(17-04-2024)-(E-6-134-2024-CHE).pdf 2024-04-17
11 202341037081-APPLICATIONFORPOSTDATING [17-04-2024(online)].pdf 2024-04-17
12 202341037081-FORM-26 [29-05-2024(online)].pdf 2024-05-29
13 202341037081-DRAWING [27-11-2024(online)].pdf 2024-11-27
14 202341037081-CORRESPONDENCE-OTHERS [27-11-2024(online)].pdf 2024-11-27
15 202341037081-COMPLETE SPECIFICATION [27-11-2024(online)].pdf 2024-11-27
16 202341037081-FORM-8 [02-04-2025(online)].pdf 2025-04-02