Abstract: ABSTRACT Voice-Controlled Projection System with Trolley Adjustment The present invention relates to a voice-controlled projection system and method for projecting contents onto a surface. The system includes a projecting device placed on a trolley, which can be adjusted in height and orientation. A user interacts with the system by providing voice commands indicating the desired contents to be projected. The system analyzes the voice commands, maps them to specific contents from a predefined list, and extracts the corresponding contents. An actuator associated with the trolley adjusts its height or orientation to align the projection with the desired surface area. The system then projects the extracted contents onto the surface, creating an interactive and immersive experience. Additional features include the use of drones for projection adjustments, internet connectivity for accessing relevant data, real-time trolley movement range determination, and customizable audio output based on projected contents.
Description:VOICE-CONTROLLED PROJECTION SYSTEM WITH TROLLEY ADJUSTMENT
Field of the Invention
[0001]. This invention is in the field of projection systems and interactive display technologies. It specifically focuses on voice-controlled projection devices that allow users to easily control and customize the projection of various contents onto a surface. The invention combines voice recognition, content extraction, trolley adjustment, and projection capabilities to provide an intuitive and interactive user experience. Applications of the invention can be found in education, entertainment, presentations, and interactive displays.
Background of the Invention
[0002]. Conventionally, projection systems have been widely used for displaying content on various surfaces, serving purposes such as educational presentations, entertainment events, and interactive displays. However, these conventional systems have inherent limitations that affect their usability and overall experience.
[0003]. Conventionally, projection systems have been used to display contents onto surfaces for various purposes, such as educational presentations, entertainment events, and interactive displays. These systems typically involve manual control and adjustment of the projection device, often requiring complex setups and technical expertise. Users have had to operate the system through buttons, remote controls, or computer interfaces, which can be time-consuming and unintuitive.
[0004]. One significant problem with conventional projection systems is the reliance on manual control interfaces. Users typically need to operate the system using buttons, remote controls, or computer interfaces, which can be cumbersome and unintuitive. This often results in a steep learning curve for users, making it difficult for them to quickly and efficiently navigate the system and access the desired content. The complex setup and technical expertise required for manual control can further impede the user's ability to fully utilize the projection system.
[0005]. Another challenge is the manual adjustment of the projection device's position, height, or orientation. In traditional systems, users have to physically manipulate or reposition the device to align the projection with the desired surface. This process can be time-consuming, inconvenient, and may lead to disruptions during presentations or events. Moreover, achieving precise alignment and maintaining stability can be challenging, affecting the quality and accuracy of the projected content.
[0006]. Furthermore, conventional projection systems often lack automation and real-time responsiveness. This limits the interactive and dynamic capabilities of the system, hindering its ability to provide an engaging and personalized experience. Users may face difficulties in dynamically controlling the projected content or adapting to changing requirements during a presentation or event. The lack of automation also restricts the system's ability to automatically adjust settings based on real-time conditions or preferences.
[0007]. Therefore, we need a new method and system for controlling a projecting device to project contents onto a surface with voice control, trolley adjustment, content extraction, and real-time responsiveness, providing a user-friendly and interactive projection experience.
[0008]. We also need a system that offers real-time responsiveness and continuous content updating based on detected gestures, which further enhance the interactive experience, and that allows users to manipulate and control the projected content through gestures, resulting in a personalized and engaging learning or entertainment environment.
[0009]. Therefore, we need a new method and system that overcome the limitations of conventional methods by introducing voice control, trolley adjustment, content extraction, and real-time responsiveness. Such a method and system provide advantages such as improved user-friendliness, intuitive interaction, automatic trolley adjustment, access to a vast array of content, and dynamic control capabilities, and significantly enhance the projection system's usability, flexibility, and immersive experience for users in various applications.
Summary of the Invention
[00010]. The present invention is a method and system for controlling a projecting device to project one or more contents onto a surface. The objective of the invention is to overcome the limitations of conventional projection systems by introducing innovative technologies and functionalities that enhance user-friendliness, intuitive interaction, automated adjustments, access to a wide range of content, and dynamic control capabilities.
[00011]. In one aspect, the present invention proposes a system and method that utilizes voice recognition technology, allowing users to provide voice commands indicating the content they want to project. This eliminates the need for complex manual controls and offers a more intuitive and user-friendly interaction method. By analyzing and interpreting the voice information, the system's processor recognizes the desired content.
[00012]. In one aspect, to enhance convenience and accuracy, the present invention introduces trolley adjustment capabilities. An actuator associated with the projecting device's trolley allows for automatic adjustment of the height or orientation. This eliminates the need for manual repositioning and ensures precise alignment of the projected content with the desired surface.
[00013]. In one aspect, the system can dynamically control the trolley movement based on real-time range determination, further enhancing flexibility and ease of use.
[00014]. In another aspect, the present invention enables the extraction of relevant content from the internet, expanding the available options for videos, images, and audio. This feature enhances the user's ability to create engaging presentations or entertainment experiences.
[00015]. In another aspect, the system has real-time responsiveness and gesture control capabilities that further enhance interactivity. Users can manipulate and control the projected content through gestures, providing a more personalized and engaging experience. The system can detect and interpret gestures in real-time, allowing for dynamic content control and adaptability.
[00016]. In another aspect, the present invention aims to improve the user experience of projection systems by introducing voice control, trolley adjustment, content extraction, and real-time responsiveness. By addressing the limitations of conventional methods, the invention offers a more user-friendly, intuitive, and immersive approach to controlling projection devices and projecting content onto surfaces.
[00017]. Other objects, advantages, and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
[00018]. To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the disclosure and are therefore not to be considered limiting of its scope. The disclosure will be described and explained with additional specificity and detail with the accompanying drawings.
[00019]. The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other aspects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
[00020]. Figure 1 illustrates a System Overview, in accordance with an embodiment of the present invention;
[00021]. Figure 2 illustrates a Trolley Adjustment, in accordance with an embodiment of the present invention;
[00022]. Figure 3 illustrates a step-by-step process, in accordance with an embodiment of the present invention;
[00023]. Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
Detailed Description of the Invention
[00024]. For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein would be contemplated as would normally occur to one skilled in the art to which the invention relates. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The system, methods, and examples provided herein are illustrative only and are not intended to be limiting.
[00025]. The term “some” as used herein is to be understood as “none or one or more than one or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments, without departing from the scope of the present disclosure.
[00026]. The terminology and structure employed herein is for describing, teaching, and illuminating some embodiments and their specific features. It does not in any way limit, restrict or reduce the spirit and scope of the claims or their equivalents.
[00027]. More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do not specify an exact limitation or restriction and certainly do not exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must not be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “must comprise” or “needs to include.”
[00028]. Whether or not a certain feature or element was limited to being used only once, either way, it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do not preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there needs to be one or more . . . ” or “one or more element is required.”
[00029]. Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[00030]. Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfill the requirements of uniqueness, utility and non-obviousness.
[00031]. Use of the phrases and/or terms including, but not limited to, “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do not necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.
[00032]. Any particular and all details set forth herein are used in the context of some embodiments and therefore should not be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below. Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
[00033]. The present invention described is a system that allows users to project content onto a surface using voice commands. It comprises a projecting device, trolley, sensor, actuator, and processor. Users provide voice information, and the processor analyzes and recognizes their commands, mapping them to predefined content. The processor then extracts the specific content, retrieving relevant data from the internet if needed. The trolley's actuator adjusts its position or orientation to ensure accurate alignment with the surface. The projecting device then projects the content using advanced projection techniques. The system offers advantages such as voice-based interaction, content customization, efficient extraction, adjustable trolley, optimal projection, and a user-friendly design, enabling a personalized and immersive experience for projecting multimedia content.
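For illustration only, the end-to-end flow described above can be sketched as follows. This is a minimal, hypothetical sketch: the names (`Content`, `CONTENT_LIBRARY`, `handle_voice_command`), the example contents, and the reduction of trolley adjustment to a status string are assumptions for exposition, not part of the claimed implementation.

```python
# Illustrative sketch (not the claimed implementation): a minimal
# pipeline mirroring the flow described above.
from dataclasses import dataclass


@dataclass
class Content:
    name: str
    aspect_ratio: float  # width / height of the content


# Predefined list of contents the processor can map commands onto
# (hypothetical example entries).
CONTENT_LIBRARY = {
    "solar system video": Content("solar system video", 16 / 9),
    "world map image": Content("world map image", 4 / 3),
}


def handle_voice_command(command: str) -> str:
    """Analyze the command, map it to content, adjust, and project."""
    command = command.lower().strip()
    # 1. Map the recognized command to a predefined content entry.
    content = CONTENT_LIBRARY.get(command)
    if content is None:
        return "command not recognized"
    # 2. Adjust the trolley so the projection aligns with the surface
    #    (reduced here to a status string; the physical adjustment is
    #    described with Fig. 2).
    adjustment = f"trolley aligned for {content.aspect_ratio:.2f} aspect"
    # 3. Project the extracted content onto the surface.
    return f"{adjustment}; projecting '{content.name}'"
```

In this sketch, an unrecognized command simply yields a failure string; the disclosed system would instead fall back to internet retrieval or prompt the user again.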
[00034]. Fig. 1 describes the System Overview, an illustrative representation of the major components and their interactions within the system. It provides a visual representation that helps to understand how the user's voice information is processed, commands are mapped to contents, the trolley is adjusted, and the contents are projected onto the surface.
[00035]. The system comprises a projecting device that generates and projects the contents onto the surface. The projecting device is typically mounted or placed on the trolley. This connection allows the projecting device to be easily transported and positioned as needed. The trolley serves as a platform or support structure for the projecting device.
[00036]. The system comprises a sensor, associated with the projecting device, that receives voice information from the user. The sensor captures the user's voice commands and relays them to the processor for further analysis. The processor, another important component, interacts with the projecting device to control its actions. It receives the voice information from the sensor, performs voice recognition, and analyzes the user's commands. Based on the analysis, the processor determines the appropriate actions for the projecting device. The system further comprises an actuator, associated with the projecting device, that is responsible for adjusting the height or orientation of the trolley. It receives instructions from the processor and moves or positions the trolley accordingly. This adjustment ensures that the projected contents align correctly with the surface.
[00037]. Ultimately, the projecting device emits light or visual signals that form the projected contents. These signals are directed towards the surface, where the contents are displayed. The projecting device is responsible for generating the visual output, and the other components like the trolley, sensor, actuator, and processor work together to facilitate its operation and control.
[00038]. According to an embodiment, the trolley is an essential component that provides mobility and stability to the system. The trolley is designed to support and hold the projecting device securely. The projecting device is typically mounted or attached to the trolley, ensuring that it remains in place during operation. The trolley can be depicted as a cart-like structure with wheels or any other means of movement, allowing for easy transportation and positioning of the projecting device. The wheels enable the trolley to be moved smoothly and effortlessly, making it convenient to navigate and place the projecting device in different locations.
[00039]. According to an embodiment of the present invention, the connecting mechanism between the trolley and the projecting device may vary depending on the specific design and requirements of the system. It can involve brackets, clamps, or other securing mechanisms that firmly attach the projecting device to the trolley. This ensures that the projecting device remains stable and aligned with the desired projection direction.
[00040]. In one embodiment, the system includes a sensor associated with the projecting device to enhance interactivity and enable the system to receive voice information from the user. It can be depicted as a microphone symbol or a small device connected to the projecting device.
[00041]. In one embodiment, the connection between the sensor and the projecting device is typically established through wiring or wireless communication, enabling the system to capture the user's voice commands and initiate the projection process accordingly. The trolley plays a supporting role by providing the platform for the projecting device, but it is not directly connected to the sensor.
[00042]. Regarding the connection between the sensor and the trolley, the sensor is usually integrated or mounted on the projecting device itself. Since the projecting device is placed on the trolley, the sensor's connection is indirectly established through the projecting device. The trolley serves as a platform that supports the projecting device and allows for easy transportation and positioning.
[00043]. According to the present invention, the system further includes an actuator, wherein the actuator is responsible for adjusting the height or orientation of the trolley on which the projecting device is placed. It can be represented as an arrow or a symbol indicating movement or adjustment.
[00044]. In terms of connectivity, the actuator is typically connected to the trolley and the projecting device. It is designed to manipulate the position or angle of the trolley to achieve the desired projection setup.
[00045]. The connection between the actuator and the trolley is mechanical in nature. The actuator is integrated into the trolley's structure, allowing it to exert force and control the movement of the trolley. This connection enables the actuator to adjust the height or orientation of the trolley according to the system's requirements.
[00046]. In one embodiment, regarding the connection of the actuator to the projecting device, the actuator indirectly influences the device's position by manipulating the trolley. As the actuator adjusts the trolley's height or orientation, the projecting device mounted on the trolley moves accordingly. This ensures that the projected content aligns properly with the surface.
[00047]. Regarding the sensor, the actuator is not directly connected to it. The sensor's role is to capture the user's voice commands, while the actuator focuses on the physical adjustment of the trolley. However, the actuator's movements are controlled by the commands processed by the system's processor, which receives input from the sensor. This allows for coordinated adjustments based on the user's instructions.
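The indirect coupling described above, in which the sensor feeds the processor and only the processor drives the actuator, can be sketched as follows. The class names, units, and command keywords below are illustrative assumptions only, not part of the disclosed implementation.

```python
# Hypothetical sketch of the indirect sensor -> processor -> actuator
# coupling described above.
class Actuator:
    """Mechanically adjusts the trolley's height and tilt."""

    def __init__(self) -> None:
        self.height_cm = 100.0
        self.tilt_deg = 0.0

    def move(self, height_cm: float, tilt_deg: float) -> None:
        # The projecting device mounted on the trolley moves with it.
        self.height_cm = height_cm
        self.tilt_deg = tilt_deg


class Processor:
    """Receives voice text from the sensor and drives the actuator."""

    def __init__(self, actuator: Actuator) -> None:
        # The actuator is reachable only through the processor;
        # the sensor never commands it directly.
        self.actuator = actuator

    def on_voice_command(self, text: str) -> None:
        # The sensor relays raw voice text here; the processor decides
        # what physical adjustment (if any) the command implies.
        text = text.lower()
        if "raise" in text:
            self.actuator.move(self.actuator.height_cm + 10.0,
                               self.actuator.tilt_deg)
        elif "tilt up" in text:
            self.actuator.move(self.actuator.height_cm,
                               self.actuator.tilt_deg + 5.0)
```

The design choice mirrors the paragraph above: because the actuator is owned by the processor object, any coordinated adjustment necessarily flows through the processor's interpretation of the user's instructions.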
[00048]. According to the present invention, the actuator is connected to the trolley to manipulate its height or orientation, indirectly affecting the positioning of the projecting device. The actuator's movements are controlled by the system's processor, which receives input from the sensor capturing the user's voice commands.
[00049]. The system further includes a processor, a central component of the system that plays a crucial role in analyzing and interpreting the voice information received from the sensor. It can be depicted as a central processing unit (CPU) or a computer symbol.
[00050]. According to the present invention, the processor is connected to various components within the system to facilitate their coordination and control. The processor is connected to the sensor, which captures the user's voice information. The sensor sends the voice data to the processor for analysis and recognition of commands. This connection allows the processor to receive and process the voice information in order to understand the user's intentions.
[00051]. Additionally, the processor is connected to the actuator, wherein the connection enables the processor to control the actuator's movements and adjust the height or orientation of the trolley accordingly. By manipulating the actuator, the processor can ensure that the projecting device is positioned correctly for optimal projection.
[00052]. According to the present invention, the processor is also connected to the projecting device itself. This connection allows the processor to send commands and instructions to the projecting device, such as initiating the projection of specific content onto the surface. The processor controls the projecting device based on the recognized commands and the extracted content.
[00053]. Furthermore, the processor interacts with the trolley through the actuator. Although there is no direct physical connection between the processor and the trolley, the processor controls the actuator's movements, which in turn adjusts the position of the trolley. This coordination ensures that the projecting device is properly positioned for projection.
[00054]. According to the present invention, the processor is connected to the sensor to receive voice information, the actuator to control its movements, and the projecting device to send commands and instructions. It plays a central role in analyzing voice data, recognizing commands, and coordinating the various components of the system for efficient and accurate projection.
[00055]. According to an embodiment of the present invention, a surface is the target area where the contents are projected and displayed for viewing. It can be represented as a wall, screen, or any appropriate surface. In terms of connectivity, the surface is not directly connected to the other components of the system like the projecting device, trolley, sensor, actuator, or processor. Instead, it serves as the output platform for the projected contents.
[00056]. According to the present invention, the projecting device emits light or visual signals that form the projected contents, which are then directed towards the surface. The projecting device can be positioned on the trolley, which allows for easy movement and adjustment. The trolley, in turn, can be controlled by the actuator, which is under the command of the processor. The sensor captures the voice information provided by the user and sends it to the processor for analysis. The processor interprets the voice commands and controls the projecting device's actions, such as adjusting the trolley's position and initiating the projection.
[00057]. Ultimately, the projected contents are displayed on the surface for the audience to see. The surface itself does not have a direct connection to the other components, but it serves as the medium where the visual output of the system is presented.
[00058]. As shown in Fig. 1, the representation provides a basic visualization of the components and their relationships in the system. The projecting device is connected to the trolley, which is further connected to the sensor, actuator, and processor. The processor analyzes the input from the sensor and controls the actuator to adjust the trolley's position. The contents are projected from the projecting device onto the surface for display.
[00059]. According to an example of the present invention, the interaction between the user and the system, and the definitions of the elements involved, are as follows:
[00060]. User: Represented as an individual or an icon, the user signifies the person interacting with the system.
[00061]. Sensor/Microphone: Depicted as a symbol or an icon, the sensor or microphone represents the device used to capture the user's voice input. It can be connected to the user through a line or an arrow, indicating the input source.
[00062]. Voice Input: Shown as speech bubbles, audio waves, or text, the voice input visually represents the commands or information provided by the user through the sensor or microphone. It can be connected to the sensor symbol, indicating the flow of voice data.
[00063]. Processor: Represented by a symbol or an icon, the processor is the component responsible for analyzing and processing the voice input. It can be connected to the voice input symbol, indicating the data flow from the sensor to the processor.
[00064]. Command Recognition: This step highlights the processor's ability to analyze and recognize the commands within the voice input. It can be represented by an arrow or a line connecting the processor and the voice input, indicating the processing of the user's commands.
[00065]. Mapping: Once the commands are recognized, the processor maps them to specific contents or functionalities within the system. This mapping process can be shown by connecting the recognized commands to corresponding content symbols or icons, indicating the linkage between the recognized commands and the specific actions or outputs.
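As a hedged illustration of this mapping step, assuming simple keyword matching (a full system would employ a speech-recognition and natural-language engine), the linkage between recognized commands and predefined contents might look like the sketch below. The content names and keyword pairs are hypothetical placeholders.

```python
# Illustrative command-to-content mapping (assumed keyword matching).
from typing import Optional

# Hypothetical predefined contents, keyed by the keywords that must
# all appear in the recognized command.
PREDEFINED_CONTENTS = {
    ("play", "video"): "intro_video.mp4",
    ("show", "map"): "world_map.png",
    ("open", "slides"): "lecture_slides.pdf",
}


def map_command_to_content(recognized_text: str) -> Optional[str]:
    """Return the content whose keywords all occur in the text, else None."""
    words = set(recognized_text.lower().split())
    for keywords, content in PREDEFINED_CONTENTS.items():
        if all(k in words for k in keywords):
            return content
    return None
```

For example, "please show the map" contains both "show" and "map", so it maps to the hypothetical `world_map.png` entry, which the processor would then extract and hand to the projecting device.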
[00066]. Fig. 2 demonstrates the trolley adjustment capability of the system. It shows the actuator modifying the height or orientation of the trolley based on the content to be projected.
[00067]. The system includes step 201, wherein the trolley is at its initial position. The trolley is a platform or cart with wheels, on top of which the projecting device is placed. This position represents the starting point before any adjustment is made.
[00068]. According to an example of the present invention, the purpose of showing the initial position is to provide a visual reference of the trolley's starting state. It helps establish the context for the subsequent steps in the trolley adjustment process. By illustrating the trolley as a platform or cart with wheels, it conveys that the trolley is a movable structure capable of transporting and positioning the projecting device.
[00069]. In Figure 2, step 202 represents the processor analyzing the content to be projected. The processor symbol is shown connected to the content or information that needs to be projected. The purpose of this step is to depict the involvement of the processor in analyzing the content. The processor is responsible for processing and interpreting the content data to determine the appropriate adjustment needed for the trolley. Step 202 may involve analyzing factors such as the size, aspect ratio, or specific requirements of the content.
[00070]. According to an embodiment of the present invention, visually connecting the processor symbol to the content signifies that the processor is actively analyzing the content to gather the necessary information for the trolley adjustment. Step 202 highlights the system's intelligence and capability to adapt the trolley's height or orientation based on the specific content to ensure optimal projection quality.
[00071]. In Figure 2, step 203 represents the decision-making process where the system determines whether trolley adjustment is required based on the content analysis. This step can be visually represented by a branching path or a decision symbol. The purpose of step 203 is to depict the system's ability to make an informed decision regarding the need for trolley adjustment. The decision is based on the analysis of the content and specific criteria set by the system.
[00072]. According to an embodiment of the present invention, the branching path or decision symbol visually represents the system evaluating the analysis results and determining whether the current position or orientation of the trolley is suitable for projecting the content. If the analysis indicates that adjustment is necessary, the system will proceed to the next step. If not, it may bypass the trolley adjustment step and move on to the projection phase.
[00073]. Step 203 highlights the system's intelligent decision-making capability, ensuring that the trolley adjustment is performed only when deemed necessary for optimal projection quality.
[00074]. In Figure 2, step 204 represents the actual adjustment of the trolley's height or orientation when it is determined that trolley adjustment is required. This step is depicted by illustrating the actuator connected to the trolley and using arrows or symbols to indicate the movement or adjustment of the trolley. The purpose of step 204 is to visually demonstrate the action taken by the system to modify the position or orientation of the trolley based on the content analysis. The actuator, which is connected to the trolley, is responsible for carrying out the necessary adjustments.
[00075]. Step 204 highlights the dynamic and adaptive nature of the system, as it can modify the trolley's position or orientation in real time to ensure the projected content is accurately displayed on the surface.
[00076]. In Figure 2, step 205 represents the trolley in its adjusted position, aligned correctly for optimal projection of the content onto the surface. This step is illustrated by showing the trolley in its modified position, indicating that it has been aligned accurately. The purpose of step 205 is to visually demonstrate the outcome of the trolley adjustment process. After the height or orientation adjustment in step 204, the trolley is positioned in such a way that the projecting device is aligned correctly for optimal projection onto the surface.
[00077]. According to an embodiment of the present invention, the modified position of the trolley can be depicted by showing its updated height, tilt, or rotation, depending on the specific adjustments made during step 204. This ensures that the projected content will be displayed accurately and without distortion on the surface.
[00078]. By representing the trolley in its aligned position, step 205 highlights the system's ability to dynamically adjust the trolley's configuration to achieve the best possible projection quality and experience.
[00079]. The Fig. 2 illustration of how the system adjusts the trolley's height or orientation based on the content to be projected, and its indication of the movement or adjustment of the trolley, helps convey the trolley adjustment capability of the system effectively.
[00080]. Fig. 3 depicts steps that provide an overview of the process involved: the user's interaction with the system, voice recognition and analysis, content mapping and extraction, trolley adjustment, and the final projection of the contents onto the surface.
Fig. 3, Step 301: User provides voice information
[00081]. The user associated with the projecting device interacts with the system by providing voice information, wherein, according to an embodiment of the present invention, the interaction can be done by speaking into a sensor or microphone connected to the system.
[00082]. According to an embodiment of the present invention, the voice information serves as input for the system and indicates the specific content or contents that the user wants to project onto the surface, wherein the voice information is captured by the sensor and sent for further processing by the system's components.
[00083]. According to an embodiment of the present invention, the user may verbally express their desired content, such as requesting a specific video, image, or presentation topic.
[00084]. The step 301 initiates the interaction between the user and the system, allowing the user to communicate their intentions and preferences through voice commands.
Fig. 3, Step 302: Voice recognition and command analysis
[00085]. The processor, which is a component associated with the projecting device, receives the voice information captured by the sensor, wherein the processor performs voice recognition, a process that involves analysing and interpreting the user's voice commands.
[00086]. According to an embodiment of the present invention, using advanced algorithms and speech recognition techniques, the processor converts the voice information into a digital format that can be understood and processed by the system.
[00087]. Through voice recognition, the processor identifies and recognizes the specific commands embedded within the voice information.
[00088]. The step 302 allows the system to understand the user's intentions and the actions they want to perform.
[00089]. According to an embodiment of the present invention, the recognized commands serve as instructions for the system to carry out specific operations or tasks related to projecting the desired content onto the surface.
[00090]. According to an embodiment of the present invention, voice recognition and command analysis enable the system to accurately interpret the user's voice input and proceed with the subsequent steps in the process.
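As a non-limiting sketch (not part of the original specification), the command-analysis portion of step 302 can be illustrated in Python. The sketch assumes the speech recogniser has already produced a text transcript; the command-verb list and parsing rule are hypothetical assumptions:

```python
# Illustrative sketch of step-302 command analysis: scan a transcribed
# utterance for a known command verb and capture its argument phrase.
# COMMAND_VERBS and the parsing rule are assumptions, not the claimed method.
import re

COMMAND_VERBS = ("project", "show", "display", "play")


def parse_command(transcript: str):
    """Return (verb, argument) when the transcript contains a known
    command verb, otherwise None."""
    text = transcript.lower().strip()
    for verb in COMMAND_VERBS:
        # Match the verb as a whole word, optionally skip "the",
        # and capture the rest of the utterance as the argument.
        match = re.search(rf"\b{verb}\b\s+(?:the\s+)?(.+)", text)
        if match:
            return verb, match.group(1).strip()
    return None
```

For example, `parse_command("Please show the solar system video")` yields `("show", "solar system video")`, which the next step can map to the pre-defined content list.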
Fig. 3, Step 303: Mapping commands to predefined content
[00091]. Once the processor has recognized the user's commands, it proceeds to map these commands to a pre-defined list of contents, wherein the pre-defined list contains a collection of content options that the system can project onto the surface, associating each recognized command with its corresponding content from the pre-defined list.
[00092]. According to an embodiment of the present invention, by mapping the commands to specific contents, the system can identify the exact content or contents that the user intends to project, wherein the mapping process can be based on keywords, matching patterns, or a predefined mapping algorithm, and allows the system to determine which content or contents are relevant and should be considered for projection based on the user's commands.
[00093]. According to an embodiment of the present invention, the mapping step 303 helps streamline the content selection process and ensures that the system accurately responds to the user's intentions.
[00094]. Once the commands are successfully mapped to the predefined content, the system can proceed to the next step 304, which involves extracting the specific content for projection.
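A non-limiting illustration of the keyword-based variant of step 303 follows (not part of the original specification). The catalogue entries, identifiers, and overlap scoring are hypothetical assumptions used only to show one possible mapping strategy:

```python
# Illustrative sketch of step-303 mapping: pick the pre-defined content
# whose keyword set best overlaps the recognised command phrase.
# CONTENT_CATALOGUE and its entries are hypothetical examples.
CONTENT_CATALOGUE = {
    "solar_system_video": {"solar", "system", "planets", "video"},
    "world_map_image":    {"world", "map", "image"},
    "intro_presentation": {"intro", "presentation", "slides"},
}


def map_to_content(command_phrase: str):
    """Return the catalogue entry with the largest keyword overlap,
    or None when no entry matches the phrase at all."""
    words = set(command_phrase.lower().split())
    best, best_score = None, 0
    for content_id, keywords in CONTENT_CATALOGUE.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = content_id, score
    return best
```

The specification equally contemplates matching patterns or a predefined mapping algorithm in place of this simple overlap score.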
Fig. 3, Step 304: Content extraction
[00095]. After mapping the user's commands to predefined content, the processor proceeds to extract the specific content or contents that are to be projected onto the surface, wherein the extraction process involves retrieving relevant data from various sources, such as the internet or local storage, based on the identified content.
[00096]. For example, if the user commands the system to project a specific video, the processor may retrieve the video file from an online platform or a local database. Similarly, if the user requests an image or audio content, the processor searches for and retrieves the corresponding files, wherein the content extraction step ensures that the system obtains the necessary data required for projection.
[00097]. According to an embodiment of the present invention, depending on the complexity of the content, the processor may perform additional processing or conversion to ensure compatibility with the projecting device.
[00098]. The step 304 enables the system to access and retrieve the specific content that matches the user's commands, providing a seamless and accurate projection experience.
[00099]. Once the content extraction is complete, the system can proceed to adjust the trolley's position or orientation to ensure optimal projection onto the surface.
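Step 304 can be sketched, purely for illustration, as a local-first retrieval with a remote fallback. The file-naming scheme and the injected fetcher are hypothetical assumptions; a real system would use an actual HTTP client or database driver:

```python
# Illustrative sketch of step-304 content extraction: prefer a local copy,
# otherwise fetch remotely (e.g. from the internet) and cache the result.
# The ".bin" naming and the injected fetch_remote callable are assumptions.
from pathlib import Path
from typing import Callable


def extract_content(content_id: str,
                    local_dir: Path,
                    fetch_remote: Callable[[str], bytes]) -> bytes:
    """Return the raw bytes of the mapped content, checking local
    storage first and falling back to the injected remote fetcher."""
    local_path = local_dir / f"{content_id}.bin"
    if local_path.exists():
        return local_path.read_bytes()
    data = fetch_remote(content_id)   # e.g. a download from an online platform
    local_path.write_bytes(data)      # cache locally for subsequent requests
    return data
```

Any format conversion needed for compatibility with the projecting device (paragraph [00097]) would follow this retrieval step.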
Fig. 3, Step 305: Trolley adjustment
[000100]. Once the content extraction is complete, the system proceeds to adjust the position or orientation of the trolley, wherein the trolley is the platform on which the projecting device is placed, and its adjustment is essential to align the projected content correctly with the surface.
[000101]. According to an embodiment of the present invention, the adjustment process is performed using an actuator, which is a component associated with the projecting device and trolley, wherein the actuator enables the system to modify the height or orientation of the trolley as needed.
[000102]. For example, if the projected content requires a higher position, the actuator raises the trolley to achieve the desired height. Similarly, if the content needs a specific tilt or rotation, the actuator adjusts the trolley's orientation accordingly.
[000103]. According to an embodiment of the present invention, the purpose of the trolley adjustment is to ensure that the projected content is displayed at the optimal position and angle, providing a clear and well-aligned projection on the surface, wherein by adjusting the trolley, the system can adapt to different projection requirements and enhance the overall viewing experience for the users.
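The adjustment of step 305 can be sketched, as a non-limiting illustration, by clamping the target pose to the trolley's movement range (compare the "real time range" of the dependent claims). The range limits below are hypothetical values chosen only for demonstration:

```python
# Illustrative sketch of step-305 trolley adjustment: drive toward the
# target height and tilt, clamped to the trolley's mechanical range.
# The height_range and tilt_range limits are hypothetical assumptions.
def clamp(value: float, lo: float, hi: float) -> float:
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))


def adjust_trolley(target_height_cm: float, target_tilt_deg: float,
                   height_range=(50.0, 200.0),
                   tilt_range=(-30.0, 30.0)):
    """Return the pose the actuator will actually drive to, respecting
    the trolley's movement range."""
    return (clamp(target_height_cm, *height_range),
            clamp(target_tilt_deg, *tilt_range))
```

For instance, a requested height of 250 cm would be limited to the 200 cm ceiling of the assumed range, ensuring the actuator is never commanded outside its reachable envelope.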
Fig. 3, Step 306: Projection of contents
[000104]. Once the trolley is adjusted to the desired position and orientation, the projecting device starts the projection of the extracted content onto the surface, wherein the projecting device utilizes suitable projection techniques to display the content clearly and vividly.
[000105]. According to an embodiment of the present invention, one common technique involves using a light source, such as a lamp or LED, to emit light towards the content.
[000106]. For example, the light passes through optical elements, such as lenses or mirrors, which focus and direct the light beams onto the surface.
[000107]. The projected content can be in various forms, such as images, videos, presentations, or any other visual media, wherein the projecting device ensures that the projected content is accurately aligned with the surface and displayed at the appropriate size and resolution.
[000108]. The projection process enables the audience to view and interact with the content, whether it is for educational presentations, entertainment purposes, or interactive displays.
[000109]. By projecting the extracted content onto the surface, the system fulfils its primary objective of providing a visually engaging and immersive experience for the users.
[000110]. According to the present invention, the system has the following advantages:
[000111]. Voice-based Interaction: The system allows users to provide voice commands and information, offering a convenient and intuitive way of interacting with the system. Voice recognition technology enables seamless communication between the user and the system.
[000112]. Content Customization: The system maps commands to predefined content, allowing for personalized and tailored projections. Users can specify the exact content they want to project onto the surface, enhancing the flexibility and customization of the system.
[000113]. Efficient Content Extraction: The system extracts specific content based on user commands, which may involve retrieving relevant data from the internet. This enables access to a vast range of multimedia content, such as videos, images, or audio, enriching the projected content and enhancing the user experience.
[000114]. Trolley Adjustment Capability: The system incorporates an actuator to adjust the height or orientation of the trolley on which the projecting device is placed. This ensures that the projected content aligns correctly with the surface, optimizing visibility and clarity.
[000115]. Optimal Projection: With the trolley properly adjusted, the system can project the extracted content onto the surface using appropriate projection techniques. This ensures clear and high-quality projection, enhancing the viewing experience for the audience.
[000116]. User-Friendly Design: The system's components, including the projecting device, trolley, sensor, actuator, and processor, work together seamlessly to provide a user-friendly experience. The system simplifies the process of content projection and allows users to interact effortlessly with the technology.
[000117]. Enhanced Engagement and Communication: The system's ability to project multimedia content onto a surface can facilitate effective communication and engagement in various settings. It can be used for presentations, educational purposes, entertainment, and collaborative work, enabling dynamic visual communication.
[000118]. Innovative and Unique Solution: The system described herein combines voice recognition, content extraction, trolley adjustment, and projection technologies to offer a unique and innovative solution. This can set the system apart from existing solutions and provide a competitive advantage.
[000119]. The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of the embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible.
Claims:
We Claim:
1. A method for controlling a projecting device to project one or more contents on a surface, the method comprising:
receiving, through a sensor associated with the projecting device, a voice information from a user associated with the projecting device, wherein the voice information indicates the one or more contents to be projected on the surface;
recognizing, by a processor associated with the projecting device, one or more commands by analyzing the voice information;
mapping, by the processor, the one or more commands with a pre-defined list of contents to recognize the one or more contents associated with the one or more commands;
extracting, by the processor, the one or more contents to be projected on the surface, wherein the surface is a wall of a room;
adjusting, by an actuator associated with the projecting device, one or more of height or orientation of a trolley on which the projecting device is placed, wherein the one or more of the height or orientation of the trolley is adjusted such that the one or more contents are projected on the surface; and
projecting, by the projecting device, the one or more contents on the surface, wherein the projecting device is connected to the internet.
2. The method according to claim 1, further comprising:
adjusting the projection of the one or more contents with respect to the surface by using a drone.
3. The method according to claim 1, further comprising:
determining a real time range of the movement of the trolley of the projecting device; and
automatically adjusting the movement of the trolley based on the real time range.
4. The method according to claim 1, further comprising:
accessing an internet to search relevant data related to the one or more commands, wherein the relevant data comprises one or more of videos, images, and audio;
extracting the relevant data from the internet; and
projecting the extracted data on the surface.
5. The method according to claim 1, further comprising:
generating an audio output based on the one or more contents projected on the surface; and
changing a frequency of the audio output based on a preference.
6. A system for controlling a projecting device to project one or more contents on a surface, the system comprising:
a trolley;
a processor; and
a computer-readable medium communicatively coupled to the processor, wherein the computer-readable medium stores processor-executable instructions, which when executed by the processor, cause the processor to:
receive, through a sensor associated with the projecting device, a voice information from a user associated with the projecting device, wherein the voice information indicates the one or more contents to be projected on the surface;
recognize one or more commands by analyzing the voice information;
map the one or more commands with a pre-defined list of contents to recognize the one or more contents associated with the one or more commands;
extract the one or more contents to be projected on the surface, wherein the surface is a wall of a room;
adjust, by an actuator associated with the projecting device, one or more of height or orientation of a trolley on which the projecting device is placed, wherein the one or more of the height or orientation of the trolley is adjusted such that the one or more contents are projected on the surface; and
project the one or more contents on the surface, wherein the projecting device is connected to the internet.
7. The system according to claim 6, wherein the processor is further configured to:
adjust the projection of the one or more contents with respect to the surface by using a drone.
8. The system according to claim 6, wherein the processor is further configured to:
determine a real time range of the movement of the trolley of the projecting device; and
automatically adjust the movement of the trolley based on the real time range.
9. The system according to claim 6, wherein the processor is further configured to:
access an internet to search relevant data related to the one or more commands, wherein the relevant data comprises one or more of videos, images, and audio;
extract the relevant data from the internet; and
project the extracted data on the surface.
10. The system according to claim 6, wherein the processor is further configured to:
generate an audio output based on the one or more contents projected on the surface; and
change a frequency of the audio output based on a preference.
Dated this on 19th day of July 2023
Ajay Kaushik
Agent for the Applicant [IN/PA-2159]
AKSH IP ASSOCIATES
| # | Name | Date |
|---|---|---|
| 1 | 202311048594-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2023(online)].pdf | 2023-07-19 |
| 2 | 202311048594-POWER OF AUTHORITY [19-07-2023(online)].pdf | 2023-07-19 |
| 3 | 202311048594-FORM 1 [19-07-2023(online)].pdf | 2023-07-19 |
| 4 | 202311048594-DRAWINGS [19-07-2023(online)].pdf | 2023-07-19 |
| 5 | 202311048594-DECLARATION OF INVENTORSHIP (FORM 5) [19-07-2023(online)].pdf | 2023-07-19 |
| 6 | 202311048594-COMPLETE SPECIFICATION [19-07-2023(online)].pdf | 2023-07-19 |
| 7 | 202311048594-FORM-9 [13-10-2023(online)].pdf | 2023-10-13 |
| 8 | 202311048594-FORM 18 [13-10-2023(online)].pdf | 2023-10-13 |
| 9 | 202311048594-Proof of Right [17-10-2023(online)].pdf | 2023-10-17 |
| 10 | 202311048594-FER.pdf | 2025-04-17 |
| 11 | 202311048594-OTHERS [29-08-2025(online)].pdf | 2025-08-29 |
| 12 | 202311048594-FER_SER_REPLY [29-08-2025(online)].pdf | 2025-08-29 |
| 13 | 202311048594-DRAWING [29-08-2025(online)].pdf | 2025-08-29 |
| 14 | 202311048594-CLAIMS [29-08-2025(online)].pdf | 2025-08-29 |
| 1 | 202311048594_SearchStrategyNew_E_8594E_16-04-2025.pdf | |