Abstract: Provided is a system and method for reducing latency in real time data streaming, the system comprising a memory (203) configured to store computer-executable instructions and one or more processors (201) configured to execute the instructions to receive first data at a receiving device (105) from a source (101), wherein the source (101) and the receiving device (105) are connected by a server (103). The one or more processors (201) further determine latency in the first data, based on Optical Character Recognition (OCR). The one or more processors (201) further extract second data at the receiving device (105) at a speed based on the determined latency. The one or more processors (201) process, at the receiving device (105), the second data at the speed of extracting to reduce latency in real time data streaming.
[001] Example embodiments of the present invention generally relate to broadcasting and data streaming, and more particularly relate to a method for reducing latency in real time data streaming for broadcasting and live video applications.
BACKGROUND
[002] Live data streaming is data that is constantly received by, and usually presented to, an end-user, i.e., a client, while it is being delivered by a streaming provider using a server in real time. Currently, many protocols are utilized for live data streaming. One of the secure methods of using the Real Time Streaming Protocol (RTSP) for live data streaming is interleaving it over the Transmission Control Protocol (TCP). However, when RTSP is interleaved over TCP, there is no way to transmit Real-Time Transport Control Protocol (RTCP) feedback from the client to the server. Therefore, when the client reports a lower available bandwidth, the server does not adapt rapidly enough, causing interruptions during the live data streaming. Further, latency may build up at the server due to issues related to network connectivity, storage and processing speed of the server, and the latency keeps increasing over a period of time. Furthermore, as the client is unable to effectively determine that latency is building up at the server, corrective actions cannot be taken at the client end to reduce the latency. Consequently, higher latency may produce significant delays during live data streaming, thereby having an undesirable impact on user experience.
SUMMARY OF THE INVENTION
[003] Accordingly, there is a need for a system that may identify and reduce latency in live data streaming for broadcasting and live video streaming applications.
[004] Some example embodiments disclosed herein provide a system for reducing latency in real time data streaming. The system comprises a memory configured to store computer-executable instructions and one or more processors configured to execute the instructions to receive, from a source, first data at a receiving device, wherein the source and the receiving device are connected by a server. The one or more processors are further configured to determine latency in the first data, based on Optical Character Recognition (OCR). The one or more processors are further configured to extract second data at the receiving device at a speed based on the determined latency. The one or more processors are further configured to process the second data at the receiving device, at the speed of extracting, to reduce latency in real time data streaming.
[005] According to an embodiment, wherein to determine the latency in the first data based on the OCR, the one or more processors are further configured to obtain a plurality of images from the first data. The one or more processors extract a timestamp for each of the plurality of images of the first data and extract an instantaneous timestamp of the receiving device.
[006] According to an embodiment, wherein the one or more processors are further configured to compare the timestamp of the first data and the instantaneous timestamp of the receiving device.
[007] According to an embodiment, wherein the one or more processors are further configured to determine the latency based on the comparison.
[008] According to an embodiment, wherein the one or more processors are further configured to extract the second data at the receiving device when the determined latency is greater than a threshold value.
[009] According to an embodiment, wherein to process the second data at the speed of extracting, the one or more processors are configured to process the second data at a speed greater than speed of processing the first data.
[0010] According to an embodiment, the one or more processors are further configured to terminate the extraction of the second data at the receiving device when the determined latency is less than or equal to the threshold value.
[0011] According to an embodiment, the one or more processors are further configured to calculate, based on the determined latency, one or more of: a time period for extraction of the second data at the receiving device, and a speed at which the second data is to be extracted at the receiving device.
[0012] According to an embodiment, wherein the one or more processors are further configured to create an index of recorded data stream based on the OCR, wherein the recorded data stream comprises the first data and the second data.
[0013] Some example embodiments disclosed herein provide a method for reducing latency in real time data streaming, the method comprising receiving first data from a source, at a receiving device, wherein the source and the receiving device are connected by a server. The method may further include determining latency in the first data, based on Optical Character Recognition (OCR). The method may further include extracting second data from the server, at the receiving device at a speed based on the determined latency. The method may further include processing the second data at the receiving device, at the speed of extracting to reduce latency in real time data streaming.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
[0015] FIG. 1 illustrates a schematic diagram of a network environment of a system for reducing latency in real time data streaming, in accordance with an example embodiment.
[0016] FIG. 2 illustrates a block diagram of a system for reducing latency in real time data streaming, in accordance with an example embodiment.
[0017] FIG. 3 illustrates an exemplary scenario of latency in real time data streaming, in accordance with an example embodiment.
[0018] FIG. 4 illustrates an exemplary scenario of a source device and a client device for reducing latency in real time data streaming, in accordance with an example embodiment.
[0019] FIG. 5 illustrates an exemplary scenario indicating transmission of data stream from a source device to a receiving device without latency built up at server during real time data streaming, in accordance with an example embodiment.
[0020] FIG. 6 illustrates an exemplary scenario for indexing of recorded content using OCR, in accordance with an example embodiment.
[0021] FIG. 7 illustrates a flow diagram of a method for reducing latency in real time data streaming, in accordance with an example embodiment.
DETAILED DESCRIPTION OF THE INVENTION
[0022] FIG. 1 illustrates a schematic diagram of a network environment of a system 100 for reducing latency in real time data streaming, in accordance with an example embodiment. In the system 100, a source device 101 may be communicatively coupled to a server 103 and a client device 105 via a network 107. Further, it is possible that one or more components may be rearranged, changed, added, and/or removed in the system 100.
[0023] In some example embodiments, the source device 101 may be any user accessible device, such as a portable handheld device, a digital camera, a camcorder, a web cam, a surveillance camera, a video recorder, or any other device that may be configured to capture images in real time to execute one or more functions. In an example embodiment, the source device 101 may be configured for streaming data over the network 107 in real time. In an example embodiment, streaming may refer to the process of continuously transmitting data to a client device 105 or a user, where the data is typically displayed or otherwise provided to the user. The data may be, for example, one or more of audio data, video data, text data, a combination of the audio data, the video data and the text data, and any other format of data along with video data. In an example embodiment, real-time streaming refers to the process of streaming data that is generated in real time. The source device 101 may comprise an image sensor, a processor, a memory, a communication interface and one or more different modules to perform different functions. The processor, the memory and the communication interface may be communicatively coupled to each other. In some example embodiments, the source device 101 may be communicatively coupled to the client device 105 via the network 107. In some example embodiments, the source device 101 may be installed at multiple locations inside an indoor area; for example, offices or homes may include multiple cameras placed at different locations inside the indoor area or in the vicinity of the indoor area. In some example embodiments, the source device 101 may be installed in an outdoor area, for example, on street lights or on electric poles, to capture outdoor events.
In such example embodiments, the source device 101 may comprise processing means such as a central processing unit (CPU), storage means such as on-board read only memory (ROM) and random access memory (RAM), the one or more sensors such as a camera, a display enabled user interface (such as a touch screen display), and other components as may be required for specific functionalities.
[0024] In some example embodiments, the server 103 may store data associated with one or more of an image or video content associated with one or more objects captured by the source device 101. In some example embodiments, the server 103 may store information generated by the system 100 in accordance with the one or more of the image or the video content associated with the one or more objects. In an example embodiment, the server 103 may be a physical server or a cloud server. In an example embodiment, the server 103 may be a Network Time Protocol (NTP) server that synchronizes the time of the source device 101 and the client device 105 with reference to Coordinated Universal Time (UTC), i.e., the server 103 may standardize the time captured by the source device 101 and the client device 105 in accordance with the UTC. Thus, timing synchronization is achieved even if the source device 101 and the client device 105 are operating in different time zones.
[0025] In some example embodiments, the client device 105 may be any receiving device that may receive the live video data or live audio data or a combination of the video data, audio data, or text data from the source device 101 connected by a server 103 and a network 107. In an example embodiment, the client device 105 may be a mobile device, or a laptop, or a computer in which an application 105a is installed. In an example embodiment, the user may play video or audio using the application 105a. In an example embodiment, the application 105a may receive the video frames from the source device 101, process the video frames and display the video frames on the client device 105.
[0026] The network 107 may be wired, wireless, or any combination of wired and wireless communication networks, such as cellular, Wi-Fi, internet, local area networks, or the like. In one embodiment, the network 107 may include one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof. In an example embodiment, the network may include Real time streaming protocol (RTSP) for transmission of audio or video or a combination of video and audio data in real time from source device 101 to client device 105. In addition, the wireless network 107 may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks (for e.g. LTE-Advanced Pro), 5G New Radio networks, ITU-IMT 2020 networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof.
[0027] FIG. 2 illustrates a block diagram of client device 105 for reducing latency in real time data streaming, in accordance with an example embodiment. The client device 105 may include a processing means such as at least one processor 201 (hereinafter, also referred to as “processor 201”), storage means such as at least one memory 203 (hereinafter, also referred to as “memory 203”), and a communication means such as at least one communication interface 205 (hereinafter, also referred to as “communication interface 205”). The processor 201 may retrieve computer program code instructions that may be stored in the memory 203 for execution of the computer program code instructions. The processor 201 may include a real time streaming module 201a to perform different functions of real time data processing.
[0028] The processor 201 may be embodied in a number of different ways. For example, the processor 201 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor 201 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally or alternatively, the processor 201 may include one or more processors configured in tandem via a bus to enable independent execution of instructions, pipelining and/or multithreading. In some embodiments, the processor 201 may be configured to provide Internet-of-Things (IoT) related capabilities to users of the system 100. The communication interface 205 may provide an interface for accessing various features and data stored in the system. Additionally or alternatively, the processor 201 may include one or more processors capable of processing large volumes of workloads and operations to provide support for big data analysis. The real time streaming module 201a may process events related to streaming of data from the source device 101 to the client device 105.
[0029] The memory 203 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory 203 may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a machine (for example, a computing device like the processor 201). The memory 203 may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present invention. For example, the memory 203 may be configured to buffer input data for processing by the processor 201. As exemplarily illustrated in FIG. 2, the memory 203 may be configured to store instructions for execution by the processor 201. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 201 may represent an entity (for example, physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Thus, for example, when the processor 201 is embodied as an ASIC, FPGA or the like, the processor 201 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor 201 is embodied as an executor of software instructions, the instructions may specifically configure the processor 201 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor 201 may be a processor specific device (for example, a mobile terminal or a fixed computing device) configured to employ an embodiment of the present invention by further configuration of the processor 201 by instructions for performing the algorithms and/or operations described herein. 
The processor 201 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 201.
[0030] The communication interface 205 may comprise an input interface and an output interface for supporting communications to and from the source device 101, the client device 105 or any other component with which the system 100 may communicate. The communication interface 205 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data to/from a communications device in communication with the client device 105. In this regard, the communication interface 205 may include, for example, an antenna (or multiple antennae) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface 205 may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface 205 may alternatively or additionally support wired communication. As such, for example, the communication interface 205 may include a communication modem and/or other hardware and/or software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms.
[0031] FIG. 3 illustrates an exemplary scenario of latency in real time data streaming, in accordance with an example embodiment. In FIG. 3, there is shown a source device 301 (similar to the source device 101) transmitting live video data to a client device 305 (similar to the client device 105) via a server 303 (similar to the server 103) and a network 107. The source device 301 may transmit a source data 307 and the client device 305 may receive a first data 309. In an example embodiment, the client device 305 is a receiving device that receives the source data 307 transmitted by the source device 301.
[0032] According to one example embodiment, the client device 305 may request the source device 301 for a real time live video via the application 105a installed on it. For example, the user may request, on the application 105a in a mobile phone, to see the live video captured by the source device 301. In response to the request made by the client device 305, the source device 301 may transmit the live video data, that is, the source data 307, in real time to the client device 305 via the server 303 as a continuous stream of data.
[0033] In an example embodiment, the source data 307 transmitted by the source device 301 may contain multiple data packets, such as source data 1 (sd1), source data 2 (sd2), source data 3 (sd3), … source data 12 (sd12), whereas the first data 309 received by the client device 305 may contain only a limited number of data packets, such as source data 1 (sd1), source data 2 (sd2), source data 3 (sd3), … source data 6 (sd6). Thus, the first data 309 received by the client device 305 is not the complete source data 307 transmitted by the source device 301 at an instant of time. In an example embodiment, the reason for this may be delay or latency at the client device 305 or the server 303.
[0034] In an example embodiment, the delay or latency in the first data 309 received by the client device 305 may be due to issues in the network 107 between the client device 305 and the server 303. In an example embodiment, the latency may be built up at the server 303 due to issues related to network connectivity, storage and processing speed of the server 303. To this end, there is a need for a system 100 that may identify latency built up at the client device 305 or at the server 303. The identified latency is then reduced, thereby making the video/audio data easily accessible to the user, as disclosed in FIG. 4.
[0035] FIG. 4 illustrates an exemplary scenario 400 of a source device and a client device for reducing latency in real time video streaming, in accordance with an example embodiment. In FIG. 4, there is shown a source device 401 (similar to the source device 101) capturing a video of an indoor premises 405a in real time, and a client device 403 (similar to the client device 105) playing the live video 405b. The source device 401 may also be used for capturing a video of an outdoor premises (not shown in FIG. 4) in real time. For example, the source device 401 may be a surveillance camera that can be used in both indoor and outdoor premises to capture images in real time.
[0036] In an example embodiment, the client device 403 may request the source device 401 for the live video data 405a of the indoor premises that the source device 401 is capturing at an instant of time. In response to the request made by the client device 403, the source device 401 may transmit the live video data as a data stream in real time to the client device 403, via the server 103. In an example embodiment, the data stream may comprise text data, video data, audio data or a combination thereof. In an example embodiment, the data stream may include multiple data packets with a plurality of images in each data packet.
[0037] In an embodiment, the system 100 may extract a timestamp for each of the plurality of images of the data packet after the first data 309 is received by the client device 403. The timestamp may represent information to identify an instant of time at which a particular event occurred. The timestamp may comprise the date and time of the day, and sometimes the information may be accurate to a small fraction of a second. Here, the timestamp comprises the time at which the image was captured by the source device 401. The system 100 may extract the timestamp value 407 of each of the plurality of images of the data stream by using Optical Character Recognition (OCR).
[0038] Further, the system 100 may extract an instantaneous timestamp of the client device 403 from the actual time 409 of the client device 403. The system 100 may compare the timestamp value 411 (similar to the timestamp value 407) at which the source device 401 captured the live video and the current time value 409 at which the client device 403 received the first data 309. Based on the comparison, the system 100 may determine whether the difference between the timestamp value 411 of the live video data and the actual time 409 at which the client device 403 received the first data is greater than a threshold value. If the difference is greater than the threshold value, the system 100 may determine that there is a lag or latency built up at the server 103. For example, in FIG. 4, the timestamp value 411 extracted by using OCR shows that the image was captured at 21:32:13, and the timestamp value 409 at which the client device 403 received the image is 21:38:13. In this example, the threshold value may be 3 seconds, and the difference between the two timestamp values is 6 minutes. As the difference is greater than the threshold value, the system 100 may determine that latency is building up at the server 103. The lag or latency in the server 103 may be due to the processing speed, network issues or space complexity issues.
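The OCR-based comparison described above can be sketched as follows. This is an illustrative Python sketch, not part of the claimed embodiments: `ocr_read_timestamp` is a hypothetical stand-in for a real OCR step (reading the burned-in timestamp from a decoded frame), and the timestamp values are taken from the FIG. 4 example.

```python
from datetime import datetime

# Hypothetical helper: a real implementation would run OCR over the decoded
# frame image to read the burned-in timestamp; here it simply parses a string.
def ocr_read_timestamp(frame_text: str) -> datetime:
    return datetime.strptime(frame_text, "%H:%M:%S")

def latency_seconds(frame_text: str, receiver_clock: datetime) -> float:
    """Difference between the receiver's clock and the capture timestamp."""
    captured = ocr_read_timestamp(frame_text)
    return (receiver_clock - captured).total_seconds()

THRESHOLD = 3.0  # seconds, as in the example above

# FIG. 4 values: frame captured at 21:32:13, received at 21:38:13.
lag = latency_seconds("21:32:13", datetime.strptime("21:38:13", "%H:%M:%S"))
latency_at_server = lag > THRESHOLD  # latency is building up at the server
```

In practice the two clocks would be synchronized (e.g., via the NTP server described with reference to FIG. 1) before the comparison is meaningful.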
[0039] In an example embodiment, the latency at the client device 403 may be calculated using a Presentation Timestamp (PTS) methodology. In the PTS methodology, a timestamp metadata field may be used to synchronize the data stream transmitted by the source device 401 and the data stream received by the client device 403. If there is any latency or delay in the time at which the client device 403 received the first data 309, the system 100 may determine the latency based on the time at which the client device 403 received the data stream and the time at which the data stream was transmitted by the source device 401. After determining the latency, the system 100 may extract more data and process the data at a higher speed, as disclosed in FIG. 5.
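The PTS-based calculation reduces to a difference between the presentation timestamp carried in the packet metadata and the time of receipt. The sketch below is illustrative; the field names are assumptions and are not tied to any particular container format.

```python
def pts_latency(packet_pts: float, receive_time: float) -> float:
    """Latency as the gap between when a packet was due for presentation
    (its PTS, stamped by the source) and when the client actually received
    it. Both values are assumed to be on a synchronized clock (e.g., NTP)."""
    return receive_time - packet_pts
```

For example, a packet stamped for presentation at t = 100.0 s and received at t = 104.5 s yields a latency of 4.5 s.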
[0040] FIG. 5 illustrates an exemplary scenario 500 indicating transmission of a data stream from a source device to a receiving device without latency built up at the server during real time data streaming, in accordance with an example embodiment. In FIG. 5, there is shown a source device 501 (similar to the source device 101) transmitting live video data to a client device 505 (similar to the client device 105) via a server 503 (similar to the server 103) and a network 107. The source device 501 may transmit source data 507 (similar to the data stream) and the client device 505 may receive first data 509a and second data 509b.
[0041] In an example embodiment, the system 100 is triggered when the latency is identified after the first data 509a (similar to the first data 309) is received by the client device 505 (similar to the client device 105). After determining that the latency is greater than the threshold value, the client device 505 may use an algorithm to process the second data 509b. In an embodiment, the second data 509b is the remaining data of the data stream transmitted from the source device 501 to the server 503 which the client device 505 was not able to fetch due to latency or lag in the server 503. For example, the algorithm may be a hungry pull algorithm through which the client device 505 may pull the second data 509b, or the remaining data, from the server 503 at a speed chosen to compensate for the delay or latency determined in FIG. 4.
[0042] In an example embodiment, the speed and the time for extraction of the second data 509b from the server 503 are based on the latency of the first data 509a determined at the client device 505. In an example embodiment, the speed for extraction of the second data 509b is greater than the speed at which the client device 505 received the first data 509a.
[0043] In an example embodiment, the client device 505 may pull the second data 509b until the difference between the timestamp value of the data stream from the source device 501 and the instantaneous timestamp value at which the client device 505 received the data stream is less than or equal to the threshold value. The client device 505 may continuously pull the second data 509b from the server 503 while the OCR operation is performed simultaneously to continuously check the latency. For example, if the threshold value for the latency is 3 seconds, the client device 505 may keep pulling the second data 509b until the latency becomes less than or equal to 3 seconds, and may then terminate the pulling of the second data 509b. It is to be appreciated that the threshold value of 3 seconds indicates only an example embodiment; the threshold value may be of the order of a few milliseconds as well.
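The pull-until-caught-up loop described above might look like the following sketch, assuming the backlog at the server is modeled as a queue of (capture_time, payload) pairs. This is an illustration of the hungry pull idea, not a real RTSP client.

```python
import collections

def hungry_pull(server_queue, clock, threshold=3.0):
    """Keep pulling buffered packets until the measured latency (current
    time minus the capture time of the oldest queued packet) drops to the
    threshold. clock() returns the receiver's current time in seconds."""
    processed = []
    while server_queue:
        capture_time, payload = server_queue[0]
        if clock() - capture_time <= threshold:
            break  # caught up: resume normal-speed playback
        server_queue.popleft()
        processed.append(payload)  # processed immediately, at pull speed
    return processed

# Illustrative backlog: packets captured at t = 1..9 s, receiver clock at 10 s.
backlog = collections.deque((t, f"sd{t}") for t in range(1, 10))
drained = hungry_pull(backlog, lambda: 10.0)
```

With the 3-second threshold, the first six packets (those more than 3 seconds old) are drained at pull speed and the three newest remain, since they are already within the threshold.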
[0044] In an example embodiment, the system 100 may pre-calculate the time for which the client device 505 may perform pulling of the second data 509b after determining the latency of the first data 509a and before starting the pull of the second data 509b from the server 503. Similarly, the system 100 may pre-calculate the speed with which the client device 505 may pull the second data 509b after determining the latency of the first data 509a and before starting the pull of the second data 509b. Based on the pre-calculated time and speed, the client device 505 may pull the second data 509b from the server 503 for the pre-calculated time at the pre- calculated speed to compensate the lag or delay. It can be appreciated that the system 100 may perform the extraction of timestamp using OCR and the calculation of speed and time of pulling the second data 509b simultaneously.
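The pre-calculation of the pull time can be sketched as follows. The arithmetic assumes the backlog is measured in seconds of media and that live capture continues while the client catches up; the 2x speed-up factor is an assumption for illustration, not a value prescribed by the embodiments.

```python
def drain_time(latency: float, speedup: float = 2.0) -> float:
    """Wall-clock time needed to drain a backlog of `latency` media-seconds
    when pulling at `speedup` x real time: while catching up, the client
    consumes `speedup` media-seconds per second but the live source keeps
    adding one, so the backlog shrinks by (speedup - 1) per second."""
    if speedup <= 1.0:
        raise ValueError("must pull faster than real time to catch up")
    return latency / (speedup - 1.0)
```

For example, a 6-second backlog pulled at 2x real time drains in 6 seconds of wall-clock time, while pulling at 4x drains it in 2 seconds, which is why both the time and the speed of the pull can be fixed in advance from the measured latency.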
[0045] Further, the system 100 may process, or run, the second data 509b on the client device 505 at the speed with which the second data 509b was extracted from the server 503. In an example embodiment, the extraction and the processing speed of the second data 509b are based on the determined latency of the first data 509a. Since the speed for extraction of the second data 509b is greater than the speed at which the client device 505 received the first data 509a, the speed of processing the second data 509b is also higher than the speed with which the first data 509a was processed. For example, the higher the latency of the first data 509a, the higher the processing speed of the second data 509b, and the lower the latency, the lower the processing speed.
[0046] After processing the remaining data, or the second data 509b, at the speed of extracting, there is no lag or delay in the data stream received by the client device 505. In an embodiment, the data stream 509 (comprising the first data 509a and the second data 509b) received by the client device 505 and the data stream 507 transmitted by the source device 501 are the same, without any delay or latency.
[0047] FIG. 6 illustrates an exemplary scenario 600 for indexing of recorded content using OCR, in accordance with an example embodiment. In FIG. 6, there is shown a client device 601 (similar to the client device 105) playing a video of an indoor premises 603 and an index 605 showing the timestamp value.
[0048] For example, in a surveillance application, the source device 101 may store the complete data stream in the server 103. This may sometimes result in storing unnecessary information, and a lot of memory is consumed in storing such unnecessary information in the server 103. Therefore, in the present invention, the source device 101 is triggered to capture the video only when a motion is detected in the field of view of the source device 101 within a time frame. In an example embodiment, the source device 101 may capture the data stream in a time frame of 30 seconds, and if no motion is detected in the 30 seconds, the recorded content is not saved and is deleted. Similarly, if motion is detected in the time frame, the recorded content is saved in the server 103. The system 100 may also extract the timestamp value of the recorded content using the OCR. In an example embodiment, the value of the time frame for which the recorded content is not saved in the server 103 may be pre-defined or dynamically set by the user. This method reduces memory consumption, as unnecessary information is not saved in the server 103.
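The motion-gated recording described above can be sketched per time frame as follows. This is an illustrative sketch: `motion_detected` is a hypothetical predicate (e.g., frame differencing), not a function defined by the embodiments.

```python
def record_window(frames, motion_detected):
    """Buffer one time frame of (timestamp, image) pairs and keep it only
    if motion was seen in any frame; otherwise discard the whole window."""
    buffered = list(frames)
    if any(motion_detected(image) for _, image in buffered):
        return buffered  # would be saved to the server 103
    return None          # no motion in this window: content is deleted
```

A kept window would then have its timestamp values extracted by OCR for indexing, as described with reference to FIG. 6.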
[0049] In an example embodiment, the recorded content from the server 103 is sent to a user when a request is made by the user via the Application 105a. Based on the timestamp value of the recorded content, the system 100 may create an index 605 of the recorded content on the client device 601. The index 605 makes it easy for the user to select the time at which the user wants to see the video. In this way, the present invention makes use of OCR to extract the timestamp value of the recorded content and to create the index 605 on the client device 601, thereby making it easy to select the time frame for which the user wants to see the video on the client device 601. Also, this method saves a lot of memory and time and avoids storage of unnecessary information.
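Building such an index from OCR-extracted timestamps may be sketched as mapping each timestamp to the first frame on which it appears, so the client can seek directly to a requested time. The function `build_index` and its input format (one timestamp string per frame) are assumptions for illustration.

```python
# Minimal sketch: given the OCR-extracted timestamp of each frame of the
# recorded content, build an index mapping each timestamp to the offset
# of the first frame bearing it, enabling time-based seeking.

def build_index(ocr_timestamps):
    """Return {timestamp_string: first_frame_offset}."""
    index = {}
    for frame_no, ts in enumerate(ocr_timestamps):
        index.setdefault(ts, frame_no)   # keep only the first occurrence
    return index

index = build_index(["12:00:01", "12:00:01", "12:00:02"])
# one entry per second of recorded content
```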
[0050] FIG. 7 illustrates a flow diagram of a method 700 for reducing latency in real time data streaming, in accordance with an example embodiment. It will be understood that each block of the flow diagram of the method 700 may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other communication devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory 203 of the client device 105, employing an embodiment of the present invention and executed by a real time streaming module 201a in the processor 201. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flow diagram blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks.
[0051] Accordingly, blocks of the flow diagram support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flow diagram, and combinations of blocks in the flow diagram, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. The method 700 illustrated by the flowchart diagram of FIG. 7 and implemented by the real time streaming module 201a is for reducing latency in real time data streaming. Fewer, more, or different steps may be provided.
[0052] At step 701, the method comprises receiving first data from a source at a receiving device. The source and the receiving device are connected by a server through a network. The source may comprise a source device 101 for capturing live video data. Similarly, the receiving device may comprise a client device 105 for receiving the first data 509a of the data stream transmitted by the source device 101.
[0053] At step 703, the method comprises determining latency in the first data 509a, based on Optical Character Recognition (OCR). The method comprises obtaining a plurality of images from the data stream transmitted by the source device 101. The OCR may extract a timestamp for each of the plurality of images of the data stream. A part of the data stream is received at the receiving device 105 as the first data 509a. The extracted timestamp of the first data 509a is compared with the instantaneous timestamp of the receiving device 105. Based on the comparison, the latency is determined in the first data 509a.
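Step 703 may be sketched as comparing the timestamp read by OCR from a frame against the receiving device's clock. Here `frame_timestamp` stands in for the output of a real OCR engine reading the burned-in timestamp from an image; the format string is an illustrative assumption.

```python
# Hedged sketch of latency determination: the OCR-extracted timestamp of
# a frame of the first data is compared with the instantaneous timestamp
# (clock) of the receiving device; the difference is the latency.

from datetime import datetime

def determine_latency(frame_timestamp: str, now: datetime) -> float:
    """Return latency in seconds between the time the frame was captured
    (as read by OCR from the image) and the receiving device's time."""
    captured = datetime.strptime(frame_timestamp, "%Y-%m-%d %H:%M:%S")
    return (now - captured).total_seconds()
```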
[0054] At step 705, the method comprises extracting the second data 509b, from the server 103, at the receiving device, at a speed based on the determined latency. The second data 509b is the remaining data in the server 103 that was not fetched or received by the receiving device (e.g. the client device 105) due to lag or latency. The time period for which the second data 509b is extracted from the server 103 is based on the determined latency of the first data 509a. Similarly, the speed at which the second data 509b is extracted from the server 103 at the receiving device is based on the determined latency of the first data 509a.
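The relationship in step 705 between the determined latency, the extraction rate, and the time needed to clear the backlog may be illustrated arithmetically. The function `catchup_time` is a hypothetical helper; the specification does not name it.

```python
# Illustrative arithmetic for step 705: if the second data (the backlog
# on the server) spans `latency_s` seconds and is extracted and processed
# at `rate` times real time, the backlog shrinks by (rate - 1) seconds
# per wall-clock second.

def catchup_time(latency_s: float, rate: float) -> float:
    """Wall-clock seconds needed to clear `latency_s` seconds of backlog
    at an extraction/processing rate of `rate`x real time (rate > 1)."""
    if rate <= 1.0:
        raise ValueError("rate must exceed 1x real time to catch up")
    return latency_s / (rate - 1.0)
```

For instance, 3 seconds of backlog extracted at 2x real time clears in 3 wall-clock seconds, after which playback is live again.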
[0055] At step 707, the method comprises processing the second data 509b at the receiving device 105 at the speed of extracting to reduce latency in real time data streaming.
[0056] In an example embodiment, the system for performing the method 700 described above may comprise a processor configured to perform some or each of the operations (701-707) described above. The processor may, for example, be configured to perform the operations (701-707) by performing hardware implemented logical functions, executing stored instructions, or executing algorithms for performing each of the operations. Alternatively, the system may comprise means for performing each of the operations described above. In this regard, according to an example embodiment, examples of means for performing operations (701-707) may comprise, for example, the processor and/or a device or circuit for executing instructions or executing an algorithm for processing information as described above.
[0057] In this way, example embodiments of the present disclosure provide the system 100 and the method 700 for reducing latency in real time data streaming. The invention provides a methodology to reduce latency in real time video streaming using an algorithm. Also, the invention may process the data at a higher speed after pulling the remaining data from the server, based on the latency. The disclosed method 700 provides a significant advantage in terms of reducing the computational effort and memory storage associated with the system 100 when capturing and transmitting a data stream in real time.
[0058] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the invention. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the invention. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
CLAIMS:
We Claim:
1. A system for reducing latency in real time data streaming, the system comprising:
a memory configured to store computer-executable instructions; and
one or more processors configured to execute the instructions to:
receive, from a source, first data at a receiving device, wherein the source and the receiving device are connected by a server;
determine latency in the first data, based on Optical Character Recognition (OCR);
extract, from the server, second data at the receiving device at a speed based on the determined latency; and
process, at the receiving device, the second data at the speed of extracting to reduce latency in real time data streaming.
2. The system of claim 1, wherein to determine the latency in the first data based on the OCR, the one or more processors are further configured to:
obtain a plurality of images from the first data;
extract a timestamp for each of the plurality of images of the first data; and
extract an instantaneous timestamp of the receiving device.
3. The system of claim 2, wherein the one or more processors are further configured to compare the timestamp of the first data and the instantaneous timestamp of the receiving device.
4. The system of claim 3, wherein the one or more processors are further configured to determine the latency based on the comparison.
5. The system of claim 4, wherein the one or more processors are further configured to extract the second data at the receiving device when the determined latency is greater than a threshold value.
6. The system of claim 1, wherein to process the second data at the speed of extracting, the one or more processors are configured to process the second data at a speed greater than a speed of processing the first data.
7. The system of claim 5, wherein the one or more processors are further configured to terminate the extraction of the second data at the receiving device when the determined latency is less than or equal to the threshold value.
8. The system of claim 1, wherein the one or more processors are further configured to calculate one or more of:
a time period for extraction of the second data at the receiving device, based on the determined latency; and
the speed of the second data to be extracted at the receiving device, based on the determined latency.
9. The system of claim 1, wherein the one or more processors are further configured to create an index of recorded data stream based on the OCR, wherein the recorded data stream comprises the first data and the second data.
10. A method for reducing latency in real time data streaming, the method comprising:
receiving, from a source, first data at a receiving device, wherein the source and the receiving device are connected by a server;
determining latency in the first data, based on Optical Character Recognition (OCR);
extracting, from the server, second data at the receiving device at a speed based on the determined latency; and
processing, at the receiving device, the second data at the speed of extracting to reduce latency in real time data streaming.
| # | Name | Date |
|---|---|---|
| 1 | 201911039158-STATEMENT OF UNDERTAKING (FORM 3) [27-09-2019(online)].pdf | 2019-09-27 |
| 2 | 201911039158-PROVISIONAL SPECIFICATION [27-09-2019(online)].pdf | 2019-09-27 |
| 3 | 201911039158-FORM 1 [27-09-2019(online)].pdf | 2019-09-27 |
| 4 | 201911039158-DRAWINGS [27-09-2019(online)].pdf | 2019-09-27 |
| 5 | abstract.jpg | 2019-10-05 |
| 6 | 201911039158-FORM-26 [26-12-2019(online)].pdf | 2019-12-26 |
| 7 | 201911039158-COMPLETE SPECIFICATION [17-08-2020(online)].pdf | 2020-08-17 |
| 8 | 201911039158-CORRESPONDENCE-OTHERS [17-08-2020(online)].pdf | 2020-08-17 |
| 9 | 201911039158-DRAWING [17-08-2020(online)].pdf | 2020-08-17 |
| 10 | 201911039158-Proof of Right [12-11-2020(online)].pdf | 2020-11-12 |
| 11 | 201911039158-FORM 18 [27-11-2020(online)].pdf | 2020-11-27 |
| 12 | 201911039158-FER.pdf | 2022-01-05 |
| 13 | SearchStrategyE_04-01-2022.pdf | |