
Test Automation System With An Efficient Queueing Mechanism

Abstract: A faster test automation system (100) and associated testing method (200) are presented. Image frames of a video (302) rendered on a test device via a video application (102) are captured and stored in a buffer queue (309A). A set of image frames (314) in a selected segment of the video (302) are divided into first and second subsets (318 and 320). Image frames in the first and second subsets (318 and 320) are processed in different orders until a video quality issue is identified in a first selected image frame in the first subset (318) and in a second selected image frame in the second subset (320), thereby preventing processing of remaining image frames. A performance evaluation report for the video application (102) is generated based on the identified video quality issue, the video, the first selected image frame, the second selected image frame, and the test device.


Patent Information

Application #
Filing Date
04 September 2019
Publication Number
10/2021
Publication Type
INA
Invention Field
PHYSICS
Status
Email
shery.nair@tataelxsi.co.in
Parent Application
Patent Number
Legal Status
Grant Date
16 June 2023
Renewal Date

Applicants

TATA ELXSI LIMITED
ITPB Road, Whitefield, Bangalore

Inventors

1. RAHUL CHANDRASEKHARAN PILLAI
TATA ELXSI LIMITED ITPB Road, Whitefield, Bangalore – 560048
2. MANU SUBRAMONIAM
TATA ELXSI LIMITED ITPB Road, Whitefield, Bangalore – 560048
3. NIKHIL DINESH PALLATHUPARAMBIL
TATA ELXSI LIMITED ITPB Road, Whitefield, Bangalore – 560048
4. SUNIL THARANGINI GOVINDARU
TATA ELXSI LIMITED ITPB Road, Whitefield, Bangalore – 560048

Specification

Claims: I/We claim:

1. A method (200) for testing performance of a video application (102), comprising:
capturing image frames from a video (302) rendered on a test device (104A) using a frame-capturing unit (306) and storing the captured image frames in a buffer queue (309A) in a memory unit (130), the frame-capturing unit (306) and the memory unit (130) being operatively coupled to a test automation system (100);
determining a number of processing threads to be generated for processing a set of image frames (314) corresponding to a selected segment of the rendered video (302) stored in the buffer queue (309A) based on one or more global computation resources thresholds and one or more currently utilized computation resources;
dividing the set of image frames (314) into subsets comprising a first subset (318) and a second subset (320) based on the determined number of processing threads;
sequentially processing image frames in the first subset (318) in a first processing order only until a specific video quality issue is identified in a first selected image frame in the first subset (318);
sequentially processing image frames in the second subset (320) in a second processing order different from the first processing order only until the specific video quality issue is identified in a second selected image frame in the second subset (320), thereby preventing processing of image frames following the first selected image frame in the first subset (318) per the first processing order and image frames following the second selected image frame in the second subset (320) per the second processing order, wherein the first and second selected image frames are designated as start and end positions of the specific video quality issue, respectively; and
generating a performance evaluation report for the video application (102) based on information related to one or more of the specific video quality issue, the video (302), the first selected image frame, the second selected image frame, and the test device (104A).

2. The method (200) as claimed in claim 1, comprising:
converting the captured image frames to a common format prior to storing the captured image frames in the buffer queue (309A);
adding a custom header to each of the image frames, wherein the custom header comprises image related information comprising frame rate, frame height, frame width, timestamp, video source, and a unique identification number; and
enqueuing each of the image frames along with a corresponding custom header in the buffer queue (309A) for analysis by the test automation system (100).

3. The method (200) as claimed in claim 1, comprising identifying each of the set of image frames (314) corresponding to the selected segment of the rendered video (302) stored in the buffer queue (309A) based on the corresponding custom header.

4. The method (200) as claimed in claim 1, comprising selectively processing only those image frames that are positioned at designated intervals in the first subset (318) and in the second subset (320) for identifying the specific video quality issue.

5. The method (200) as claimed in claim 1, comprising processing only a predefined number of the image frames in the first subset (318) and in the second subset (320) for identifying the specific video quality issue.

6. The method (200) as claimed in claim 1, comprising:
dynamically monitoring the currently utilized computation resources while processing the image frames in the first subset (318) and in the second subset (320);
determining whether further simultaneous processing of the image frames in the first subset (318) and in the second subset (320) by a frame analyzer (122) in the test automation system (100) is expected to cause the currently utilized computation resources to exceed the global computation resources thresholds;
providing a recommendation to the frame analyzer (122) to restrict utilization of the currently utilized computation resources within the global computation resources thresholds upon determining that the currently utilized computation resources are expected to exceed the global computation resources thresholds; and
controlling processing of image frames in one or more of the first subset (318) and the second subset (320) by the frame analyzer (122) based on the recommendation.

7. The method (200) as claimed in claim 1, comprising:
identifying one or more correction algorithms based on the performance evaluation report for automatically rectifying the specific video quality issue, wherein the correction algorithms comprise one or more of a video encoding method, video decoding method, and a network parameter configuration method; and
automatically updating the video (302) using the identified correction algorithms.

8. A test automation system (100) for testing performance of a video application (102), comprising:
a script executor (106) that executes automated test scripts for enabling a test device (104A) to transmit a request for a video (302) to a content server (110) via a first communications network (112), wherein the test device (104A) receives the video (302) from the content server (110) and renders the video (302) via the video application (102);
a frame-capturing unit (114) that captures image frames from the rendered video (302);
a memory unit (130) comprising a buffer queue (309A) that stores the captured image frames; and
a frame analyzer (122) that:
directs a thread controller (124) of the test automation system (100) to determine a number of processing threads for processing a set of image frames (314) corresponding to a selected segment of the rendered video (302) stored in the buffer queue (309A) based on global computation resources thresholds and currently utilized computation resources;
divides the set of image frames (314) into subsets comprising a first subset (318) and a second subset (320) based on the number of processing threads;
sequentially processes image frames in the first subset (318) in a first processing order only until a specific video quality issue is identified in a first selected image frame in the first subset (318);
sequentially processes image frames in the second subset (320) in a second processing order different from the first processing order only until the specific video quality issue is identified in a second selected image frame in the second subset (320), thereby preventing processing of image frames following the first selected image frame in the first subset (318) per the first processing order and image frames following the second selected image frame in the second subset (320) per the second processing order, wherein the first and second selected image frames are designated as start and end positions of the specific video quality issue, respectively; and
generates a performance evaluation report for the video application (102) based on information related to one or more of the specific video quality issue, the video (302), the first selected image frame, the second selected image frame, and the test device (104A).

9. The test automation system (100) as claimed in claim 8, wherein the frame-capturing unit (114) comprises one of an image-capturing unit, a camera, a high-definition multimedia interface cable, and a video capture card.

10. The test automation system (100) as claimed in claim 8, comprising a metadata tagger (120) that:
converts the captured image frames in a first format to a common format prior to storing the captured image frames in the buffer queue (309A); and
adds a custom header to each of the image frames, wherein the custom header comprises image related information comprising frame rate, frame height, frame width, timestamp, video source, and a unique identification number.

11. The test automation system (100) as claimed in claim 8, comprising a thread controller (124) that:
dynamically monitors the currently utilized computation resources while processing image frames in the first subset (318) and in the second subset (320);
determines whether further simultaneous processing of the image frames in the first subset (318) and in the second subset (320) by a frame analyzer (122) in the test automation system (100) is expected to cause the currently utilized computation resources to exceed the global computation resources thresholds;
provides a recommendation to the frame analyzer (122) to restrict utilization of the currently utilized computation resources within the global computation resources thresholds upon determining that the currently utilized computation resources are expected to exceed the global computation resources thresholds; and
controls processing of image frames in one or more of the first subset (318) and the second subset (320) by the frame analyzer (122) based on the recommendation.

12. The test automation system (100) as claimed in claim 8, wherein the test automation system (100) is communicatively coupled to a video quality management system (132) that adjusts one or more of a video encoding method, a video decoding method, an image correction method, and a network parameter configuration method for automatically rectifying the video quality issues related to the video (302) based on the performance evaluation report.

13. The test automation system (100) as claimed in claim 8, wherein the video application (102) corresponds to one of an over-the-top application, a video-on-demand application, and a media player application.
Description:
RELATED ART

[0001] Embodiments of the present disclosure relate generally to a test automation system. More particularly, the present disclosure relates to a test automation system that efficiently tests performance of a video application across different types of multimedia devices simultaneously with minimal computational requirements.
[0002] Consumption of digital content has increased manifold in recent times due to seamless connectivity to the internet. Examples of digital content include digital images, videos, audio, and related media. Consumers use various multimedia devices such as set-top boxes, computers, mobile phones, smart televisions, tablets, and gaming consoles to access digital content made available by media service providers. These multimedia devices generally have media player applications for retrieving and playing the digital content provided by the media service providers.
[0003] Examples of such media service providers include over-the-top (OTT) content providers and video-on-demand (VOD) content providers. When these media service providers develop new video applications or newer versions of existing video applications, the video applications have to be tested for their performance across multiple multimedia devices before such video applications are made available to the consumers. Testing performance of video applications includes verifying capabilities of the video applications to render videos without any issues that affect the quality of experience (QoE) of consumers. Examples of such issues include occurrence of black frames, macroblocks, video freezes, pixelations, and video buffering.
[0004] Currently, there are certain test automation systems available in the market for testing performance of video applications. However, most existing test automation systems execute performance testing using a single device at a particular instant of time, as simultaneously testing performance across multiple devices requires significant CPU (central processing unit) resources and memory. Hence, existing test automation systems take significantly more time to complete performance testing, which delays the launch of video applications into the market.
[0005] Further, these existing test automation systems test performance of a video application in a particular test device by configuring the test device to render a sample video at different resolutions and at different frame rates. Subsequently, the existing test automation systems capture image frames associated with the sample video and enqueue the captured image frames in a single queue. For example, the US published application US20050034031A1 describes one such existing test automation system that captures and stores image frames associated with a video in a single queue.
[0006] However, such existing test automation systems require significant computing and memory resources for processing the image frames in the queue. Typically, the existing test automation systems continue to retain all image frames in the queue until all of them have been processed completely, thus blocking significant computing and memory resources for long periods of time. For example, the existing test automation systems may have to process a video segment including fifty image frames. In this example, the existing test automation systems retrieve all fifty image frames from the queue and simultaneously process all fifty image frames. Consequently, the existing test automation systems may process multiple image frames at a particular instant of time, leading to an increased demand for CPU resources. Further, the existing test automation systems may require significantly more memory and computing resources, especially when image frames having higher resolutions are to be analyzed.
[0007] Hence, there is a need for an improved yet inexpensive test automation system and an associated method for simultaneously testing performance of one or more video applications across different multimedia devices with optimal utilization of memory and CPU resources.

BRIEF DESCRIPTION

[0008] It is an objective of the present disclosure to provide a method for testing performance of a video application. The method includes capturing image frames from a video rendered on a test device using a frame-capturing unit and storing the captured image frames in a buffer queue in a memory unit. The frame-capturing unit and the memory unit are operatively coupled to a test automation system. A number of processing threads to be generated for processing a set of image frames corresponding to a selected segment of the rendered video stored in the buffer queue is determined based on one or more global computation resources thresholds and one or more currently utilized computation resources.
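By way of a non-limiting illustration, the resource-aware thread-count determination described above might be sketched as follows. All function and parameter names, and the per-thread cost figures, are hypothetical assumptions for illustration only and are not part of the disclosure:

```python
def thread_count(cpu_limit_pct, mem_limit_pct, cpu_used_pct, mem_used_pct,
                 per_thread_cpu_pct, per_thread_mem_pct, max_threads=8):
    """Return how many analysis threads fit within the remaining headroom
    between the global thresholds and the currently utilized resources."""
    cpu_headroom = max(cpu_limit_pct - cpu_used_pct, 0)
    mem_headroom = max(mem_limit_pct - mem_used_pct, 0)
    by_cpu = int(cpu_headroom // per_thread_cpu_pct)  # threads CPU headroom allows
    by_mem = int(mem_headroom // per_thread_mem_pct)  # threads memory headroom allows
    # Always allow at least one thread so testing can proceed, capped by both
    # resource budgets and an upper bound on concurrency.
    return max(1, min(by_cpu, by_mem, max_threads))
```

For instance, with an 80% threshold on both resources, 40% CPU and 20% memory already in use, and assumed per-thread costs of 10% CPU and 15% memory, four threads fit within the headroom.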
[0009] The set of image frames is divided into subsets including a first subset and a second subset based on the determined number of processing threads. Image frames in the first subset are sequentially processed in a first processing order only until a specific video quality issue is first identified in a first selected image frame in the first subset. Image frames in the second subset are sequentially processed in a second processing order different from the first processing order only until the specific video quality issue is identified in a second selected image frame in the second subset. The method further includes preventing processing of image frames following the first selected image frame in the first subset per the first processing order, and image frames following the second selected image frame in the second subset per the second processing order.
[0010] The first and second selected image frames are designated as start and end positions of the specific video quality issue, respectively. A performance evaluation report is generated for the video application based on information related to one or more of the specific video quality issue, the video, the first selected image frame, the second selected image frame, and the test device.
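The two-subset scan of paragraphs [0009] and [0010] might be sketched minimally as follows, assuming the first subset is walked forward and the second backward (one possible pair of differing processing orders); the function names and the issue detector are hypothetical stand-ins:

```python
def locate_issue(frames, has_issue):
    """Split frames into two subsets; scan the first forward and the second
    in reverse, each stopping at the first frame exhibiting the issue.
    The two hits bound the defective span as (start, end), and all frames
    after each hit (in that subset's processing order) go unprocessed."""
    mid = len(frames) // 2
    first, second = frames[:mid], frames[mid:]
    start = next((f for f in first if has_issue(f)), None)           # first processing order
    end = next((f for f in reversed(second) if has_issue(f)), None)  # second processing order
    return start, end
```

On ten frames where an issue (say, a black-frame run) spans frames 3 through 7, the forward scan stops at frame 3 and the reverse scan at frame 7, skipping the remaining frames in both subsets.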
[0011] The captured image frames in a first format are converted to a common format prior to storing the captured image frames in the buffer queue. A custom header is added to each of the image frames. The custom header includes image related information. The image related information includes frame rate, frame height, frame width, timestamp, video source, and a unique identification number. Each of the image frames along with a corresponding custom header is enqueued in the buffer queue for analysis by the test automation system. Each of the set of image frames corresponding to the selected segment of the rendered video stored in the buffer queue is identified based on the custom header. The test automation system selectively processes only image frames that are positioned at designated intervals in the first subset and in the second subset for identifying the specific video quality issue. The test automation system processes only a predefined number of the image frames in the first subset and in the second subset for identifying the specific video quality issue.
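A non-limiting sketch of the tagging-and-enqueue step in paragraph [0011] follows; the class, field, and function names are illustrative assumptions, not part of the disclosure:

```python
from collections import deque
from dataclasses import dataclass, field
import itertools
import time

_uid = itertools.count(1)  # source of unique identification numbers

@dataclass
class TaggedFrame:
    pixels: bytes              # frame data, already converted to a common format
    header: dict = field(default_factory=dict)

def enqueue(queue, pixels, fps, height, width, source):
    """Attach a custom header carrying the image-related information named in
    paragraph [0011], then enqueue the frame for analysis."""
    frame = TaggedFrame(pixels, {
        "frame_rate": fps, "frame_height": height, "frame_width": width,
        "timestamp": time.time(), "video_source": source, "uid": next(_uid),
    })
    queue.append(frame)
    return frame
```

A later stage can then identify the frames belonging to a selected video segment purely from the headers (e.g. by timestamp range or source), without decoding pixel data.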
[0012] The currently utilized computation resources are dynamically monitored while processing the image frames in the first subset and in the second subset. The method further includes determining whether simultaneous processing of the image frames in the first subset and in the second subset by a frame analyzer in the test automation system is expected to cause the currently utilized computation resources to exceed the global computation resources thresholds. A recommendation is provided to the frame analyzer to restrict utilization of the currently utilized computation resources within the global computation resources thresholds upon determining that the currently utilized computation resources are expected to exceed the global computation resources thresholds. Processing of image frames in one or more of the first subset and the second subset by the frame analyzer is controlled based on the recommendation.
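The monitoring-and-recommendation step of paragraph [0012] could be sketched as a simple predictive check; the names, the string return values, and the per-thread cost model are illustrative assumptions:

```python
def throttle_recommendation(cpu_used, mem_used, cpu_limit, mem_limit,
                            per_thread_cpu, per_thread_mem):
    """Predict whether launching one more simultaneous analysis thread would
    push utilization past either global threshold, and recommend that the
    frame analyzer restrict itself if so."""
    if (cpu_used + per_thread_cpu > cpu_limit
            or mem_used + per_thread_mem > mem_limit):
        return "restrict"  # defer processing of one subset to stay within thresholds
    return "proceed"
```

The frame analyzer would act on the recommendation, e.g. by pausing one subset's thread until the monitored utilization drops back below the thresholds.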
[0013] One or more correction algorithms are identified based on the performance evaluation report for automatically rectifying the specific video quality issue. The correction algorithms comprise one or more of a video encoding method, a video decoding method, and a network parameter configuration method. The video is updated automatically using the identified correction algorithms. It is another objective of the present disclosure to provide a test automation system for testing performance of a video application. The test automation system includes a script executor, a frame-capturing unit, a memory unit, and a frame analyzer. The script executor executes automated test scripts for enabling a test device to transmit a request for a video to a content server via a first communications network. The test device receives the video from the content server and renders the video via the video application. The frame-capturing unit captures image frames from the rendered video. The memory unit includes a buffer queue that stores the captured image frames.
[0014] The frame analyzer directs a thread controller of the test automation system to determine a number of processing threads for processing a set of image frames corresponding to a selected segment of the rendered video stored in the buffer queue based on global computation resources thresholds and currently utilized computation resources. In addition, the frame analyzer divides the set of image frames into subsets including a first subset and a second subset based on the number of processing threads. Moreover, the frame analyzer sequentially processes image frames in the first subset in a first processing order only until a specific video quality issue is identified in a first selected image frame in the first subset. Additionally, the frame analyzer sequentially processes image frames in the second subset in a second processing order different from the first processing order only until the specific video quality issue is identified in a second selected image frame in the second subset. The frame analyzer thereby prevents processing of image frames following the first selected image frame in the first subset per the first processing order, and image frames following the second selected image frame in the second subset per the second processing order. The first and second selected image frames are designated as start and end positions of the specific video quality issue, respectively.
[0015] The frame analyzer generates a performance evaluation report for the video application based on information related to one or more of the specific video quality issue, the video, the first selected image frame, the second selected image frame, and the test device. The frame-capturing unit includes one of an image-capturing unit, a camera, a high-definition multimedia interface cable, and a video capture card. The test automation system further includes a metadata tagger that converts the captured image frames in a first format to a common format prior to storing the captured image frames in the buffer queue. The metadata tagger adds a custom header to each of the image frames. The custom header includes image related information. The image related information includes frame rate, frame height, frame width, timestamp, video source, and a unique identification number.
[0016] The test automation system further includes a thread controller that dynamically monitors the currently utilized computation resources while processing image frames in the first subset and in the second subset. Further, the thread controller determines whether further simultaneous processing of the image frames in the first subset and in the second subset by a frame analyzer in the test automation system is expected to cause the currently utilized computation resources to exceed the global computation resources thresholds. In addition, the thread controller provides a recommendation to the frame analyzer to restrict utilization of the currently utilized computation resources within the global computation resources thresholds upon determining that the currently utilized computation resources are expected to exceed the global computation resources thresholds.
[0017] The frame analyzer controls processing of image frames in one or more of the first subset and the second subset based on the recommendation. The test automation system is communicatively coupled to a video quality management system. The video quality management system adjusts one or more of a video encoding method, a video decoding method, an image correction method, and a network parameter configuration method for automatically rectifying the video quality issues related to the video based on the performance evaluation report. The video application corresponds to one of an over-the-top application, a video-on-demand application, and a media player application.

BRIEF DESCRIPTION OF DRAWINGS

[0018] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0019] FIG. 1 illustrates a block diagram of an exemplary test automation system that tests performance of a video application across one or more test devices simultaneously, in accordance with aspects of the present disclosure;
[0020] FIGS. 2A and 2B illustrate a flow diagram depicting an exemplary method for testing performance of the video application across one or more test devices simultaneously using the test automation system of FIG. 1, in accordance with aspects of the present disclosure; and
[0021] FIG. 3 illustrates a block diagram depicting an exemplary method of testing the performance of the video application simultaneously across two devices using the test automation system of FIG. 1, in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0022] The following description presents an exemplary system and method for automatically testing performance of one or more video applications across different types of multimedia devices. Particularly, the embodiments presented herein describe a test automation system that tests a video application and determines associated capability to render videos across different types of multimedia devices simultaneously with minimal utilization of central processing unit (CPU) resources and memory.
[0023] As noted previously, conventional test automation systems typically place all image frames to be analyzed in a single queue. Further, the conventional test automation systems analyze every image frame in the queue. Processing all the image frames from the single queue increases the demand for the associated CPU resources and the overall testing time. Consequently, such conventional test automation systems require significant CPU resources, memory, and time for testing performance of a video application across different multimedia devices simultaneously.
[0024] Unlike such conventional test automation systems, embodiments of the present test automation system and method enable efficient testing of performance of a video application across different multimedia devices. The test automation system divides sets of image frames captured from a video rendered by the test devices into multiple subsets. The test automation system then selectively analyzes image frames in the subsets to identify quality related issues in the video rendered by the test devices. Thus, the present test automation system obviates the need for analyzing every image frame in the subsets of image frames for identifying video related issues. Hence, the present test automation system uses fewer computational resources such as CPU time and memory, which in turn, allows for faster performance testing across multiple test devices simultaneously.
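The selective analysis described above — processing only frames at designated intervals (claim 4) and optionally only a predefined number of them (claim 5) — might be sketched as follows; the function and parameter names are hypothetical:

```python
def select_frames(subset, interval=1, limit=None):
    """Pick only frames positioned at designated intervals within a subset,
    optionally capped at a predefined count, so the analyzer need not
    examine every frame to identify a video quality issue."""
    picked = subset[::interval]                      # frames at designated intervals
    return picked if limit is None else picked[:limit]  # predefined-number cap
```

For a fifty-frame subset sampled every fifth frame, only ten frames are analyzed instead of fifty, which is where the CPU and memory savings come from.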
[0025] According to aspects of the present disclosure, embodiments of the present test automation system can be used for testing performance of a wide variety of video applications belonging to various media service providers. For example, the present test automation system can be used for testing performance of video applications that stream over-the-top (OTT) content provided by media service providers such as YouTube, Netflix, and Hulu. The present test automation system also tests video applications that stream video-on-demand (VOD) content provided by Now TV, Netflix, Amazon, and Hulu. Additionally, the present test automation system is also capable of testing video applications that stream live content, television content, and video content stored locally in end user devices.
[0026] Further, device manufacturers may also use the present test automation system for testing performance of various video applications across their devices before launching them into the market. The present test automation system, thus, may be used for testing the performance of different video applications. However, for clarity, the present disclosure describes an embodiment of the test automation system configured to test performance of an OTT application across multiple test devices simultaneously.
[0027] FIG. 1 illustrates a block diagram of an exemplary test automation system (100), in accordance with aspects of the present disclosure. In one embodiment, the test automation system (100) is configured to test quality parameters associated with videos rendered on test devices (104A-N) via a video application (102). Examples of the test devices (104A-N) include one or more of set-top boxes, computers, mobile phones, smart televisions, tablets, gaming consoles, or any other multimedia devices that are capable of rendering or streaming video content.
[0028] Furthermore, an example of the video application (102) includes an OTT application. However, it is to be noted that the video application (102) can also be an application that is capable of streaming or playing video-on-demand content, television content, video content stored locally in the test devices (104A-N), or live content received via protocols such as HTTP live streaming (HLS) or real-time streaming protocol (RTSP).
[0029] In one embodiment, the video application (102) is preinstalled in each of the test devices (104A-N). The test automation system (100) tests the video application (102) installed in each of the test devices (104A-N) using an associated script executor (106). The script executor (106) executes one or more automated test scripts stored in a database (108) of the test automation system (100). Execution of the automated test scripts simulates one or more actions in accordance with test parameters outlined in each of a plurality of test cases or test scenarios. Specifically, in one embodiment, execution of the automated test scripts simulates user actions including launching the video application (102) in each of the test devices (104A-N). Execution of the automated test scripts also simulates configuring each of the test devices (104A-N) to transmit a request to a content server (110) for a particular video using the video application (102).
[0030] Each of the test devices (104A-N) transmits the request for the video to the content server (110) via a first communications network (112). Examples of the first communications network (112) include a content delivery network, a Wi-Fi network, an Ethernet network, and a cellular data network. Further, each of the test devices (104A-N) receives the requested video from the content server (110) via the first communications network (112) and streams it using the video application (102).
[0031] Thus, each of the test devices (104A-N) receives the video from the content server (110) and renders the video onto a display associated with the test devices (104A-N) using the video application (102). In one embodiment, the test automation system (100) includes a frame-capturing unit (114) configured to capture the video rendered by each of the test devices (104A-N) for testing the video application (102) streaming to the test devices (104A-N). Examples of the frame-capturing unit (114) include, but are not limited to, one or more of a video capture card, High-Definition Multimedia Interface (HDMI) capture software development kits (SDKs), and an image-capturing unit.
[0032] In one embodiment, the frame-capturing unit (114) employs different video capturing mechanisms for different types of the test devices (104A-N). For example, the frame-capturing unit (114) may employ an image-capturing unit when a test device (104A) corresponds to a mobile phone or a smart TV. In another example, the frame-capturing unit (114) may employ HDMI capture SDKs and a video capture card when the test device (104A) corresponds to a laptop or a set-top-box.
[0033] In certain embodiments, the number of image frames in a video captured by the frame-capturing unit (114) varies based on a frame rate associated with the type of the frame-capturing unit (114). For example, a ten-second video captured by the image-capturing unit may include 500 image frames when a frame rate of the image-capturing unit corresponds to 50 frames per second (fps). Similarly, a ten-second video captured by an HDMI capture SDK may include 400 image frames when a frame rate of the HDMI capture SDK corresponds to 40 fps.
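For illustration only (not part of the disclosed system), the relationship between clip duration, capture frame rate, and frame count described above may be sketched as follows; the function name is hypothetical:

```python
def expected_frame_count(duration_seconds: float, fps: int) -> int:
    # Frames produced = clip duration in seconds x capture frame rate.
    return int(duration_seconds * fps)

# The paragraph's examples: a ten-second clip at 50 fps and at 40 fps.
print(expected_frame_count(10, 50))  # 500
print(expected_frame_count(10, 40))  # 400
```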
[0034] In one embodiment, the frame-capturing unit (114) transmits image frames captured from each of the test devices (104A-N) to the test automation system (100) via a second communications network (116) for analysis. Certain examples of the second communications network (116) include a Wi-Fi network, an Ethernet network, an HDMI cable, a cellular data network, and a short-range communications network such as a conventional Bluetooth or Bluetooth Low Energy network.
[0035] The test automation system (100), thus, receives the image frames captured from each of the test devices (104A-N) by the frame-capturing unit (114) via the second communications network (116). In certain embodiments, the received image frames may be in different formats if the image frames were captured using different types of frame-capturing units (114). For example, image frames captured using a frame-capturing unit (114) such as a camera may be in an Open Source Computer Vision (OpenCV) Mat format. Image frames captured using the HDMI capture SDKs and the video capture card, in contrast, may be in a byte array format.
[0036] In certain embodiments, the test automation system (100) converts the received image frames in different formats to a common format for easy processing of the received image frames. To that end, in one embodiment, the test automation system (100) includes a metadata tagger (120) that converts the received image frames that are in the OpenCV Mat format and the byte array format into a bitmap format that may occupy less memory and may enable faster processing of the image frames.
[0037] Accordingly, the metadata tagger (120), along with other associated systems (106, 122, 124, 126, and 132) in the test automation system (100), for example, may include one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, cloud-based processing systems, cloud computing processors, and/or other suitable computing devices.
[0038] In one embodiment, the metadata tagger (120) adds a custom header to each of the image frames after converting all the image frames to the same format. The custom header associated with each of the image frames includes image related information, for example, frame rate, height of an image frame, and width of the image frame. The custom header further includes timestamp information indicating time at which the image frame was captured, and video source information. Moreover, the custom header further includes a corresponding unique identifier to identify each of the image frames uniquely.
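Purely for illustration, the custom header of paragraph [0038] may be modeled as a simple record; the field names and types below are assumptions, since the specification lists the information carried but not its encoding:

```python
from dataclasses import dataclass

@dataclass
class CustomHeader:
    """Per-frame metadata added by the metadata tagger (120).

    Field names are illustrative only; the specification enumerates
    the information (frame rate, dimensions, timestamp, source,
    unique identifier) without prescribing a layout.
    """
    frame_id: str       # unique identifier for the image frame
    frame_rate: int     # capture frame rate, in fps
    height: int         # image frame height, in pixels
    width: int          # image frame width, in pixels
    timestamp_ms: int   # time at which the frame was captured
    video_source: str   # identifies the originating video / test device

# A hypothetical header for the first frame captured from test device 104A.
header = CustomHeader("104A-000001", 25, 1080, 1920, 0, "test-device-104A")
print(header.frame_id)
```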
[0039] In one embodiment, the test automation system (100) temporarily stores the image frames captured from each of the test devices (104A-N) along with the custom header in frame buffers (118) for a designated time. For example, the test automation system (100) may temporarily store the image frames in the frame buffers (118) for a designated time of one second. Post expiry of the designated time, the test automation system (100) moves each of the image frames in the frame buffers (118) to buffer queues (128) for analysis using the frame analyzer (122). In one embodiment, the frame buffers (118) and the buffer queues (128) reside in a memory unit (130) associated with the test automation system (100).
[0040] In one embodiment, a total number of the frame buffers (118) and a total number of the buffer queues (128) may be equivalent to a total number of test devices (104A-N) involved in testing the video application (102). Hence, the test automation system (100) includes a corresponding frame buffer and a corresponding buffer queue for each of the test devices (104A-N).
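The per-device buffering of paragraphs [0039]-[0040] (one frame buffer and one buffer queue per test device, with frames moved to the queue after the designated hold time) may be sketched as follows; this is an illustrative model, not the claimed implementation:

```python
from collections import deque

class PerDeviceBuffers:
    """One frame buffer and one buffer queue per test device.

    Sketch only: frames rest in the device's frame buffer for a
    designated time, then are moved to its buffer queue for analysis.
    """
    def __init__(self, device_ids):
        self.frame_buffers = {d: [] for d in device_ids}
        self.buffer_queues = {d: deque() for d in device_ids}

    def store(self, device_id, frame):
        # Temporarily hold a captured frame in the device's frame buffer.
        self.frame_buffers[device_id].append(frame)

    def flush(self, device_id):
        # Designated time expired: move held frames to the buffer queue.
        self.buffer_queues[device_id].extend(self.frame_buffers[device_id])
        self.frame_buffers[device_id].clear()
```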
[0041] In certain embodiments, the frame analyzer (122) performs frame analysis by identifying a set of image frames associated with a starting segment, namely, a first segment of the video rendered by each of the test devices (104A-N) from the buffer queues (128). The starting segment of the rendered video is of a designated length. For example, the frame analyzer (122) identifies first and second sets of image frames corresponding to a first one-second video rendered by the test device (104A) and the test device (104N), respectively, from the buffer queues (128) based on information included in the corresponding custom headers. The identified first and second sets of image frames may have twenty-five image frames and thirty image frames, respectively.
[0042] The frame analyzer (122) then directs a thread controller (124) of the test automation system (100) to determine a number of threads to be generated for processing the identified first and second set of image frames. In one embodiment, the thread controller (124) determines the number of threads based on CPU and memory available with the test automation system (100), as described in detail with reference to FIG. 3. In one embodiment, the frame analyzer (122) divides each of the first and second set of image frames into specific subsets of image frames based on the number of threads determined by the thread controller (124).
[0043] In one implementation, the frame analyzer (122) may divide the first set of image frames including twenty-five image frames into two subsets including a first subset and a second subset when the number of threads generated by the thread controller (124) corresponds to two. In one embodiment, the first subset includes twelve image frames from one to twelve and the second subset includes thirteen image frames from thirteen to twenty-five. Further, the frame analyzer (122) places the first subset in a first thread and places the second subset in a second thread.
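One possible division consistent with the twelve/thirteen split described above (remainder frames assigned to the later subsets) can be sketched as follows; the function is a hypothetical illustration:

```python
def split_into_subsets(frames, num_threads):
    """Divide a set of frames into `num_threads` contiguous subsets.

    Remainder frames go to the trailing subsets, matching the example
    in [0043]: 25 frames over 2 threads -> subsets of 12 and 13 frames.
    """
    base, extra = divmod(len(frames), num_threads)
    subsets, start = [], 0
    for i in range(num_threads):
        size = base + (1 if i >= num_threads - extra else 0)
        subsets.append(frames[start:start + size])
        start += size
    return subsets
```

With 25 frames and two threads this yields subsets of 12 and 13 frames; with 30 frames and three threads it yields three subsets of 10, consistent with the examples later in the disclosure.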
[0044] The frame analyzer (122) then processes the image frames in the first subset using a first processing thread, and image frames in the second subset using a second processing thread. Specifically, in one embodiment, the frame analyzer (122) processes the first subset in a first processing order and the second subset in a second processing order for quickly and efficiently identifying video related issues in the first one-second video rendered by the test device (104A). It may be noted that the first processing order may be the same as or different from the second processing order.
[0045] For example, the frame analyzer (122) retrieves the first image frame from the first subset and identifies if there are one or more video related issues in the first image frame using one or more high precision (HP) algorithms. The high precision algorithms implemented in the frame analyzer (122) correspond to image processing algorithms that are capable of identifying different types of video related issues including black frames, macroblocks, video freezes, pixelations, and video buffers. Examples of the HP algorithms include a Hough circle detector and a histogram analyzer.
[0046] The frame analyzer (122) also retrieves the twenty-fifth image frame from the second subset and identifies if there are any video related issues in the twenty-fifth image frame using the HP algorithms. The frame analyzer (122) identifies a specific issue (e.g., a macroblock) in the first image frame and the same issue in the twenty-fifth image frame. In this example, the frame analyzer (122) determines the first image frame to be a start position of the macroblock issue, and the twenty-fifth image frame to be an end position of the macroblock issue. Further, the frame analyzer (122) prevents processing of image frames following the first image frame in the first subset per the first processing order, and image frames following the twenty-fifth image frame in the second subset per the second processing order. Specifically, the frame analyzer (122) refrains from performing frame analysis on intermediate image frames, namely the second image frame to the twenty-fourth image frame, to limit use of CPU resources associated with the test automation system (100).
[0047] If the frame analyzer (122) identifies that neither the first image frame nor the twenty-fifth image frame has the specific issue, the frame analyzer (122) selects a subsequent image frame from each of the first and second subsets for frame analysis. For example, the frame analyzer (122) selects the second image frame from the first subset and the twenty-fourth image frame from the second subset, and performs frame analysis on the selected image frames to identify if there are any video related issues. The frame analyzer (122) continues to select subsequent sets of image frames from the first and second subsets for frame analysis until an image frame with the specific issue is identified.
[0048] For example, a specific video quality issue may start from the fourth image frame and end at the twentieth image frame. In this example, the frame analyzer (122) iteratively selects and analyzes first four image frames from the first subset and last six image frames from the second subset. Upon identifying the specific issue in the fourth image frame and the twentieth image frame, the frame analyzer (122) skips performing frame analysis on the remaining intermediate image frames.
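The bidirectional scan of paragraphs [0044]-[0048] can be sketched as below. This is an illustrative model under stated assumptions: `has_issue` stands in for a high-precision image-processing algorithm (e.g., a macroblock detector), and the issue, if present, is assumed to span contiguous frames reaching into both halves of the segment:

```python
def find_issue_span(frames, has_issue):
    """Scan the first subset forward and the second subset in reverse
    until the start and end frames of a video quality issue are found,
    skipping the intermediate frames.

    Returns ((start_index, end_index) or None, number_of_frames_analyzed).
    """
    mid = len(frames) // 2  # first subset: indices 0..mid-1
    start = end = None
    analyzed = 0
    # First processing order: forward over the first subset.
    for i in range(mid):
        analyzed += 1
        if has_issue(frames[i]):
            start = i
            break
    # Second processing order: reverse over the second subset.
    for j in range(len(frames) - 1, mid - 1, -1):
        analyzed += 1
        if has_issue(frames[j]):
            end = j
            break
    if start is None or end is None:
        return None, analyzed
    return (start, end), analyzed
```

For the example above (25 frames, issue spanning frames four through twenty), the scan analyzes the first four frames forward and the last six frames in reverse, ten frames in total, matching paragraph [0049].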
[0049] Thus, in the previously noted example, the frame analyzer (122) identifies the video specific issue in the first one-second video rendered by the test device (104A) by merely analyzing ten image frames instead of analyzing all twenty-five image frames associated with the first one-second video. Similarly, in one exemplary implementation, the frame analyzer (122) performs frame analysis on each remaining one-second portion of the video rendered by the test device (104A) to complete performance testing of the video application (102) in the test device (104A).
[0050] In one embodiment, the test automation system (100) includes a report generator (126) that generates a performance evaluation report including video related issues identified by the frame analyzer (122) while testing performance of the video application (102) across each of the test devices (104A-N).
[0051] In certain embodiments, since the frame analyzer (122) identifies issues in every one-second video by merely analyzing certain image frames in subsets and not by analyzing every single image frame, the test automation system (100) requires fewer CPU resources. Thus, the test automation system (100) may utilize the remaining CPU resources for faster performance testing of the video application (102) across other test devices (104N) simultaneously, as described in detail with reference to FIGS. 2A-2B.
[0052] FIGS. 2A and 2B illustrate a flow diagram depicting an exemplary method (200) for testing performance of the video application (102) across the test devices (104A-N) simultaneously, in accordance with aspects of the present disclosure. The order in which the exemplary method (200) is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented with additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein.
[0053] Further, in FIGS. 2A and 2B, the exemplary method is illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, or combinations thereof. In the context of software, the blocks represent computer instructions that, when executed by one or more processing subsystems, perform the recited operations.
[0054] At step (202), the script executor (106) launches the video application (102) to be tested across test devices (104A-N) by executing automated test scripts stored in the associated database (108). Execution of the automated test scripts also simulates generating a request for a particular video by each of the test devices (104A-N).
[0055] At step (204), each of the test devices (104A-N) transmits the request for the video to the content server (110) via the first communications network (112). At step (206), each of the test devices (104A-N) receives the requested video from the content server (110) via the first communications network (112). Further, each of the test devices (104A-N) renders the received video on an associated display screen using the video application (102).
[0056] At step (208), the frame-capturing unit (114) captures image frames from the video rendered on each of the test devices (104A-N) via the video application (102). At step (210), the frame-capturing unit (114) transmits the image frames captured from each of the test devices (104A-N) to the test automation system (100) via the second communications network (116). Thus, the test automation system (100) receives the image frames captured from each of the test devices (104A-N) by the frame-capturing unit (114). It may be noted that the received image frames may be in different formats if the image frames were captured using different types of frame-capturing units (114).
[0057] At step (212), the metadata tagger (120) converts all the received image frames that are in different formats to the same format. At step (214), the test automation system (100) temporarily stores the converted image frames in the frame buffers (118). At step (216), the test automation system (100) moves each of the converted image frames from the frame buffers (118) to the buffer queues (128) for frame analysis.
[0058] At step (218), the frame analyzer (122) retrieves and analyzes image frames stored in the buffer queues (128) for evaluating performance of the video application (102) across the test devices (104A-N) simultaneously. At step (220), the report generator (126) generates a performance evaluation report for the video application (102) based on the output of the performance test, including one or more of the specific video quality issues identified, the video, the one or more test devices (104A-N), and the image frames having the specific video quality issues. The performance evaluation report may be used to identify the presence or absence of issues associated with the video application (102). In certain embodiments, the test automation system (100) automatically feeds the performance evaluation report as feedback to an automated video quality management system (132), for example, for identifying one or more correction algorithms. Examples of the one or more correction algorithms include one or more of automatic selection and/or adjustment of video encoding or decoding methods and network parameter configuration methods. Using the test automation system (100) to timely and efficiently test the performance of the video application (102), thus, aids in improving performance of the video application (102) without requiring significant computing or memory resources.
[0059] An exemplary approach associated with simultaneous testing of performance of the video application (102) across two test devices is described with reference to FIG. 3. However, it is to be understood that the test automation system (100) is capable of testing performance of the video application (102) across any number of devices simultaneously.
[0060] FIG. 3 illustrates a block diagram depicting an exemplary method associated with simultaneous testing of performance of the video application (102) across two test devices including a first test device (104A) and a second test device (104N), in accordance with aspects of the present disclosure. As noted previously, the video application (102) is preinstalled in each of the first test device (104A) and the second test device (104N). In order to commence the performance testing, the first test device (104A) and the second test device (104N) render a first video (302) and a second video (304), respectively, using the corresponding video application (102).
[0061] In one embodiment, a frame-capturing unit (306) captures image frames of the first video (302) when the first test device (104A) renders the first video (302) via the video application (102). Similarly, the same or another frame-capturing unit (308) captures image frames of the second video (304) when the second test device (104N) renders the second video (304) via the video application (102). The frame-capturing units (306 and 308) subsequently transmit the captured image frames to the test automation system (100). As noted previously, the test automation system (100) converts the captured image frames into a single format, and adds a custom header to each of the captured image frames. The test automation system (100) also temporarily stores the captured image frames in the frame buffers (118), and subsequently moves the captured image frames to the buffer queues (128) for frame analysis.
[0062] To that end, in one embodiment, the test automation system (100) includes a first buffer queue (309A) and a second buffer queue (309B). The test automation system (100) facilitates queuing of image frames captured from the first test device (104A) in the first buffer queue (309A). In addition, the test automation system (100) facilitates queuing of image frames captured from the second test device (104N) in the second buffer queue (309B). Further, in certain embodiments, each of the first and second buffer queues (309A and 309B) includes multiple memory buffers.
[0063] For example, FIG. 3 depicts three such exemplary memory buffers associated with the first buffer queue (309A) including a first memory buffer (310A), a second memory buffer (310B), and a third memory buffer (310C). Moreover, FIG. 3 depicts another three exemplary memory buffers associated with the second buffer queue (309B) including a fourth memory buffer (312A), a fifth memory buffer (312B), and a sixth memory buffer (312C). In certain embodiments, the test automation system (100) places image frames in the memory buffers (310A-C and 312A-C) based on queue sizes preset for the buffer queues (309A and 309B).
[0064] For example, a designated queue size preset for the buffer queue (309A) corresponds to 200 milliseconds. In this example, the test automation system (100) places image frames associated with the first 200 milliseconds of the video (302) in the memory buffer (310A). Similarly, the test automation system (100) places image frames associated with the second 200 milliseconds in the memory buffer (310B), and image frames associated with the third 200 milliseconds in the memory buffer (310C).
[0065] In another example, a designated queue size preset for the buffer queue (309B) corresponds to 500 milliseconds. In this example, the test automation system (100) places image frames associated with the first 500 milliseconds of the video (304) in the memory buffer (312A), image frames associated with the second 500 milliseconds in the memory buffer (312B), and image frames associated with the third 500 milliseconds in the memory buffer (312C). In one exemplary implementation, the designated queue sizes may be preset for both the buffer queues (309A-B) as one second. Hence, each of the memory buffers (310A-C and 312A-C) would include image frames associated with a one-second video portion. For example, the first memory buffer (310A) of the buffer queue (309A) includes twenty-five image frames associated with a first one-second portion of the video (302). The second memory buffer (310B) includes another twenty-five image frames associated with a second one-second portion of the video (302), and the third memory buffer (310C) includes yet another twenty-five image frames associated with a third one-second portion of the video (302). Similarly, it is to be understood that the buffer queue (309A) may include multiple memory buffers, where each memory buffer may include 25 image frames associated with subsequent one-second portions of the video (302).
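The grouping of captured frames into memory buffers by preset queue size, as described in paragraphs [0063]-[0065], may be sketched as follows; the function is an illustrative assumption, deriving the frames per buffer from the capture frame rate and the queue size:

```python
def assign_to_memory_buffers(frames, fps, queue_size_ms):
    """Group captured frames into memory buffers, each holding the
    frames for one `queue_size_ms` slice of the rendered video."""
    frames_per_buffer = max(1, int(fps * queue_size_ms / 1000))
    return [frames[i:i + frames_per_buffer]
            for i in range(0, len(frames), frames_per_buffer)]
```

At 25 fps with a one-second queue size, three seconds of video (75 frames) fill three memory buffers of twenty-five frames each, matching the example for buffer queue (309A).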
[0066] Similarly, the fourth memory buffer (312A) of the buffer queue (309B) includes thirty image frames associated with a first one-second portion of the video (304). Each of the fifth and sixth memory buffers (312B and 312C) includes corresponding thirty image frames associated with a second one-second portion and a third one-second portion of the video (304), respectively.
[0067] In one embodiment, for simultaneously testing the video application (102) across the test devices (104A and 104N), the frame analyzer (122) analyzes image frames in the buffer queue (309A) and image frames in the buffer queue (309B) in parallel in different processing threads of the test automation system (100). For simplicity, the processing threads are referred to as threads in subsequently described embodiments of the test automation system (100). To commence the processing of image frames in the buffer queues (309A-B), the frame analyzer (122) identifies a set of image frames (314) enqueued in the memory buffer (310A), and another set of image frames (316) enqueued in the memory buffer (312A) based on associated custom header information. In one embodiment, the set of image frames (314) may include twenty-five image frames and the set of image frames (316) may include thirty image frames.
[0068] Post identifying the sets of image frames (314 and 316), the frame analyzer (122) directs the thread controller (124) to determine a number of threads to be generated for processing the sets of image frames (314 and 316). In certain embodiments, the thread controller (124) determines the number of threads based on global computation resources thresholds, namely, global CPU and memory thresholds pre-stored in the database (108), and one or more currently utilized computation resources. The one or more currently utilized computation resources, for example, include currently utilized CPU and memory resources. Specifically, the thread controller (124) determines a first difference between the global CPU threshold and currently utilized CPU as corresponding to available CPU resources. The thread controller (124) also determines a second difference between the global memory threshold and currently utilized memory as corresponding to available memory. The thread controller (124) then determines the number of threads based on the available CPU resources and memory.
[0069] For example, a global CPU threshold pre-stored in the database (108) corresponding to 70% would mean that only a maximum of 70% of the CPU resources available with the test automation system (100) is to be utilized for all processing performed by the test automation system (100). Similarly, a global memory threshold pre-stored in the database (108) may correspond to 5 gigabytes (GB). Thus, in a particular scenario, the thread controller (124) determines available CPU resources as 50% when currently utilized CPU corresponds to 20%. Further, the thread controller (124) determines available memory as 3GB when currently utilized memory corresponds to 2GB.
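The headroom computation of paragraphs [0068]-[0069] reduces to two differences; a minimal sketch, with the function name assumed for illustration:

```python
def available_resources(global_cpu_pct, used_cpu_pct, global_mem_gb, used_mem_gb):
    """Headroom left for new analysis threads: the difference between
    each global computation resource threshold and current utilization."""
    return global_cpu_pct - used_cpu_pct, global_mem_gb - used_mem_gb

# The paragraph's example: 70% CPU / 5 GB thresholds, 20% / 2 GB in use.
cpu_avail, mem_avail = available_resources(70, 20, 5, 2)
print(cpu_avail, mem_avail)  # 50 3
```

The thread controller (124) would then derive a thread count from this headroom together with the other parameters listed in paragraph [0070]; that heuristic is not specified in the disclosure and is therefore not sketched here.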
[0070] The thread controller (124) determines the number of threads to be generated for processing the sets of image frames (314 and 316) based on one or more parameters that aid in determining optimal usage of the available CPU resources and memory. Examples of the parameters include, but are not limited to, a total number of image frames in each of the sets of image frames (314 and 316). The parameters may also include a number of image processing algorithms involved in frame analysis, complexities of such image processing algorithms, and a rate at which video quality issues in a particular image frame are to be identified.
[0071] For example, the thread controller (124) determines the number of threads to be generated for processing the sets of image frames (314 and 316) as five based on one or more of the previously noted parameters. Accordingly, the thread controller (124) may generate five threads including a first thread (317A), a second thread (317B), a third thread (317C), a fourth thread (317D), and a fifth thread (317E). Post generation of the threads (317A-E), the frame analyzer (122) divides each of the sets of image frames (314 and 316) into multiple subsets. In one embodiment, the number of subsets generated by the frame analyzer (122) is equivalent to the number of threads generated by the thread controller (124). With reference to one of the previously noted examples, the frame analyzer (122) divides the sets of image frames (314 and 316) into five subsets when the number of threads generated by the thread controller (124) corresponds to five.
[0072] Specifically, the frame analyzer (122) divides the set of image frames (314) into a first subset (318) and a second subset (320), as depicted in FIG. 3. In addition, the frame analyzer (122) divides the set of image frames (316) into a third subset (322), a fourth subset (324), and a fifth subset (326). The frame analyzer (122) then places the first subset (318) of image frames in the first thread (317A) and the second subset (320) of image frames in the second thread (317B). Similarly, the frame analyzer (122) places the third, fourth, and fifth subsets (322, 324, and 326) of image frames in the third, fourth, and fifth threads (317C, 317D, and 317E), respectively.
[0073] The first subset (318) of image frames placed in the first thread (317A) may include twelve image frames starting from first image frame to twelfth image frame. The second subset (320) of image frames placed in the second thread (317B) may include thirteen image frames starting from thirteenth image frame to twenty-fifth image frame. Similarly, the third, fourth, and fifth subsets (322, 324, and 326) of image frames placed in the third, fourth, and fifth threads (317C, 317D, and 317E), respectively, may include corresponding ten image frames, as further depicted in FIG. 3.
[0074] In one embodiment, the frame analyzer (122) analyzes the subsets (318, 320, 322, 324, and 326) of image frames placed in the threads (317A-E) simultaneously. When the frame analyzer (122) simultaneously processes image frames in the subsets (318, 320, 322, 324, and 326), the thread controller (124) continuously monitors currently utilized CPU and memory to prevent the currently utilized CPU and memory from exceeding global CPU and memory thresholds. Further, the thread controller (124) directs the frame analyzer (122) to control processing of subsets of image frames placed in the threads (317A-E) when the currently utilized CPU and memory approach the global CPU and memory thresholds.
[0075] For example, the global CPU threshold and the global memory threshold pre-stored in the database (108) may correspond to 70% and 5 GB, respectively. In this example, the thread controller (124) identifies that the currently utilized CPU is 69% and the currently utilized memory is 4.9 GB when the frame analyzer (122) simultaneously processes image frames in the subsets (318, 320, 322, 324, and 326). In addition, the thread controller (124) identifies that further simultaneous processing of the image frames in the subsets by the frame analyzer (122) is expected to cause the currently utilized CPU and memory to exceed the global CPU and memory thresholds. Hence, in this example, the thread controller (124) may provide a recommendation to the frame analyzer (122) to process only two subsets (318 and 320) in lieu of processing all five subsets (318, 320, 322, 324, and 326) simultaneously to restrict utilization of the currently utilized CPU and memory within the global CPU and/or memory thresholds. Accordingly, the frame analyzer (122) only processes image frames in the two subsets (318 and 320) based on the recommendation.
[0076] An exemplary method used by the frame analyzer (122) to simultaneously analyze the subsets of image frames (318 and 320) placed in the threads (317A and 317B), respectively, is described in the following sections. However, it is to be understood that the frame analyzer (122) may similarly analyze the other subsets of images frames (322, 324, and 326) placed in other threads (317C, 317D, and 317E).
[0077] In one embodiment, the frame analyzer (122) analyzes the subsets of image frames (318 and 320) using one or more image processing algorithms that identify one or more specific video quality issues in the image frames. For example, the frame analyzer (122) executes an image-processing algorithm on the first image frame placed in the first thread (317A) to identify whether the first image frame includes a macroblock. Simultaneously, the frame analyzer (122) executes the image-processing algorithm on the twenty-fifth image frame placed in the second thread (317B) to identify whether the twenty-fifth image frame includes a macroblock.
[0078] Further, in one example, the frame analyzer (122) identifies that neither first image frame nor twenty-fifth image frame includes any macroblocks. The frame analyzer (122) then selects a subsequent image frame from the first subset (318) in a forward order and a subsequent image frame from the second subset (320) in a reverse order for frame analysis. Specifically, the frame analyzer (122) selects the second image frame from the first subset (318) and the twenty-fourth image frame from the second subset (320) for frame analysis. The frame analyzer (122) then re-executes the image-processing algorithm on both the second and twenty-fourth image frames to identify if either of the second and twenty-fourth image frames includes macroblocks.
[0079] For instance, the frame analyzer (122) identifies that macroblocks are absent in the second image frame but are present in the twenty-fourth image frame. In this scenario, the frame analyzer (122) refrains from processing other image frames in the second subset (320). However, the frame analyzer (122) iteratively re-executes the image-processing algorithm on subsequent image frames in the first subset (318) until identifying an image frame with a macroblock, or until the end of the subset (318).
[0080] In one example, the frame analyzer (122) identifies that a macroblock issue starts at the third image frame and ends at the twenty-fourth image frame. Thus, the frame analyzer (122) skips processing intermediate image frames, that is, from the fourth image frame to the twenty-third image frame (not shown in FIGS). Accordingly, the frame analyzer (122) identifies the macroblock issue in the set of image frames (314) by merely processing five image frames instead of processing all twenty-five image frames in the set (314). Thus, the frame analyzer (122) uses significantly fewer CPU resources associated with the test automation system (100).
[0081] In certain embodiments, the frame analyzer (122) is also capable of identifying more than one specific type of video quality issue in each image frame in the set (314). To that end, the frame analyzer (122) simultaneously executes multiple image processing algorithms that are capable of identifying different video issues. For example, the frame analyzer (122) includes a first image-processing algorithm for identifying black frames and a second image-processing algorithm for identifying pixelations. The frame analyzer (122) simultaneously executes both algorithms on every subsequent image frame in the first subset (318), in a forward order, until identifying specific image frames with black frames and/or pixelations, or until the end of the first subset (318).
[0082] Similarly, the frame analyzer (122) executes both algorithms on every subsequent image frame in the second subset (320), in a reverse order, until identifying specific image frames with black frames and/or pixelations, or until the starting point of the second subset (320). In one example, the frame analyzer (122) identifies that the first image frame has neither black frames nor pixelations. Further, the frame analyzer (122) identifies that both the second image frame and the twenty-fifth image frame have only black frames. In this example, the frame analyzer (122) identifies that black frames start at the second image frame and end at the twenty-fifth image frame. Accordingly, the frame analyzer (122) refrains from executing the first image-processing algorithm on intermediate image frames. However, the frame analyzer (122) may execute the second image-processing algorithm on the intermediate image frames to identify image frames that indicate the beginning and end of a pixelation error.
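One way to picture the multi-detector behavior of paragraphs [0081]–[0082] is a span finder that runs every detector on each frame but retires a detector once its boundary is known. The `find_spans` name and the detector callables are illustrative assumptions, not part of the specification.

```python
def find_spans(frames, detectors):
    """Run several issue detectors over one segment. A detector stops
    participating in the forward pass once its span start is found, and
    in the reverse pass once its span end is found, so intermediate
    frames are only analysed by detectors that still need them.
    `detectors` maps an issue name to a per-frame boolean callable."""
    start = {name: None for name in detectors}
    end = {name: None for name in detectors}

    # Forward pass: find where each issue begins.
    for i, frame in enumerate(frames):
        pending = [n for n in detectors if start[n] is None]
        if not pending:
            break
        for name in pending:
            if detectors[name](frame):
                start[name] = i

    # Reverse pass: find where each detected issue ends.
    for j in range(len(frames) - 1, -1, -1):
        pending = [n for n in detectors
                   if start[n] is not None and end[n] is None]
        if not pending:
            break
        for name in pending:
            if detectors[name](frames[j]):
                end[name] = j

    return {name: (start[name], end[name]) for name in detectors}
```

In the example above, black frames spanning almost the whole segment are bounded after a few checks, while the pixelation detector keeps running over intermediate frames until its own boundaries are found.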
[0083] In certain scenarios, the video (302) rendered by the test device (104A) may have one or more issues starting from a particular image frame associated with a first one-second video portion and ending at another image frame associated with a second one-second video portion. For example, the video (302) rendered by the test device (104A) includes macroblocks that start from the thirteenth image frame in the memory buffer (310A) and end at the second image frame in the memory buffer (310B). In this example, the frame analyzer (122) identifies all image frames from the thirteenth image frame to the twenty-fifth image frame in the memory buffer (310A) as having macroblocks. The frame analyzer (122) further identifies the first and second image frames in the memory buffer (310B) as having macroblocks. Based on the custom header information included in the identified image frames having macroblocks, the frame analyzer (122) identifies that the macroblock issue started from the thirteenth image frame in the memory buffer (310A) and ended at the second image frame in the memory buffer (310B).
[0084] In certain embodiments, the frame analyzer (122) analyzes image frames in the first subset (318) and in the second subset (320) in any order. For example, the frame analyzer (122) analyzes image frames in both the first subset (318) and the second subset (320) in the same order. In this example, the frame analyzer (122) initiates processing of the first subset (318) from the first image frame onward and initiates processing of the second subset (320) from the thirteenth image frame onward.
[0085] Additionally, in one embodiment, the frame analyzer (122) analyzes only those image frames positioned at a designated interval of frames in the first subset (318) and in the second subset (320). For example, the frame analyzer (122) sequentially analyzes every image frame in the first and second subsets (318 and 320) when the designated interval of frames corresponds to one. Particularly, the frame analyzer (122) sequentially analyzes every image frame until identifying image frames from the first subset (318) and the second subset (320) that have the same video quality issue.
[0086] In another example, the frame analyzer (122) skips every alternate image frame while sequentially analyzing image frames in the first subset (318) and in the second subset (320) when the designated interval of frames corresponds to two. For example, post analyzing the first image frame, the frame analyzer (122) may skip analyzing the second image frame and may analyze the third image frame to expedite testing of the video application (102) and to use fewer CPU resources associated with the test automation system (100). In certain embodiments, the frame analyzer (122) may also analyze image frames in the first and second subsets (318 and 320) up to a predefined number of image frames. In certain scenarios, it is possible that neither image frames in the first subset (318) nor image frames in the second subset (320) include video quality issues. In those scenarios, analyzing image frames up to the predefined number of image frames would result in lower CPU resource utilization.
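The interval-based scan of paragraphs [0085]–[0086] reduces to sampling a subset at a fixed stride. A minimal sketch, with `has_issue` again standing in for an image-processing algorithm:

```python
def scan_with_stride(frames, has_issue, stride=2):
    """Analyse only every `stride`-th frame of a subset in forward order.
    A stride of 1 analyses every frame; a stride of 2 skips every
    alternate frame. Returns the index of the first sampled frame that
    exhibits the issue, or None if no sampled frame does."""
    for i in range(0, len(frames), stride):
        if has_issue(frames[i]):
            return i
    return None
```

With a stride of two, a thirteen-frame subset costs at most seven detector invocations instead of thirteen, at the price of locating the issue boundary only to within one frame.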
[0087] For example, in one embodiment, the predefined number of image frames corresponds to seven. In this example, the frame analyzer (122) analyzes only the first seven image frames in the first subset (318) and the last seven image frames in the second subset (320). When the frame analyzer (122) identifies that neither the first seven image frames nor the last seven image frames include any video quality issues, the frame analyzer (122) skips analyzing the intermediate image frames based on the assumption that intermediate image frames are unlikely to have video quality issues. Thus, the frame analyzer (122) allows for a further reduction in use of CPU resources associated with the test automation system (100).
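The budgeted spot-check of paragraph [0087] can be sketched as below. The early return on the first detected issue is a simplification of mine; the function name and return shape are illustrative.

```python
def spot_check(frames, has_issue, budget=7):
    """Analyse only the first `budget` and last `budget` frames of a
    segment, on the assumption that the intermediate frames are unlikely
    to contain issues. Returns (issue_found, frames_checked).
    `has_issue` is a stand-in for an image-processing algorithm."""
    checked = 0
    for frame in list(frames[:budget]) + list(frames[-budget:]):
        checked += 1
        if has_issue(frame):
            return True, checked
    return False, checked
```

For a clean twenty-five-frame segment, the spot check spends fourteen detector invocations instead of twenty-five before concluding that the segment is issue-free.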
[0088] In addition, the frame analyzer (122) deletes the set of image frames (314) from the first memory buffer (310A) post analyzing the set of image frames (314) to reduce usage of memory associated with the test automation system (100). Subsequently, the frame analyzer (122) analyzes sets of image frames in the other memory buffers (310B and 310C) in a similar manner by dividing image frames into subsets and by processing image frames in the subsets in the same or reverse order. The previously described embodiments provide details of a method used by the frame analyzer (122) to analyze image frames captured from the test device (104A). However, it is to be understood that the frame analyzer (122) may similarly analyze image frames captured from the test device (104N) in parallel with analyzing image frames captured from the test device (104A).
[0089] Post analyzing the desired image frames captured from the test devices (104A and 104N), the report generator (126) of the test automation system (100) generates a performance evaluation report. In one embodiment, the report generator (126) generates the performance evaluation report based on a list of video quality issues identified by the frame analyzer (122) in the first and second videos (302 and 304) rendered on the first and second test devices (104A and 104N), respectively, using the video application (102). The report generator (126) also uses information related to the videos (302 and 304), the image frames having the video quality issues, the test devices (104A and 104N), and video quality parameters to generate the performance evaluation report. The video quality parameters, for example, include a number, type, severity, and frequency of occurrence of the identified video quality issues.
[0090] As noted previously, the present test automation system (100) is capable of testing one or more video applications (102) simultaneously across multiple test devices (104A-N) without needing significant computational resources. To that end, the test automation system (100) divides image frames associated with every designated segment of a video into multiple subsets. Subsequently, the test automation system (100) processes only certain image frames instead of processing all image frames in the subsets for efficiently identifying video quality issues, which reduces the need for associated CPU and memory resources, and in turn, the time needed for performance testing of the video application (102) across the test devices (104A-N).
[0091] Further, unlike conventional test automation systems that place all image frames captured from a test device in a single queue, the test automation system (100) places image frames captured from different test devices in different buffer queues and processes the image frames simultaneously to expedite the performance testing of the video application (102). Additionally, the test automation system (100) converts all image frames captured using different types of the frame-capturing units (114) into the same format such that the converted image frames occupy less memory than the original image frames in the buffer queues (128). Moreover, post analyzing image frames, the test automation system (100) deletes the image frames from the buffer queues (128), which further reduces associated memory usage.
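The per-device queueing of paragraph [0091] maps naturally onto one queue and one worker thread per test device. A hedged sketch using Python's standard library; the `analyse` placeholder, frame dictionaries, and sentinel-based shutdown are illustrative assumptions rather than the specification's implementation.

```python
import queue
import threading

def analyse(frame):
    """Placeholder for a frame-analysis step (hypothetical)."""
    return "issue" if frame.get("bad") else "ok"

def start_worker(frames_q, results, device_id):
    """Drain one device's buffer queue in its own thread, so frames
    captured from different test devices are processed simultaneously."""
    def run():
        while True:
            frame = frames_q.get()
            if frame is None:   # sentinel: capture for this device is done
                break
            results.append((device_id, analyse(frame)))
            # Dropping the local reference lets the frame be reclaimed,
            # mirroring the deletion of analysed frames from the queue.
    worker = threading.Thread(target=run)
    worker.start()
    return worker
```

A usage example: create one queue per device, start a worker for each, and join the workers once a sentinel has been enqueued behind the captured frames.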
[0092] Furthermore, the thread controller (124) of the test automation system (100) continuously monitors currently utilized CPU and memory to prevent the frame analyzer (122) from utilizing CPU and memory resources above global CPU and memory thresholds while processing image frames in the subsets. The thread controller (124) also provides recommendations to the frame analyzer (122) on a number of subsets of image frames to be processed simultaneously without exceeding the global CPU and memory thresholds.
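The thread controller's recommendation in paragraph [0092] amounts to computing how many concurrent subset-processing threads fit under the global thresholds. A minimal sketch: all threshold and per-thread cost figures below are illustrative assumptions, and live CPU/memory readings would come from a monitoring facility not shown here.

```python
def recommend_threads(cpu_now, mem_now, cpu_cap=80.0, mem_cap=80.0,
                      per_thread_cpu=5.0, per_thread_mem=2.0, max_threads=8):
    """Recommend how many subsets of image frames may be processed in
    parallel without crossing the global CPU and memory thresholds.
    All arguments are percentages of total capacity (illustrative)."""
    cpu_room = max(0.0, cpu_cap - cpu_now)   # headroom below CPU threshold
    mem_room = max(0.0, mem_cap - mem_now)   # headroom below memory threshold
    # The tighter of the two budgets bounds the recommendation.
    n = min(int(cpu_room // per_thread_cpu),
            int(mem_room // per_thread_mem),
            max_threads)
    return max(0, n)
```

When the system is already at or above a threshold, the recommendation drops to zero, which is what prevents the frame analyzer from starving other applications of resources.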
[0093] In contrast, conventional test automation systems do not have such control mechanisms implemented to control processing of image frames. Hence, the conventional test systems often utilize all available CPU and memory resources for analyzing image frames in the single queue and for processing other applications running on the conventional test systems. A prolonged usage of all available CPU and memory resources may cause deadlocks, excessive heating, and/or shutdowns, thereby impeding the performance testing of the video application (102).
[0094] The present test automation system (100) automatically feeds the performance evaluation report as feedback to an automated video quality management system (132), for example, for identifying one or more correction algorithms. Examples of the one or more correction algorithms include automatic selection and/or adjustment of video encoding or decoding methods and network parameter configuration methods. The automated video quality management system (132) uses the identified correction algorithms to automatically update the videos (302 and 304). Use of the test automation system (100), therefore, allows for faster testing of the video application (102) without requiring significant computing or memory resources, thereby significantly reducing the time needed for development, enhancement, testing, and release of the video application (102) in the market.
[0095] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0096] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the claimed invention.

Documents

Application Documents

# Name Date
1 201941035588-POWER OF AUTHORITY [04-09-2019(online)].pdf 2019-09-04
2 201941035588-FORM 3 [04-09-2019(online)].pdf 2019-09-04
3 201941035588-FORM 18 [04-09-2019(online)].pdf 2019-09-04
4 201941035588-FORM 1 [04-09-2019(online)].pdf 2019-09-04
6 201941035588-ENDORSEMENT BY INVENTORS [04-09-2019(online)].pdf 2019-09-04
7 201941035588-DRAWINGS [04-09-2019(online)].pdf 2019-09-04
8 201941035588-COMPLETE SPECIFICATION [04-09-2019(online)].pdf 2019-09-04
9 201941035588-FER.pdf 2021-10-21
10 201941035588-FORM-26 [10-03-2022(online)].pdf 2022-03-10
11 201941035588-FORM 3 [10-03-2022(online)].pdf 2022-03-10
12 201941035588-FER_SER_REPLY [10-03-2022(online)].pdf 2022-03-10
13 201941035588-CLAIMS [10-03-2022(online)].pdf 2022-03-10
14 201941035588-PatentCertificate16-06-2023.pdf 2023-06-16
15 201941035588-IntimationOfGrant16-06-2023.pdf 2023-06-16

Search Strategy

1 SearchStrategyE_29-09-2021.pdf
