BrightScript Profiler is a tool created for collecting and analyzing channel metrics that can be used to determine where performance improvements and efficiencies can be made in the channel.
The BrightScript profiler provides the following metrics for a channel:
Each of the above metrics can be used to diagnose problems and provide guidance to the channel developer to improve channel performance.
The workflow of the BrightScript Profiler is as follows:
Below is the list of manifest keys used by the profiler:
|Type|Default|Required|Description|
|---|---|---|---|
|boolean|0|Yes|Turns on BrightScript profiling when the channel is running. This is the master flag and must be set to 1 for any other profiling options to take effect.|
|boolean|0|If using memory profiling|Turns on memory profiling. Only has effect if the master profiling flag is also enabled.|
If bsprof_pause_on_start is enabled, profiling is paused immediately after the channel launches until it is manually resumed with the bsprof-resume command on port 8080. This is useful for profiling isolated parts of a channel's UI or operations, rather than the entire startup sequence of the channel.
Sets how often profiling samples are taken while the channel is running. Only has effect if profiling is enabled.

A sample ratio of 1.0 causes every BrightScript statement to be measured and integrated into the profiling data for the channel. Unfortunately, a sample ratio of 1.0 may cause some complex channels to run very slowly when profiling is enabled, making them difficult to test. Choosing a lower sample ratio for such channels can make them more usable while profiling is enabled. Because a higher ratio yields more accurate data, set the ratio to 1.0 whenever possible.

bs_prof_sample_ratio can be adjusted downward from 1.0, which is the default and measures every BrightScript statement. A sample ratio of 1.0 has some performance impact, but in most cases it does not affect the usability of the channel and provides the most accurate data. However, if the channel is overly sluggish with a ratio of 1.0, reduce the ratio to lessen the profiler's overhead.
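Putting the entries named in this document together, the profiling-related portion of a channel's manifest might look like the following sketch (values are illustrative; the master enable flag and memory-profiling flag from the table above must also be set under their documented key names):

```
# Illustrative profiling entries (values shown are examples)
bs_prof_sample_ratio=1.0
bsprof_pause_on_start=1
bsprof_data_dest=network
```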
To use the memory profiler, sideload, run, and then exit the channel. The profiling data is complete only after the channel exits. Note that memory data can also be streamed to a network; the advantage of streaming the data is that it consumes significantly less memory on the device while the channel is running.
Pausing and resuming profiling
Channel profiling can be paused and resumed at any time. Use the following commands on port 8080 to either pause or resume the memory profiler:
If the profiler is paused, very little data is written regardless of the data destination. This allows the profiling data (generally, the data relevant to specific parts of a channel's UI or other operation) to be collected and analyzed later. These two commands are particularly useful when combined with the bsprof_pause_on_start manifest entry.
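As a sketch, these commands can be issued by opening a raw TCP connection to port 8080 on the device, for example with telnet (the device address shown below is hypothetical):

```
telnet 192.168.1.100 8080
bsprof-status
bsprof-pause
bsprof-resume
```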
For example, if starting video playback is slow or seems to cause memory leaks, the bsprof_pause_on_start=1 entry can be set in the channel's manifest. After the channel is launched, but prior to video playback, execute the bsprof-resume command on port 8080 to begin collecting profiling data. After performing the UI operations to be profiled, execute the bsprof-pause command to suspend collection of profiling data. Then, exit the channel to make the profiling data available for analysis. In this scenario, the profiling data pertains specifically to the operations performed between the bsprof-resume and bsprof-pause commands.
These profiling commands exist on port 8080 (Roku OS Versions 9 and later):
|Command|Description|
|---|---|
|bsprof-status|Get the status of BrightScript profiling|
|bsprof-pause|Pause the generation of profiling data|
|bsprof-resume|Resume the generation of profiling data|
The channel's manifest entry bsprof_data_dest determines how the profiling data is retrieved from the device. The data can be stored locally on the device and downloaded after the channel finishes running and exits, or it can be streamed over a network connection while the channel is running.
Local data storage is the default destination for profiling, though it can be explicitly selected by adding bsprof_data_dest=local to the channel's manifest. When using this destination, the data becomes available on the device's Application Installer page after the channel exits: click Profiling Data to generate a .bsprof file and a link to download the data from your Roku device.
Available since firmware version 9
In order to stream a channel's profiling data to a network while the channel is running, add
bsprof_data_dest=network to the channel's manifest. Streaming data over the network is especially useful when profiling a channel's memory usage because all memory operations are included in the profiling data, and the amount of space necessary to store the data can be very large. By streaming the data to a network, the data size is limited primarily by the host computer receiving the data, and not by the available memory on the device itself. Even while streaming the profiling data to the network, there are still additional demands placed on the device's resources while profiling as compared to running a channel without profiling. However, the use of resources on the device is significantly reduced.
When this feature is enabled, the start of the channel is delayed until the device receives a network connection from the host that will collect the data. When the channel is launched, a message similar to the following appears on the port 8085 developer console:
08-31 23:15:29.542 [scrpt.prof.connect.wait] Waiting for connection to http://192.168.1.1:8090/bsprof/channel.bsprof
Connect to the URL with wget, curl, or a web browser. Once a connection is received from one of those programs, the following message appears on the developer console:
08-31 23:15:38.939 [scrpt.prof.connect.ok] profiler connected
When the channel exits, the following message appears on the developer console:
08-31 23:16:04.774 [scrpt.prof.save.ok] Profiling data complete, sent via network
Once that message appears, the device closes the profiling connection and the remote file contains the complete profiling data.
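For example, the connection described above could be made with curl, which writes the streamed profiling data to a local file (the URL is the one printed in the "Waiting for connection" console message; the address shown is illustrative):

```
curl -o channel.bsprof http://192.168.1.1:8090/bsprof/channel.bsprof
```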
After downloading the .bsprof file, the data can be viewed using the BrightScript Profiler Visualization Tool.
BrightScript Profiler Visualization Tool - CPU output view
BrightScript Profiler Visualization Tool - Memory output view
The profiling data is divided into 5 main sections:
The CPU time and wall-clock time sections are further divided into separate self, callees, and total sections:

- self refers to the CPU/wall-clock time the function consumes itself
- callees is the amount of time consumed by any functions called by the original function
- total is the amount of time consumed by the original function (self) and any functions it calls (callees)
This section of the profiling data contains the function calls in each thread. For SceneGraph applications, each thread corresponds to either the main BrightScript thread or a single instance of a Task node. For example, if you have a Task node that is instantiated multiple times, each instance will appear as a separate thread. The results are the same for any custom <component> in the channel that is instantiated multiple times. The main BrightScript thread (Thread main) is also represented as a single thread even though it has no associated SceneGraph component.
The first three columns of the visualization tool list:
|CPU time refers to the number of operations each function takes to complete; this number should be roughly equal on low-end and high-end Roku devices.|
The wall-clock time columns list:
|Wall-clock time refers to the real-world time that a function takes to complete. This value can vary across Roku devices. For example, a function may take an equal number of operations to complete on different Roku devices, but a low-end device can take more real-world time to complete each operation than a high-end device.|
Function call counts lists the number of times the functions were called when the channel ran with profiling enabled.
|Metric|Description|
|---|---|
|calls|Number of times a function was called|
|cpu.self|CPU* used in a function, itself|
|cpu.callees|CPU* used in functions called by a function|
|cpu.total|cpu.self + cpu.callees|
|mem.self|Memory allocated within a function itself, but not freed (leaks)|
|mem.callees|Memory allocated by functions called by a function, but not freed (leaks)|
|mem.total|mem.self + mem.callees|
|tm.self|Real (wall-clock) time spent on a function, itself|
|tm.callees|Real (wall-clock) time spent on functions called by a function|
|tm.total|tm.self + tm.callees|
|avg_*|Average of the metric, over the number of calls (e.g., if cpu.self=100 and calls=2, avg_cpu_self will be 50)|
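The avg_* columns are simple per-call averages of the corresponding raw metric. As a quick sketch (illustrative numbers only):

```shell
# Illustrative: how an avg_* column is derived from a raw metric.
cpu_self=100                       # total CPU units spent in the function itself
calls=2                            # number of times the function was called
avg_cpu_self=$((cpu_self / calls))
echo "avg_cpu_self=$avg_cpu_self"  # prints avg_cpu_self=50
```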
A “memory leak” is simply any memory that is allocated, but not freed, while the profiler is running. If memory is freed while profiling is paused, that free is not tracked, and the memory may show up as “leaked.”
Time is measured as if a stopwatch were used to time the action. For example, if a function makes a network call, there may be very little CPU time used, but a significant amount of time waiting for the network response.
If any of these metrics appear in a call path, they are specific to that call path. For example, in this call path:

func1() -> func2()

The metrics for func2() are specific to when it is called from func1().
However, in the table below, the metrics displayed are the totals for all calls to each function, on any call path.
Here are a few key points on how to use this data to improve channel performance:
|Data Type|Definition and Best Use|
|---|---|
|High wall-clock time but low CPU time|This pattern shows a function is consistently waiting, whether for input or for a response from an external source. These functions are best suited for Task nodes so that they don't block the main thread.|
|Complex functions|Try to simplify the functions as much as possible. If a function handles multiple tasks, consider breaking it into several functions to isolate how much CPU or wall-clock time each task consumes.|
|Functions that consume a large amount of CPU or wall-clock time|Try to reduce the number of calls to these functions as much as possible. Move functions to Task nodes if they are consistently waiting. A function can be determined to be waiting if its wall-clock time is high but its CPU cost is low.|