built-in React performance tools were used to profile component render timing. We took several steps to improve render times:
- React Best Practices: We ensured that the UI components followed React best practices, e.g. implementing the shouldComponentUpdate lifecycle method where necessary.
- Fewer HOCs: Where possible, we migrated away from higher-order components by transitioning to utility functions or moving logic into a parent component.
- No Prop Spreads, and Collapsing Props: Spreading props causes time to be spent iterating through objects. Collapsing multiple props into a single object where possible helps reduce comparison time in the shouldComponentUpdate lifecycle.
- Observability: Taking a page out of our custom framework’s playbook, we introduced observability of video player state into components that need to be re-rendered most often. This helped reduce render cycles at our root component.
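The shouldComponentUpdate practice above boils down to a shallow comparison of props. The sketch below illustrates the idea; `PlayerControls` and its props are hypothetical names, not components from the actual codebase:

```javascript
// Shallow comparison of two props objects -- the kind of check a
// shouldComponentUpdate implementation typically performs. Collapsing
// many props into one object also shrinks the key set compared here.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => a[key] === b[key]);
}

// Sketch of a component opting out of unnecessary re-renders.
class PlayerControls /* extends React.Component */ {
  shouldComponentUpdate(nextProps) {
    // Only re-render when at least one prop actually changed.
    return !shallowEqual(this.props, nextProps);
  }
}
```

Note that a shallow check like this only pays off when props are kept referentially stable between renders, which is one reason prop spreads (which produce fresh objects) were avoided.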
With the visual design and performance changes made, a new AB test was launched. After patiently waiting, the results were in. Another drumroll, please: members streamed the same amount with the React playback UI as with the custom framework! In the Summer of 2017, we rolled out React in playback for all members.
Under the Hood: Simplifying Playback Logic
In addition to using React to make the UI component layer more accessible and easier to develop across multiple teams, we wanted to do the same for the player-related business logic. We have multiple teams working on different kinds of playback logic at the same time, such as interactive titles (where the user makes choices to participate in the story), movie and episode playback, video previews in the browse experience, and unique content during Post Play.
We chose to use Redux in order to single-source and encapsulate the complex playback business logic. Redux is a well-known library/pattern in web UI engineering, and it facilitates separation of concerns in ways that met our goals. By combining Redux with data normalization, we enabled parallel development across teams in addition to providing standardized, predictable ways of expressing complex business logic.
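In Redux, business logic like this is expressed as pure reducers: predictable state transitions driven by dispatched actions. A minimal sketch follows; the action names and state shape are illustrative, not the actual playback schema:

```javascript
// A pure reducer expressing playback logic as predictable state
// transitions. State shape and action types are hypothetical.
const initialState = { status: 'idle', positionMs: 0, volume: 1 };

function playbackReducer(state = initialState, action) {
  switch (action.type) {
    case 'PLAYBACK_STARTED':
      return { ...state, status: 'playing' };
    case 'PLAYBACK_PAUSED':
      return { ...state, status: 'paused' };
    case 'PLAYBACK_SEEKED':
      return { ...state, positionMs: action.positionMs };
    default:
      return state;
  }
}
```

Because the reducer is a pure function, each team can test its slice of logic in isolation, which is part of what made parallel development across teams practical.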
Separating Video Lifecycle From UI Lifecycle
Allowing the UI component tree to control the logic concerning the lifecycle of the actual video can result in a slow user experience. UI component trees usually have their lifecycle represented in a standardized set of methods, such as React’s componentDidMount, componentDidUpdate, etc. When the logic for creating a new video playback is hidden in a UI lifecycle method that is deep inside of a component tree, the user must wait until that specific component is called before the playback can even be initiated. After being initiated, the user must wait until the playback is sufficiently loaded in order to begin viewing the video.
When the UI is rendered on the server, the initial DOM is shipped to the client. This DOM doesn’t include a loaded video or any buffered data needed to start playback. In the case of React, the client UI needs to rebuild itself on top of this initial DOM, and then go through a lifecycle sequence to begin loading the video.
However, if the logic for managing the video playback exists outside of the UI component tree, it can be executed from anywhere inside of the application, such as before the UI tree is rendered during the initial application loading sequence. By kicking off the creation of a video in parallel with rendering the UI, it gives the application more time to create, initialize, and buffer video playback so that when the UI has finished rendering, the user can start playing the video sooner.
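The parallelism described above can be sketched with promises. The functions here (`createPlayback`, `renderUi`, `boot`) are stand-ins for real implementations, shown only to illustrate that neither task waits for the other:

```javascript
// Because playback creation lives outside the component tree, it can
// start as soon as the app boots, in parallel with UI rendering.
async function createPlayback(videoId) {
  // ...request manifest, open the media source, begin buffering...
  return { videoId, buffered: true };
}

async function renderUi() {
  // ...mount the UI component tree...
  return 'ui-ready';
}

async function boot(videoId) {
  // Kick both off immediately; await them together.
  const [playback, ui] = await Promise.all([
    createPlayback(videoId),
    renderUi(),
  ]);
  return { playback, ui };
}
```

By the time the UI finishes rendering, the playback has had the whole render window to create, initialize, and buffer.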
Standardizing the Data Representation of a Video Playback
Since video playback is composed of a series of dynamic events, it can pose a problem when there are different parts of an application that care about the state of a video playback. For example, one part of an application may be responsible for creating a video, another part responsible for configuring it based on user preferences, and yet another responsible for managing the real time control of playing, pausing, and seeking.
In order to encapsulate knowledge of a video playback, we created a standardized data structure to represent it. We were then able to create a single, central location to store the data structure for each video playback so that both the business logic and the UI could access them. This enabled intelligent rules governing video playbacks, multiple UIs that operate on a single set of data, and easier testing.
The standardized playback data structure can be created from any source of video: a custom video library, or a standard HTML video element. Using the normalized data frees the UI from having to know about the specific video implementation.
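One way to picture this normalization: adapters that map each video implementation onto the same record shape, so consumers never branch on the source. The field and function names below are hypothetical:

```javascript
// Normalize different video implementations into one standardized
// playback record. Field names are illustrative only.
function fromHtmlVideoElement(el) {
  return {
    id: el.id,
    durationMs: el.duration * 1000,
    positionMs: el.currentTime * 1000,
    paused: el.paused,
    source: 'html-video',
  };
}

function fromCustomPlayer(player) {
  return {
    id: player.playbackId,
    durationMs: player.getDuration(),
    positionMs: player.getPosition(),
    paused: !player.isPlaying(),
    source: 'custom',
  };
}
```

Both adapters produce the same shape, so the UI and business logic can consume either without knowing which player is underneath.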
Adding Support for Multiple Video Playbacks
When we have the playback data for every existing video single-sourced in the application, independent of the UI, the application can define business logic rules that coordinate single or multiple video playbacks. If each video were hidden inside a particular instance of a UI component, and the components existed across completely different areas of the UI, coordination would be difficult and would force the UI components to have knowledge of each other when they probably shouldn't.
Some areas of logic that become easier with the UI-independent playback data and multiple players are:
- Volume & mute control.
- Play & pause control.
- Playback precedence for autoplay.
- Constraints on the number of players allowed to coexist.
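With all playback records in one collection, rules like the last two bullets reduce to plain functions over that collection. The sketch below is illustrative; the cap of two players and the state shape are assumptions, not the production rules:

```javascript
// Cross-cutting rules over the single-sourced playback collection.
const MAX_ACTIVE_PLAYERS = 2; // hypothetical constraint

// Constraint on how many players may coexist.
function canCreatePlayback(playbacks) {
  return Object.keys(playbacks).length < MAX_ACTIVE_PLAYERS;
}

// Playback precedence: when one playback plays, pause the rest.
function playExclusively(playbacks, idToPlay) {
  const next = {};
  for (const [id, p] of Object.entries(playbacks)) {
    next[id] = { ...p, status: id === idToPlay ? 'playing' : 'paused' };
  }
  return next;
}
```

No UI component needs to know about any other; the rule operates purely on the shared data.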
An Implementation of Application State
In order to provide a well-structured location for the UI-independent state, we decided to leverage Redux again. However, we also knew that we would need to allow multiple teams to work in the codebase as they added and removed logic that would be independent and not required by all use cases. As a result, we created an extremely thin layer on top of core Redux that allowed us to package up files related to specific domains of logic, and then compose Redux applications out of those domains.
A domain in our system is simply a static object that contains the following things:
- State data structure.
- State reducer.
- Actions.
- Middleware.
- Custom API to query state.
An application can choose to compose itself out of domains, or not use them at all. When a domain is used to create an application, the individual parts of the domain are automatically bound to its own domain state; it won’t have access to any other part of the application state outside of what it defined. The good thing is that the final external API of the application is the same whether it uses domains or not, thanks to the power of composition.
We empower two use cases: a single-level standard Redux application where each part knows about the entire state, or an application where each domain is restricted to managing only its own piece of the application's substate. The benefit of identifying areas of logic that can be encapsulated into a logical domain is that the domain can easily be added, removed, or ported to any other application without breaking anything else.
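The scoping described above works much like Redux's combineReducers: each domain's reducer only ever sees its own substate. Here is a minimal sketch of that composition; the domain names and state shapes are invented for illustration:

```javascript
// Compose an application's root reducer out of domains, scoping each
// domain's reducer to its own slice of state. Names are illustrative.
function composeDomains(domains) {
  return function rootReducer(state = {}, action) {
    const next = {};
    for (const [name, domain] of Object.entries(domains)) {
      // Each domain only receives (and returns) its own substate.
      next[name] = domain.reducer(state[name], action);
    }
    return next;
  };
}

// A tiny example domain (reducer only; real domains also bundle
// actions, middleware, and a query API).
const volumeDomain = {
  reducer: (state = { level: 1 }, action) =>
    action.type === 'SET_VOLUME' ? { level: action.level } : state,
};

const rootReducer = composeDomains({ volume: volumeDomain });
```

Because the composed root reducer is itself an ordinary reducer, the application's external API is the same whether or not domains are used.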
Enabling Plug & Play Logic
By leveraging our concept of domains, we were able to continue working on core playback UI features while other teams implemented custom logical domains to plug into our player application, such as logic for interactive titles. Interactive titles have custom playback logic, as well as custom UIs to enable the user to make story choices during playback. Now that we had both well-encapsulated UI (via React) and state with associated logic (via Redux and our domains), we had a system to manage complexity on multiple fronts. Since we continuously AB test a lot of features, the consistent encapsulation of logic makes it much easier to add and remove logic based on AB test data or feature flags. Having an enforced and consistent structure by thinking in terms of logic domains also helped us identify and formalize areas of our application that were previously inconsistent. By adding structure and predictability and giving up the absolute freedom to do anything in any way, it actually freed us and other teams to add more features, perform more testing, and create higher-quality code.
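The plug-and-play quality can be pictured as conditionally including a domain when composing the application, e.g. behind an AB test allocation or feature flag. Everything below is a stubbed illustration, not the production wiring:

```javascript
// Stub domains; real ones bundle state, reducer, actions, middleware,
// and a query API.
const playbackDomain = { reducer: (s = { status: 'idle' }) => s };
const interactiveDomain = { reducer: (s = { choices: [] }) => s };

// Plug the interactive-titles domain in only when its flag is on.
function buildDomains(flags) {
  const domains = { playback: playbackDomain };
  if (flags.interactiveTitles) {
    domains.interactive = interactiveDomain;
  }
  return domains;
}
```

Removing a losing AB test cell then means dropping one domain from this composition rather than untangling logic spread across the UI tree.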
A New Coat of Paint with A Better Engine
With new and improved state management and development patterns for fellow engineers to use, our final step in the modernization journey was to update the visual design of the UI.
From our previous learning about change isolation, the plan was to create an AB test focused solely on updating the UI in the video playback experience, without modifying any other playback canvases or the architecture.
By utilizing our implementation of Redux and extending existing React components, it was easier than ever to turn our design prototypes into production code for the AB test. We rolled out the test in the Summer of 2018. We got back the results in the Fall, and found that members preferred the modern visual design along with new video player controls that allowed them to seek back and forth or pause the video by simply clicking on the screen.
This final AB test in the modernization journey was easy to implement and analyze. By making mistakes and learning from them, we built intuition and best practices around ensuring you are not trying to do too many things at once.
