As a UK Public Service Broadcaster, ITV is regulated by Ofcom, which oversees the provision of accessible content on our broadcast platforms. One of the ways we make our content more accessible is by providing content that has been translated into British Sign Language, or BSL. BSL is the primary language of around 150,000 people in the UK, and given that the grammar, lexicon and structure of BSL are very different from those of spoken English, well-translated content can be much more engaging than subtitles alone.
As it stands, IMF for Broadcast & Online doesn’t support our signing workflow, so, in the DPP working group, we’ve been working towards creating a plug-in that could bring the benefits of IMF to the creation, storage and distribution of signed content.
IMF is all about components
It’s fair to say, though, that the primary purpose of IMF is to simplify and automate the way we handle multiple versions, enabling content businesses to store only the master content and not all of the derivatives.
But that only works if you have one video track as the master. Historically, IMF hasn’t allowed you to store additional video components that are intended to be overlaid onto the main content. So to add a signed video track into IMF, you would need to render it over the programme content, thereby creating another version of the content and defeating the purpose of using IMF in the first place!
The need for a plug-in
The DPP has been working with its members to publish a plug-in that optionally extends SMPTE ST 2067 (the core IMF Standard), SMPTE TSP 2121-1 (the DPP ProRes Application), and SMPTE TSP 2121-4 (the DPP JPEG2000 Application Constraints) to allow for multiple concurrent video tracks. ‘Optionally’ is the key word here: by their nature, plug-ins in IMF guarantee backwards compatibility. If the receiver of an IMF package doesn’t understand the plug-in, it can safely ignore it and process the programme anyway.
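As an illustration of that backwards compatibility, a receiver walking the CPL’s list of sequences can simply skip any element type it doesn’t recognise and still play out the main programme. The flat structure and element names in this sketch are simplified assumptions for illustration, not the published CPL schema:

```python
import xml.etree.ElementTree as ET

# A much-simplified stand-in for a CPL sequence list. A package using
# the plug-in carries an extra sequence type the receiver may not know.
CPL_FRAGMENT = """\
<SequenceList>
  <MainImageSequence><TrackId>track-1</TrackId></MainImageSequence>
  <AuxImageSequence><TrackId>track-2</TrackId></AuxImageSequence>
  <MainAudioSequence><TrackId>track-3</TrackId></MainAudioSequence>
</SequenceList>"""

# Sequence types this (hypothetical) receiver knows how to handle.
KNOWN_SEQUENCES = {"MainImageSequence", "MainAudioSequence"}

def sequences_to_process(xml_text):
    """Return the sequences the receiver will process, ignoring the rest."""
    root = ET.fromstring(xml_text)
    return [child.tag for child in root if child.tag in KNOWN_SEQUENCES]

# The unknown AuxImageSequence is safely ignored; the programme still plays.
print(sequences_to_process(CPL_FRAGMENT))
```

A receiver that does understand the plug-in would instead add the Aux track to the set it processes.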
The primary element of the plug-in is the AuxImageSequence, which extends IMF to allow multiple video tracks. The AuxImageSequence sits alongside the MainImageSequence, enabling the CPL to reference more than one video track so that the user can stack up different components of the image. Embedding an alpha channel into the AuxImageSequence is a key part of this: it lets each track control how much of the image is shown and how much is transparent.
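Conceptually, the CPL’s sequence list would then carry both tracks side by side. The fragment below is a loose sketch only: the two element names come from the description above, but the namespaces, attributes and surrounding structure of a real CPL are omitted, and the details shown are assumptions rather than the published schema.

```xml
<!-- Simplified, illustrative sketch — not the published plug-in schema -->
<SequenceList>
  <MainImageSequence>
    <!-- the programme video -->
    <TrackId>urn:uuid:...</TrackId>
  </MainImageSequence>
  <AuxImageSequence>
    <!-- the signed video, with an embedded alpha channel marking
         which pixels overlay the programme and which are transparent -->
    <TrackId>urn:uuid:...</TrackId>
  </AuxImageSequence>
</SequenceList>
```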
The AuxImageSequence is definitely part of the solution, but we still need to define how to take the multiple layers of video and render them into a single image. To enable immediate implementation, this needs to be based on a common, agreed approach, so that any device can directly layer the AuxImageSequence on top of the MainImageSequence with no other metadata or external control. The DPP working group has proposed a set of initial constraints that simplify the job of the downstream processor rendering the image. For example, by allowing only one AuxImageSequence we don’t need to worry about track hierarchy, and requiring the tracks to be the same resolution removes the need to scale them independently. Once OPLs are a bit more mature (read more about those here), and MetaRes is adopted, you’ll be able to have more active control of the presentation rather than relying on these constraints.
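As a sketch of what that fixed layering step might look like, the following applies a standard source-over blend per pixel, driven by the Aux track’s embedded alpha. This is illustrative only, and assumes straight (non-premultiplied) alpha; the agreed compositing rules are whatever the plug-in specifies, not this snippet.

```python
def composite_over(main_px, aux_px, alpha):
    """Source-over blend of a single pixel.

    main_px, aux_px: (r, g, b) tuples with components in the range 0.0-1.0.
    alpha: the Aux track's alpha at this pixel, from 0.0 (fully transparent,
    the main image shows through) to 1.0 (fully opaque, the aux image shows).
    """
    return tuple(a * alpha + m * (1.0 - alpha) for m, a in zip(main_px, aux_px))

# Where the signer is present (alpha = 1.0), the Aux pixel wins:
assert composite_over((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 1.0) == (1.0, 0.0, 0.0)
# Where the Aux frame is transparent (alpha = 0.0), the programme shows through:
assert composite_over((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 0.0) == (0.5, 0.5, 0.5)
```

Because both tracks are constrained to the same resolution, a renderer only has to apply this blend pixel-for-pixel, with no scaling or positioning logic.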
The use cases
BSL translation of content is just one use case to which we believe this plug-in is applicable. Localising graphics, masking parts of a compliance version and flexible branding are other areas that could benefit. In theory, the plug-in could help with any use case where the content owner wants to overlay content onto part of the image.