With industry buzzwords like “software-defined” and “IP” and “COTS” flung around at present, “service-oriented architectures” seem like a throwback to a simpler era. In truth, these are all parts of the same process, and the service-oriented architecture is actually the ultimate goal.
Computers talk to each other using IP – the internet protocol – over local and wide area networks, wired and wireless. So it makes sense to pass signals between computers performing video functions using IP, rather than having video I/O boards in each device just so we can carry on using the co-ax cables we happen to have lying around.
If our processes are all running as software applications on standard computers, then we can save costs by loading all those processes we do not use constantly onto a smaller number of boxes. We spool up the software as we need it, stop it when we do not. Now our processing is running on virtual machines, either in a data centre in our own building or even in the cloud.
As well as saving cost, it allows us to set up the processes we need for each specific task. We are not limited by what hardware is plugged into what other hardware: we can link from software application to software application as we need to. If the next workflow requires a completely different configuration, that can be set up at the click of a mouse. We no longer have a fixed architecture; we have a software-defined infrastructure: much more flexible and agile. Which brings us to the real heart of the matter. The biggest benefits come when even these tasks can be largely automated.
Service-oriented architecture
For all the economic reasons around greater competition and tighter budgetary constraints, media enterprises now have to be run on business lines. Each operation must deliver a demonstrable return on investment. Capital investments have to be clearly justified. We are seeing more of a move from capex to opex, which can be more closely linked to individual operations.
The boardroom wants to have a closer handle on what is happening in the organisation. So it needs to have a clearer sense of what can be achieved and what it will cost, and make decisions and take actions on that data.
Ultimately, it should be possible to have a single computer running SAP or some other enterprise management system which can be queried on the practicalities and costs of starting a new service. That same computer should allow the service to be initiated, if needed, and it should report back on the effect on costs and operational efficiency, to allow the board to continue to make informed decisions.
What we have defined is another layer, above the software-defined technology. This layer abstracts the technicalities – transcoder settings, caption generation and so on – from the business requirements. The user, who may well be at board level, simply defines the service to be provided. The workflows are now service-oriented.
Metadata enrichment
A great deal of vertical integration is required: from the enterprise-level decision-making through the workflow orchestration to the individual devices and processes. Information has to flow, in appropriate form, up and down this vertical integration. The chief engineer needs to know that the transcoder farm is a bottleneck, for example; the chief finance officer needs to know what impact this bottleneck is having on revenues, and what the cost of relieving it will be.
We call the raw information on which these decisions are made metadata. This is not new – we have been talking about metadata as the heart of asset management for decades. But it does point to a new direction, a new way of thinking about metadata.
Some approach metadata as a necessary evil, wanting to get away with as little as possible. In truth you can never have too much metadata, provided you have a comprehensive way of handling it.
Some metadata will come from the content: in the DPP wrapper if it is bought-in content; or as shooting data if you are in production. And there are the familiar metadata categories which are added as you process it, things like reference numbers, programme titles, precise durations, compliance edits and so on.
But you can enrich the metadata in many more ways. Some technical metadata is generated as content passes through internal workflows: transcoding formats, for example. And the idea of tagging – attaching descriptive metadata to a specific timecode – is extending beyond sport to other programme genres.
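As a minimal sketch, a timecoded tag can be as simple as a record binding a timecode to a descriptive label and a provenance field. The structure and field names below are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Tag:
    """A descriptive tag anchored to a point in the content timeline."""
    timecode: str  # e.g. "00:14:02:11" (HH:MM:SS:FF)
    label: str     # the descriptive metadata itself
    source: str    # provenance: "logger", "speech-to-text", ...

# Hypothetical tags logged against a golf broadcast
tags = [
    Tag("00:14:02:11", "tee shot, 7th hole", "logger"),
    Tag("00:15:45:03", "crowd reaction", "logger"),
]
```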
Metadata can be enriched through queries of external data sources. If you buy in a movie, you could harvest further information from websites like IMDB and Rotten Tomatoes. If you are covering a golf tournament, you could collect course statistics from the club’s website, and player rankings from the PGA.
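How that harvesting might look in practice, as a hedged sketch: the endpoint, field names and the requests dependency below are all assumptions, since each real source has its own API and terms of use.

```python
import requests  # third-party HTTP library, assumed installed

def enrich_from_external(asset: dict) -> dict:
    """Fetch extra fields for an asset from a (hypothetical) metadata service."""
    resp = requests.get(
        "https://api.example.com/movies",  # hypothetical endpoint
        params={"q": asset["title"]},
        timeout=10,
    )
    resp.raise_for_status()
    external = resp.json()
    # Keep harvested fields in their own namespace so provenance stays clear.
    asset["external"] = {
        "synopsis": external.get("synopsis"),
        "critic_score": external.get("critic_score"),
    }
    return asset
```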
In the not too distant future we may well see services which automatically extract descriptive metadata from the content. Speech to text could transcribe the script and facial recognition could associate the right characters (and actors) with each line. Pattern recognition could suggest where a scene is taking place. You, and your subscribers, could search for, say, a scene with Tom Hanks on a running track.
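Purely as an illustration of how such a service could join its outputs together – all names and data here are invented – overlapping a speech-to-text transcript with face-recognition intervals would attribute each line of dialogue to the actors on screen:

```python
def attribute_lines(transcript, faces):
    """Join dialogue to on-screen actors by overlapping time intervals.

    transcript: list of (start_sec, end_sec, text) from speech-to-text
    faces:      list of (start_sec, end_sec, name) from face recognition
    """
    tagged = []
    for t_start, t_end, text in transcript:
        on_screen = {name for f_start, f_end, name in faces
                     if f_start < t_end and t_start < f_end}  # intervals overlap
        tagged.append((t_start, text, sorted(on_screen)))
    return tagged
```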
External systems
It is tempting to think of metadata as a single collection of information. But in a software-defined architecture, just as we move content between computers, there is no reason why we should not share metadata between systems. For example, the playout software needs a set of key information but it does not need access to all the metadata, which is why it is common to develop simple transfer protocols between asset management and playout.
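A sketch of what such a transfer protocol can amount to in practice: the asset management system projects its full record down to the handful of fields playout needs. The field names are illustrative, not any particular playout vendor's schema.

```python
# Fields the playout system is assumed to need; everything else stays
# in the asset management system.
PLAYOUT_FIELDS = ("house_id", "title", "duration", "file_location", "aspect_ratio")

def playout_payload(full_record: dict) -> dict:
    """Project a full metadata record down to the playout subset."""
    return {f: full_record[f] for f in PLAYOUT_FIELDS if f in full_record}
```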
Intellectual property rights management is an immensely complex subject and its users tend to maintain very sophisticated systems to ensure they comply precisely. But some information from it is valuable to other users, and the actions of other users are vital to the rights management team.
On the most obvious level, if you have the rights to show a piece of content four times in a year, then the scheduler needs to be prevented from showing it a fifth time. On the other hand, if the content has only been shown three times, then the scheduler needs to be prompted to use the remaining transmission before the rights run out.
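As a toy illustration of that exchange (real rights contracts involve territories, platforms and windows, so the thresholds and wording here are invented):

```python
from datetime import date

def scheduler_advice(plays_allowed: int, plays_used: int,
                     rights_expiry: date, today: date) -> str:
    """Block over-use of rights; prompt when unused plays are about to lapse."""
    if plays_used >= plays_allowed:
        return "BLOCK: contracted transmissions already used"
    days_left = (rights_expiry - today).days
    if days_left <= 30:  # arbitrary warning window for this sketch
        return (f"PROMPT: {plays_allowed - plays_used} play(s) unused, "
                f"rights expire in {days_left} days")
    return "OK"
```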
This brings us back to our earlier example of board-level supervision, where again there is a need to exchange metadata. The enterprise management system will hold an asset register; the technical metadata can be queried to determine how intensively an asset is used; a simple Excel spreadsheet can then work out the cost of providing a service.
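The calculation itself really can be spreadsheet-simple. A sketch, with invented field names and rates:

```python
# Estimate the cost of a service from usage metadata and a cost rate.
def service_cost(assets: list, hourly_rate: float) -> float:
    """Sum usage hours across the assets a service touches, times a rate."""
    return sum(a["usage_hours"] for a in assets) * hourly_rate

print(service_cost([{"usage_hours": 120.0}, {"usage_hours": 43.5}], 9.50))
# 1553.25
```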
Workflow orchestration
The modern, service-oriented architecture therefore depends upon good metadata, enabling automated workflows to ensure staff are employed on the creative and selective tasks that best use human abilities, while tracking process usage to determine costs and availability.
A well-developed workflow orchestration layer will feature a powerful and intuitive user interface. At TMD, we see this as a defining part of the Mediaflex-UMS solution. Whereas other large-scale asset and workflow management systems expect workflow designers to be coders, we developed a graphical user interface which allows a system designer to pull processes into a workflow quickly and reliably.
Sitting underneath the graphical user interface is a workflow validator which guides the designer towards a right-first-time outcome, ensuring no steps are missed, no branches are overlooked and errors are trapped.
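The Mediaflex-UMS validator itself is not public, but the general idea can be sketched: treat the workflow as a graph of steps, then check that every branch leads to a defined step and that every step is reachable from the start.

```python
def validate_workflow(steps, start):
    """steps maps a step name to a list of possible successor names.

    Returns a list of problems; an empty list means the design is sound
    by these (deliberately simple) checks.
    """
    problems = []
    # Every branch must lead to a defined step.
    for step, successors in steps.items():
        for nxt in successors:
            if nxt not in steps:
                problems.append(f"{step!r} branches to undefined step {nxt!r}")
    # Every step must be reachable from the start.
    seen, frontier = set(), [start]
    while frontier:
        step = frontier.pop()
        if step in seen or step not in steps:
            continue
        seen.add(step)
        frontier.extend(steps[step])
    problems += [f"{s!r} is unreachable from {start!r}" for s in steps if s not in seen]
    return problems
```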
Our sophisticated workflow design engine depends upon metadata, and metadata is the province of the asset management system. Many workflows will need to make decisions on metadata, and will in turn generate more metadata.
Take the idea that a programme, once broadcast, should be made available on the online catch-up service. The simple workflow will take the content, run it through a transcoder for each of the offered formats, and put it in the content delivery system. In reality, though, most workflows require intelligent decision-making.
To be sure you have the rights to show this online, the workflow engine needs to check with rights management. Is the content available before transmission? Create the different versions when the transcode farm is not busy, but do not release the programme to the content delivery network until transmission is confirmed. Need to get content onto catch-up as quickly as possible? Push other transcode jobs down the list, and prioritise this content in order of most popular devices.
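Sketching that decision logic in code makes the point, with invented names standing in for rights management, the transcode farm and the content delivery network:

```python
def prepare_catch_up(programme, formats, rights, transcode_queue, cdn):
    """Hedged sketch of the catch-up workflow described above."""
    if not rights.cleared_for_online(programme):  # assumed rights-management call
        return "skipped: no online rights"
    priority = "high" if programme.fast_turnaround else "normal"
    # When speed matters, the most popular device formats go first.
    for fmt in sorted(formats, key=lambda f: f.popularity, reverse=True):
        transcode_queue.submit(programme, fmt, priority=priority)
    # Versions can be prepared early, but delivery waits for transmission.
    if programme.transmission_confirmed:
        cdn.release(programme)
        return "released to CDN"
    return "transcoded, awaiting transmission confirmation"
```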
All of these decisions can be automated, based on metadata. So, too, can reporting to the enterprise management system, so that the board knows what is happening without being bogged down in detail. If the engineering team says it needs more processing power to deliver to new platforms, the finance department can see the scale of the problem.
Service-oriented architectures are without doubt the way of the future. They provide more efficient ways of working, better management and more responsive customer service.
The technology and techniques exist today to achieve highly sophisticated, seamless service-oriented working. The key factor in making it work is metadata. The metadata schema must be flexible and readily scalable, allowing for each individual organisation to establish its own way of working.
The metadata engine – the asset management system – must also be capable of interfacing with external systems, whether that is a secure link to the enterprise management system or harvesting information from multiple relevant websites. It stores much of the metadata, and knows where to get the rest.
So the asset management system is the only logical place to put the workflow orchestrator: it makes little sense to introduce yet another software layer to be integrated. With TMD Mediaflex, the asset management structure is scalable to any metadata schema, and the workflow engine is simple and intuitive, allowing designers both to create workflows and to create the building blocks from which workflows can be assembled in response to service requests.
Asset management should not be regarded as one element in a service-oriented architecture. It should be viewed as the core, the means by which metadata is harnessed to deliver the services, however they are defined.