OpenMediaPlatform PROJECT FINAL REPORT
Final publishable summary report

Grant Agreement number: 214009
Project acronym: OMP
Project title: OpenMediaPlatform
Funding Scheme: STREP
Period covered: 1st January 2008 to 31st December 2009
Name of the scientific representative of the project's co-ordinator, Title and Organisation: Prof. Stefano Crespi-Reghizzi, Politecnico di Milano
Tel: 0039 02 23993518
Fax: 0039 02 2399.3411
E-mail: [email protected]
Project website address: www.openmediaplatform.eu


Publishable summary

Title and Acronym: OpenMediaPlatform (OMP)
Project number: 214009
Objective of the Project: This proposal addresses the ICT-2007.1.2 objective: Service and Software Architectures, Infrastructures and Engineering. The OMP project is developing the necessary tools, components, algorithms, methods and advances in standards essential to meet this objective.
Workprogramme: ICT - INFORMATION AND COMMUNICATION TECHNOLOGIES
Funding Scheme: Collaborative Project (CP) funding scheme for STREP type project

Beneficiaries The OpenMediaPlatform (OMP) consortium has a unique mix of experts in advanced compilation toolchains, component based programming and efficient runtime engines with experts in the multimedia domain. The consortium comprises silicon manufacturers, independent software vendors, and academia:

• STMicroelectronics, Italy
• Incoras Solutions Ltd., Ireland
• NXP Semiconductors Belgium NV, Belgium
• NXP Semiconductors Netherlands BV, The Netherlands
• Fraunhofer-Institut für Nachrichtentechnik, Heinrich-Hertz-Institut (HHI) (Embedded Systems Group), Germany
• MOVIAL Creative Technologies Inc., Finland
• Institut National de Recherche en Informatique et Automatique (INRIA) (Alchemy group and Sardes group), France
• Politecnico di Milano (Formal Languages and Compilers Group), Italy

Contact details: Website: www.openmediaplatform.eu


The Challenge

Facebook, Bebo, YouTube, MySpace and Flickr are a few examples of hugely successful 'social-networking' and 'content-sharing' web applications, offering exciting ways for millions of PC users to connect and share multimedia content. The huge success of Facebook and other sites within Europe demonstrates the readiness of the European community to embrace these new ideas and technologies. With more than 1 billion mobile devices sold in 2009 alone, the success of these services is set to be repeated on a much larger scale in the mobile space. The networks and mobile terminals of tomorrow will need to support the Facebooks and YouTubes of tomorrow. Users will need to create, edit, enhance and share their own mobile content, and interact with the content and with each other in novel ways. Mobile devices will need to operate intelligently with an array of networks and web services to dynamically compose content that scales with device and network capability to deliver the most engaging experience possible. This new enriched network environment will facilitate creative business models for users and service providers alike. Supporting this next generation of content-rich networked services on mobile devices presents major technical challenges for Europe's semiconductor and consumer electronics industries. Unlike the standard PC architecture, these devices have huge variation in multimedia capability and must operate under widely varying network conditions. Software complexity growth on these devices has outpaced software productivity improvements by a factor of 4 to 1. Limited by cost and time-to-market constraints, current practices result in client services that are statically defined and non-optimal within the context of operation.

Addressing the ICT Challenges

OpenMediaPlatform (OMP) defines an open service infrastructure for media-rich end-user devices that addresses the software productivity and optimal service delivery challenges. OMP innovatively combines two main streams of modern software engineering:
1. split compilation techniques that leverage component based software engineering and provide binary portability (CLI standard) with minimal performance penalty;
2. standardized and open application programmers' interfaces (APIs) for advanced multimedia stacks that support resource management and context awareness at component level.
Together, these two technologies deliver a computing tools infrastructure and a media infrastructure to ensure optimal deployment of dynamically composed media services that scale with device and network capabilities. The OMP approach is demonstrated using state-of-the-art Scalable Video Coding (SVC) on two different System-on-Chip platforms. OpenMediaPlatform provides the architecture and the necessary tools to compose media services that can run on many platforms. The OMP architecture enables components from various vendors to be instantiated and swapped dynamically; they can run according to different Quality of Service (QoS) demands. Suitable software interfaces are defined to allow any component to communicate with a platform-specific resource manager at runtime. For example, a Scalable Video Coder (SVC) component can optimize its execution for minimal power consumption or for optimal playout, after negotiation with the resource manager. OMP provides the infrastructure to adjust the component QoS execution to user profiles, varying network characteristics, different platform constraints or changes in ambient conditions.


OMP uses (and extends) the FRACTAL component based methodology to clearly specify component interfaces, and adopts the standardized OpenMAX API defined by Khronos. Based on the experience gathered during OMP execution, proposals for OpenMAX API extensions have been submitted, along with new SVC standard component definitions. Although OMP focuses on multimedia as the main use case, the software methodologies used (FRACTAL, ILDJIT) can be generalized to other domains. OpenMediaPlatform has developed three classes of development tools. First, a new system integration tool, MediaDesigner, enables early design exploration of media services composed of multiple standardised components. In this mode of operation the integration toolkit allows a user to predict how mobile devices will perform when dynamically composing media services on the end-user device. This tool uses XML information models for components and platforms plus profiling metadata gathered during component creation. Second, a static compilation toolchain leverages the FRACTAL, THINK and CAPSULE component based approaches to produce a portable binary (beyond .NET) through a modified gcc compiler. Third, OMP has extended an existing industrial dynamic JIT toolchain with (1) runtime optimization based on semantic component annotations and (2) dynamic interaction with a platform resource manager for optimal resource allocation. OpenMediaPlatform has leveraged and contributed to open source projects (FRACTAL) and provides for future involvement of the open source development community, where results can be shared. Combining ISA virtualization and component based deployment of modules, the target Open Media Platform can better guarantee a homogeneous user experience across multiple hardware devices, as well as manage trusted third-party software without prejudice to the core functionalities of the system. OpenMediaPlatform contributes to end-to-end quality of service by providing scalable execution of media components under resource management control. OMP provides a software infrastructure that lends itself to implementing distributed systems: since components are portable and interoperable, they can be executed on different devices concurrently. OMP has delivered full demonstrators of distributed OpenMAX implementations, in particular a new componentised SVC decoder executed on the ST 8815NHK platform utilising the FRACTAL and ILDJIT technologies.


The OMP Tool and Service Pipeline

The diagram below shows the context of OpenMediaPlatform with end-user devices accessing multimedia content and services offered through different networks.

In the application design phase OMP allows flexible composition of media services. It improves qualification by:
• avoiding over-dimensioning the end-user device;
• enriching media components with QoS meta-data, which the compiler exploits to produce efficient executables;
• verifying complex media use cases early.
In the component implementation and deployment phases OMP uses a split-compilation approach: a static C compiler produces a portable binary (CLI standard) with additional meta-data. At deployment time, dynamic compilation efficiently maps media components to parallel end-user device architectures (from multi-core to hardware accelerated ones). In the normal service operation phase, adaptability to different QoS levels, based on user requirements, platform or network constraints, is supported by a novel interface towards a platform
resource manager. This allows efficient trade-offs to be implemented between media service quality and power consumption, a must in tomorrow's mobile platforms. Once a media service is defined by dynamically combining portable components with standardized interfaces, the same binary can be optimally executed on platforms with completely different architectures, similarly to the Java approach but without the performance penalty. OpenMediaPlatform allows complex media services to be composed by combining software components with standard OpenMAX interfaces. In turn, such coarse grained components are further decomposed into elementary "atoms" or FRACTAL components, which are easily reusable. Once the service is defined according to the above steps, the same portable binary is optimally executed on platforms with different architectures, similar to the .NET approach but with no significant performance penalty. The OMP approach therefore delivers a solution that guarantees:
1. the same service can execute on a large variety of platforms;
2. service components efficiently utilize platform resources at runtime;
3. components can be easily mixed and matched, thanks to standardized SW interfaces;
4. component execution adapts to varying QoS demands at runtime, without service disruption.
OpenMediaPlatform covers the steps of media service creation (based on components with standard interfaces and tools for system integration), deployment (through binary portability) and efficient execution (through optimized runtime systems). The overall approach is summarized in the OpenMediaPlatform Tool and Service 'pipeline' as shown in Figure 0-1.


Figure 0-1: OpenMediaPlatform Tool and Service Pipeline

Components are coded using beyond-state-of-the-art Component Based Software Engineering (CBSE) techniques (a) and given a standardized interface (1). Complex media usage 'modes' can be architected, and system performance can be estimated, using the MediaDesigner integration tool that accesses a library of media components (c). Components are compiled with a modified gcc compiler to produce portable binaries in the Common Language Infrastructure intermediate format (b). Such portable binaries can be made available in suitable deployment repositories (d). A multimedia use case can dynamically combine a set of such portable components and execute them on the end platform, after dynamic compilation (e). Components run as native code on the target architecture (g), where they interact with the rest of the media stack through the standard interface (1), with a platform resource manager (2) and with a flexible runtime system. This latter module can use component annotations to optimize their execution on the available platform resources (CPUs, DSP, HW accelerators). The media stack accesses platform resources (I/O, networking, windowing) through a standardized interface (3), which makes it portable as well. Profiling information is exported using a standardized format (i) to be used by the system integration tool. Throughout the tool and service pipeline, the tools, media components and networks utilise component and system information models to exchange the information needed to optimally compose dynamic services for the target device. It is important to note that OMP has significantly advanced the state of the art in a number of technology areas, namely:
• OMP combines state-of-the-art component based software engineering techniques with binary portability to deliver true code portability;
• OMP uses standardized APIs to allow media components to interoperate and extends such interfaces to accommodate advanced use cases;
• OMP delivers advanced design automation tools to assist in design, test, integration and analysis of media services on distributed platforms;
• OMP defines interfaces to resource managers so that components can adapt their execution to different QoS requirements and contexts;
• OMP drastically improves the performance of state-of-the-art CBSE runtime environments by integrating mechanisms for (a) merging the code of several components and (b) dynamically adjusting the mapping of execution resources to components;
• OMP provides a dynamic compilation framework that enables flexible runtime adaptation of services.


Work Performed

The OpenMediaPlatform (OMP) project was defined with three principal objectives, as shown below. In summary, the three main objectives of the OMP project are to deliver advances in Computing Tools Infrastructure, Media Infrastructure and Reference Platform Implementations. As shown, these objectives are mapped to different project outcomes covering open-source projects as well as commercially available products.


Objective 1: Computing Tools Infrastructure

OUTCOMES AT A GLANCE

Objective 1: Computing Tools Infrastructure. To develop and disseminate an innovative and extensible computing tool infrastructure consisting of static and dynamic composition tools and methodologies.

Main outcomes:
• Open-source component-based system engineering tools, open source split compilation tools, and commercial toolkits and methodologies that facilitate component design, integration and deployment of complex media-rich services on non-PC networked devices. Included are the ILDJIT and CECILIA open source projects and the Incoras MediaDesigner commercial system integration toolkit.
• An integrated set of CBSE tools and static/dynamic compilers.
• A sample Scalable Video Codec component that is portable across a wide range of devices and that optimally exploits available resources.
• A standard component schema for information exchange between tools and run-time information exchange between media components.

OUTCOMES DETAIL Component Based System Engineering Tools

The following list shows some of the successful outcomes from the OMP project:
• New Fractal toolchain and libraries, including support for asynchronous and RPC bindings.
• New extension of the Comete middleware to support deployment of parallel, event-driven component-based applications on shared-memory multiprocessors, using the SVC decoder as a case study.
• Extension of the Comete middleware to support dynamic migration of event-driven components on shared-memory multiprocessors, using the SVC decoder as a case study.
• Improvements to the toolchain, including bug fixes, portability and documentation.

Functional extensions and performance improvements were integrated into the Comete middleware during the last six months of the project. As a result, a new release of the Fractal tools was uploaded in December 2009. The outcome is a FRACTAL toolchain suitable for multimedia use cases on embedded platforms.

OUTCOMES DETAIL Split Compilation Tools (static and dynamic compilers)

Traditional JIT compilers leverage continuous, feedback-directed (re-)compilation frameworks to select hot (frequently executed) functions for online optimization.


These online optimizations must make important trade-offs, reducing compilation time at the cost of generated code performance. Reducing compilation overhead has two main benefits: low-complexity algorithms simultaneously increase the amount of code that can be optimized and reduce the compilation time for hot functions. In practice, (quasi-)linear complexity is the rule for JIT compilation. This severely restricts what kind of optimizations are admissible and how aggressive they may be. Compilation through bytecode intermediate languages distributes roles among offline and online compilers. Verification and code compaction are typically assigned to offline compilation, while target-specific optimizations are performed by online compilation. Split compilation reconsiders this notion: it allows optimization algorithms to be split into an offline and an online stage, transferring the semantic information between those stages through carefully designed bytecode annotations. The general goal of split compilation is to run expensive analyses and make as many optimization decisions as can be "safely" made offline. Most JIT compilation efforts have tried to leverage the accuracy of dynamic analysis and the transformation expressiveness of static code generation to outperform native compilers; split compilation is a concrete path to getting the best of both worlds.

OMP focused on three software packages: a suite of compilers to perform split (static and dynamic) compilation of programs written in the C language, and a prototype of split register allocation implemented on Java bytecode, which could easily be ported to the CIL-based framework.

Static Compiler (GCC4NET). The GCC4NET compiler has been developed and extended to support C language translation to CIL bytecode, as the first step of the toolchain.

Dynamic Compiler (ILDJIT). ILDJIT is a dynamic compiler able to translate and execute CIL bytecode on x86 compatible machines. The OMP release of ILDJIT contains the following modules:
• The ILDJIT dynamic compiler, which is used to compile CIL bytecode to native code and control its execution.
• The ILDJIT optimization modules library.
• AIKA, the ILDJIT installer tool (GUI for ILDJIT package installation and updating).
• The ILDJIT test suite, used for running ILDJIT regression tests.

Advanced Just-in-Time Compilation with Split Register Allocation. To make a concrete case for split compilation, we selected the (spill-everywhere) register allocation problem. Register allocation is an ideal candidate to demonstrate how split compilation impacts the design of future bytecode languages and compilers, and how it differs from annotation-enhanced JIT compilation. Our method is implemented in the JikesRVM open source JIT compiler for Java, and evaluated on x86 and PowerPC architectures.

The following list shows some of the successful outcomes from the OMP project:


Static compilation
• An enhanced static front-end compiler, GCC4NET, based on GCC. The compiler is currently able to compile most C code, but it still lacks support for native libraries related to signals and networking.
• Full implementation of the dynamic bytecode compiler, ILDJIT, supporting the ECMA-335 bytecode representation produced by GCC4NET for C language programs.
• New optimization techniques for dynamically compiling in multi-core architectures, implemented in ILDJIT.
• A new split compilation technique for register allocation.
• Definition of a programming model to support the identification of parallelism in source and intermediate code, and development of a prototype.
• Completion of a prototype for portable parallelism that automatically specializes a single source to multiple parallel hardware platforms.
• Definition and implementation of the collection and encoding of specific information related to code specialization in ILDJIT and GCC4NET.
• Definition and implementation of the collection and encoding of specific information related to register allocation and vectorization.

Dynamic Compilation and IS Virtualization support
• Deployment-time compilation support within the dynamic compiler ILDJIT.
• A large set of optimization and profiling plug-ins for ILDJIT, including support for code specialization.
• An ILDJIT dynamic compiler that fully supports the execution of C programs compiled using GCC4NET.
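To make the split-compilation idea described above more concrete, the following self-contained C sketch mimics the division of work: an "offline" pass computes a simple per-function annotation (here, an estimated register-pressure figure) and an "online" pass consumes it to make a cheap code-generation decision. The annotation format and all names are purely illustrative; this is a conceptual illustration, not GCC4NET or ILDJIT code, which encodes such information in CIL metadata.

```c
#include <stdio.h>

/* Illustrative annotation a static (offline) compiler could attach to a
 * function in the portable binary. This struct is only a stand-in for the
 * concept; the real OMP annotations live in the CIL bytecode. */
typedef struct {
    const char *function_name;
    int         estimated_register_pressure; /* result of an expensive offline analysis */
} split_annotation_t;

/* Offline stage: run the costly analysis once, ahead of deployment. */
static split_annotation_t analyse_offline(const char *name, int live_values)
{
    split_annotation_t a = { name, live_values };
    return a;
}

/* Online stage: the JIT only reads the annotation and makes a cheap,
 * (quasi-)linear-time decision instead of redoing the analysis. */
static void jit_compile_online(const split_annotation_t *a, int hw_registers)
{
    if (a->estimated_register_pressure <= hw_registers)
        printf("%s: keep all values in registers (no spill code)\n", a->function_name);
    else
        printf("%s: emit spill code for %d excess values\n",
               a->function_name,
               a->estimated_register_pressure - hw_registers);
}

int main(void)
{
    /* Offline: annotations are produced once, independently of the target. */
    split_annotation_t idct  = analyse_offline("idct8x8", 12);
    split_annotation_t parse = analyse_offline("parse_nalu", 30);

    /* Online: the same annotations drive different decisions on different targets. */
    jit_compile_online(&idct, 16);   /* e.g. a 16-register target */
    jit_compile_online(&parse, 16);
    return 0;
}
```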

OUTCOMES DETAIL Integrated Tool Flow

The outcome of OMP research into split compilation tools is three toolchains, described as Toolchain Alpha, Toolchain Beta and Toolchain Gamma.
• Toolchain Alpha relies on stable technologies (static compilers from the C language to the target platforms) and human interaction (for the deployment phase), while supporting the component-based programming model required to develop OMP multimedia applications. It does not address portability or component optimization issues.
• Toolchain Beta provides a full toolchain supporting CLI-based portable binaries, and allows the exploration of split compilation to provide optimizations not usually available in dynamic compilers. Moreover, it serves as a research platform to explore new dynamic compilation techniques suited to the typical platforms targeted by OMP applications.
• Toolchain Gamma defines a research platform to explore parallelization of source and intermediate code.

A full report on these toolchains can be found on the OMP website, www.openmediaplatform.eu. Here we present the architecture of Toolchain Beta.

The following outcomes were achieved:
• Full integration of Toolchain Alpha (Cecilia and Comete)
• Toolchain Gamma integration of BLIS specialization with Toolchain Alpha
• Integration of Toolchain Beta (GCC4NET and ILDJIT, with the ability to execute C code generated from componentized code)
• Integration of MediaDesigner with the Scalable Video Coder application

Toolchain Beta is able to execute small componentized programs, but, due to limitations in GCC4NET, it was not able to produce ECMA-335 bytecode for larger benchmarks.


OUTCOMES DETAIL System Integration Tools and MemTrace

The System Integration toolset comprises two parts: a front-end application based on the Eclipse framework, and a target-side media application built around the OpenMAX standard. The diagram below shows the front end running on a PC connected to the target-side MDServer, comprising the test framework and the Bellagio OpenMAX implementation running on a Beagleboard.

MediaDesigner; the Front-End. This is a graphical environment for assembling media systems from pre-designed components (e.g. the Scalable Video Codec, SVC). Based on the OpenMAX IL standard for components, OMP delivered a platform-independent and neutral tool. This allowed the assembly of demonstrations based on a variety of open-source media components. The demonstrations rely heavily on the Bellagio open-source library of components and its underlying Integration Layer, which ran on the Ubuntu Linux OS. This provided a platform which could be downloaded from the web together with MediaDesigner, giving users an environment where they could evaluate plug-and-play OpenMAX components, albeit in a generalised non-custom environment.


Figure: MediaDesigner Front-End Architecture

The front-end is highly modular; various modules provide for the various tasks undertaken by each actor in the media development chain. For media system developers, the front-end tool consists of several main elements: component design and testing, design exploration/platform profiling, repository management, system architecture design, system generation and system test.

MDServer; the Back-End. This is where MediaDesigner (MD) gets customised to integrate with a user's specific development platform. Incoras developed MDServer, a target-side run-time component which allows MediaDesigner to communicate with the target device. The communication between MD and MDServer allows a user to control and observe the dynamic run-time behaviour of their media system as it executes on their target platform. Target performance and trace statistics are collected by MDServer and reported back to MD for display and analysis, allowing the user to study the operation of the media system. Metrics collected include memory usage, CPU cycles, buffer usage, flags and semaphores, and time tracing. Incoras developed custom target implementations for several combinations of different OSs, IL implementations, media component libraries and target platforms. The technology was proven to be readily portable and robust.

Multimedia Design, Test and Integration Flow. Multimedia components are designed at an interface level and, where applicable, code is automatically generated.


The components are functionally tested and then profiled for a range of design and performance characteristics which could include Program Interfaces, Static and Dynamic Memory, MIPS, DMA usage and Power requirements. This information is captured as metadata in a repository, and validated against an XML schema defining the semantics of the data files. This XML schema is determined from the underlying information model for multimedia components. Any platform resource meta-data is captured and validated against a platform XML schema.

Testing is hierarchical with individual test cases grouped into scenarios and groups of scenarios applied as batch tests.


The system integrator, using component information and performance estimations, creates an architectural layout and defines the multimedia use cases including component configurations. The tool facilitates this by presenting the user with a palette of components drawn from the repository of component descriptors for the selected platform. The user may choose between different platform representations within the tool, allowing him to assess the suitability of his selected use-cases on a variety of platform configurations.

The diagram shows how the user made use of MediaDesigner's graph design feature to graphically describe the media graph. The graph description is stored as an XML file on the file system. Having described the graph, the user connected to the target, created the graph and then used the run-time controls to transition the graph into an executing state and measure performance.


Having carried out this basic interactive design exploration with the reference AMR codec, the user then switched the AMR reference codec to the NEON-accelerated codec from a pull-down list of available AMR decoders within the graphical design window. The user then created the new graph on the target, carried out the same process within a matter of minutes, and compared performance results.

Further to the successful outcomes above, the Memtrace Profiler from HHI was extended for component profiling based on the FRACTAL environment, enabling profiling of the FRACTAL sub-components of the SVC decoder. To achieve this it was necessary to modify the FRACTAL framework for profiling the inter-component communication. Profiling support at component as well as application level has now been established. An interface to the Incoras Media Designer has been defined, which allows the import of profiling data into the Media Designer for annotating component descriptions with instruction, invocation and memory access count information. The main outcome is a design, test and integration methodology that can save up to 90% of the effort of more traditional methodologies.


OUTCOMES DETAIL Portable SVC Codec

The SVC codec has been used heavily within OMP as the reference codec for verifying the full methodology on the target platforms. The codec has been used with the ILDJIT and FRACTAL technologies, and results are presented with the reference platforms below. With Scalable Video Coding (SVC), the Joint Video Team (JVT) has published an amendment to the existing H.264/AVC video coding standard. The new SVC standard provides scalability at the bit-stream level and supports functionalities such as bit-rate, format and power adaptation as well as graceful degradation in poor transmission environments. An SVC encoder is capable of creating a scalable bit-stream, which contains multiple representations of the same video source, in one single run. Depending on the transmission conditions or the device capabilities, parts of this bit-stream can be removed, so that the SVC decoder finally reconstructs one of the previously encoded video representations.

With temporal scalability, spatial scalability and peak signal-to-noise ratio (PSNR) scalability, the SVC standard currently provides three different modes of scalability. Temporal scalability supports multiple frame rates, spatial scalability multiple video resolutions, and PSNR/fidelity scalability multiple visual fidelities within the same scalable video data bit-stream. All possible combinations of these three modes can be used for the encoding of the video data source. SVC uses a layered approach for delivering the different video representations. To reduce the cost of the data transmission, new inter-layer encoding techniques have been introduced by SVC. This implies that for the decoding of one particular video representation, the corresponding SVC layer and all lower dependency layers must be processed by the SVC decoder. Therefore every additional SVC enhancement layer increases the amount of data that has to be parsed and decoded, and accordingly increases the device resources needed for the video playback.

The implementation of the OMP OpenMAX IL SVC decoder component is based on the Bellagio open-source project. Bellagio is a C implementation of the OpenMAX IL APIs that runs on Linux PC systems and provides a shared library with the IL core and a number of OpenMAX IL components. The Bellagio implementation of the SVC decoder component consists of an OpenMAX IL wrapper bridging all OpenMAX IL calls to a core SVC decoder library, which itself offers a classical video decoder interface.
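As a rough illustration of the wrapper structure just described, the sketch below shows how an OpenMAX IL buffer-processing routine might bridge incoming IL buffers to a core decoder library that exposes a classical decode call. The core-library names (svc_core_t, svc_core_decode_nalu) are invented for this example, the IL plumbing is heavily simplified, and the code only compiles against the standard OpenMAX IL headers; the actual Bellagio component is considerably more involved.

```c
#include <OMX_Core.h>   /* OMX_BUFFERHEADERTYPE */

/* Hypothetical handle and entry point of the core SVC decoder library;
 * the real library interface is internal to the OMP SVC decoder. */
typedef struct svc_core svc_core_t;
int svc_core_decode_nalu(svc_core_t *dec,
                         const unsigned char *nalu, unsigned nalu_len,
                         unsigned char *yuv_out, unsigned *yuv_len);

/* Simplified buffer-management step: the OpenMAX IL wrapper takes one input
 * buffer holding a NALU, hands it to the core decoder, and fills the output
 * buffer with the reconstructed YUV frame (if one is ready). */
static void svc_wrapper_process(svc_core_t *dec,
                                OMX_BUFFERHEADERTYPE *in,
                                OMX_BUFFERHEADERTYPE *out)
{
    unsigned produced = (unsigned)out->nAllocLen;

    if (svc_core_decode_nalu(dec,
                             in->pBuffer + in->nOffset,
                             (unsigned)in->nFilledLen,
                             out->pBuffer,
                             &produced) == 0) {
        out->nFilledLen = produced;        /* a frame was reconstructed        */
        out->nTimeStamp = in->nTimeStamp;  /* propagate timing information     */
    } else {
        out->nFilledLen = 0;               /* NALU consumed, no output yet     */
    }
    in->nFilledLen = 0;                    /* mark the input buffer as consumed */
}
```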


The SVC decoder is then further decomposed into sub-components using the OMP FRACTAL toolchain. The underlying component model, FRACTAL, is a modular, extensible and programming-language-agnostic component model. Based on this component model, CECILIA provides a development environment for implementing FRACTAL components on top of the C programming language. This means the CECILIA toolset reads a set of FRACTAL ADL architecture and IDL interface description files, generates the C component glue code and compiles this glue code together with the component implementation using an external C compiler. A description of the FRACTAL components of the core SVC decoder can be found in the following sections. Among other important features, FRACTAL is a recursive component model that allows the creation of composite components. The implementation of the component based core SVC decoder library uses this feature to benefit from the encapsulation, structuring and hierarchical ordering of the components. As shown in Figure 14, the composite SVC decoder component consists of a primitive SVC decoder component containing the implementation of the SVC decoder functionality, and additional composite components for side information processing and slice data processing.
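Purely to give a flavour of this decomposition, the self-contained sketch below approximates what generated glue for a primitive component might boil down to: an IDL-described server interface rendered as a C structure of function pointers, implemented by a primitive "slice data processing" component and bound by the composite's glue code. The exact macros, naming conventions and data layout used by CECILIA differ; everything here is illustrative.

```c
#include <stdio.h>

/* An IDL-like server interface, rendered as a table of function pointers.
 * (CECILIA generates comparable glue from .idl files; this layout is only
 * an approximation for illustration.) */
typedef struct {
    void *self;                                   /* component private data */
    int (*process_nalu)(void *self, const unsigned char *nalu, unsigned len);
} nalu_processor_itf;

/* Primitive "slice data processing" component: private state plus the
 * implementation that gets bound into the interface table. */
typedef struct { unsigned nalus_seen; } slice_proc_data;

static int slice_proc_process_nalu(void *self, const unsigned char *nalu, unsigned len)
{
    slice_proc_data *d = self;
    (void)nalu;
    d->nalus_seen++;
    printf("slice processor: NALU #%u (%u bytes)\n", d->nalus_seen, len);
    return 0;
}

int main(void)
{
    /* What the composite's glue code would do: instantiate the primitive
     * and bind its server interface so sibling components can call it. */
    slice_proc_data state = { 0 };
    nalu_processor_itf itf = { &state, slice_proc_process_nalu };

    const unsigned char fake_nalu[4] = { 0x65, 0x88, 0x84, 0x00 };
    itf.process_nalu(itf.self, fake_nalu, sizeof fake_nalu);
    return 0;
}
```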


The SVC decoder component operates on a NALU level. All incoming NALUs are analysed in the primitive SVC decoder component and then scheduled, according to their NALU type, to a separate processing unit. The outcome is a portable SVC codec implemented using the ILDJIT and Fractal technologies.

Having carried out this work, OMP partners have presented to the Khronos OpenMAX working group a proposal for the standardization of a new standard component defining an SVC decoder. This proposal was documented and presented to the group in December 2009; an outline is shown below. The OpenMAX working group will review it and discuss it further in 2010. It is anticipated that the new standard component definition will be applied in a future version of the standard.

Name: video_decoder.svc
Description: Decodes the given compressed video stream into an uncompressed video stream.

Ports:
• VPB+0 (Video, input): Consumes compressed video content.
• VPB+1 (Video, output): Produces uncompressed raw video.

Port VPB+0 (Consumes compressed video content), required Parameters/Configs:
• OMX_IndexParamPortDefinition (r/w): Specify/query the video port settings.
  nFrameWidth = 176, nFrameHeight = 144, nBitRate = 64000, xFrameRate = 15,
  eCompressionFormat = OMX_VIDEO_CodingSVC, eColorFormat = OMX_COLOR_FormatUnused
• OMX_IndexParamVideoPortFormat (r/w): Specify/query the video format.
  eCompressionFormat = OMX_VIDEO_CodingSVC, eColorFormat = OMX_COLOR_FormatUnused
• OMX_IndexParamVideoSvc (r):
  eProfile = OMX_VIDEO_SVCProfileScalableBaseline, eLevel = OMX_VIDEO_SVCLevel1
• OMX_IndexParamVideoProfileLevelQuerySupported (r): Query supported profile/level pair by index.
• OMX_IndexParamVideoProfileLevelCurrent (r): Query current profile/level pair.
• OMX_IndexConfigVideoSVCRecLayerRequest (r/w): Configure the reconstruction layer.
  nDependencyId = 0, nQualityId = 0, nTemporalId = 0
• OMX_IndexConfigVideoSVCRecLayerCurrent (r): Query the reconstruction layer currently used by the decoder.
• OMX_IndexConfigVideoSVCLayerInfo (r): Get scalable layer information of the currently processed bitstream.

Port VPB+1 (Produces uncompressed raw video), required Parameters/Configs:
• OMX_IndexParamPortDefinition (r/w): Specify/query the video port settings.
  nFrameWidth = 176, nFrameHeight = 144,
  eCompressionFormat = OMX_VIDEO_CodingUnused, eColorFormat = OMX_COLOR_FormatYUV420Planar
• OMX_IndexParamVideoPortFormat (r/w):
  eCompressionFormat = OMX_VIDEO_CodingUnused, eColorFormat = OMX_COLOR_FormatYUV420Planar
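To show how an IL client might drive the proposed reconstruction-layer control, the sketch below sets the OMX_IndexConfigVideoSVCRecLayerRequest config on a decoder handle. The index and its structure are part of the OMP proposal rather than the released OpenMAX IL standard, so the structure layout is an assumption made for this illustration; only the OMX_SetConfig calling pattern and the standard header fields (nSize, nVersion, nPortIndex) follow the usual OpenMAX IL conventions.

```c
#include <OMX_Core.h>
#include <string.h>

/* Assumed layout of the config structure behind the proposed
 * OMX_IndexConfigVideoSVCRecLayerRequest index (not part of the
 * released OpenMAX IL headers). */
typedef struct {
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nPortIndex;
    OMX_U32 nDependencyId;   /* spatial layer  */
    OMX_U32 nQualityId;      /* fidelity layer */
    OMX_U32 nTemporalId;     /* temporal layer */
} OMX_VIDEO_CONFIG_SVCRECLAYERTYPE;

static OMX_ERRORTYPE select_base_layer(OMX_HANDLETYPE hDecoder,
                                       OMX_INDEXTYPE  nSvcRecLayerIndex)
{
    OMX_VIDEO_CONFIG_SVCRECLAYERTYPE layer;

    memset(&layer, 0, sizeof layer);
    layer.nSize                  = sizeof layer;
    layer.nVersion.s.nVersionMajor = 1;
    layer.nVersion.s.nVersionMinor = 1;
    layer.nPortIndex     = 0;   /* VPB+0: compressed input port                */
    layer.nDependencyId  = 0;   /* request the base layer only, i.e. the       */
    layer.nQualityId     = 0;   /* representation with the lowest resource use */
    layer.nTemporalId    = 0;

    /* Standard OpenMAX IL call; the index value itself would be obtained
     * through the extension mechanism (e.g. OMX_GetExtensionIndex). */
    return OMX_SetConfig(hDecoder, nSvcRecLayerIndex, &layer);
}
```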

OUTCOMES DETAIL Component Schema

The view of a media processing component as a standalone entity is a key concept in the OpenMAX standard and in Objective 1 of the OMP project (Computing Tools Infrastructure). A component ultimately comprises the executable code which performs the function of the component in a target architecture, plus sufficient documentation to allow the developer to incorporate the component correctly. The Component Schema provides a machine-readable, standardised form for documenting a media component for information exchange between tools, allowing future advances in the dynamic creation of services utilizing innovative tool 'pipelines'. In the developed system, each component has a component descriptor which accompanies the actual implementation. This descriptor is held as XML in a standard format, the format being described by a schema which forms the subject of this deliverable. The schema itself is in XSD (XML Schema Description) and is available to all tools and processes that use the component descriptors. The schema describes the kind of data that may be found in a component descriptor, the types of data held within, and any relationships between data in the descriptor. The schema can be used directly by any standard XML entry tool to assist the user in creating XML component descriptors (and is used extensively within the Media Designer tool). The schema can also be used to validate unknown component descriptors to identify and eliminate inconsistencies in the presentation before they become part of a design. The schema is a collection of XML files (in XSD format). There are currently 21 schema files in the complete set; some relate specifically to the component, while others allow information about designs, links between components and resources to be stored in the same way. Component.xsd in the associated archive contains the top-level component object; omxil.xsd contains the top-level description for all schemas in the set. The following diagram illustrates one section taken from the various schema files.
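As an illustration of the validation step just described, the following sketch checks a component descriptor against the top-level schema of the set using libxml2. The choice of libxml2 and the descriptor file name are assumptions made for this example; the Media Designer tool performs the equivalent check internally.

```c
#include <stdio.h>
#include <libxml/xmlschemas.h>

/* Validate an OpenMAX component descriptor against the top-level schema
 * of the set (omxil.xsd pulls in the other schema files). */
int main(void)
{
    int result = -1;
    xmlSchemaParserCtxtPtr pctxt = xmlSchemaNewParserCtxt("omxil.xsd");
    xmlSchemaPtr schema = pctxt ? xmlSchemaParse(pctxt) : NULL;

    if (schema) {
        xmlSchemaValidCtxtPtr vctxt = xmlSchemaNewValidCtxt(schema);
        if (vctxt) {
            /* 0: valid, >0: validation errors, <0: internal error */
            result = xmlSchemaValidateFile(vctxt, "svc_decoder_component.xml", 0);
            xmlSchemaFreeValidCtxt(vctxt);
        }
        xmlSchemaFree(schema);
    }
    if (pctxt)
        xmlSchemaFreeParserCtxt(pctxt);
    xmlCleanupParser();

    printf("descriptor is %s\n", result == 0 ? "valid" : "not valid");
    return result == 0 ? 0 : 1;
}
```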


Objective 2: Media Infrastructure

OBJECTIVES AND OUTCOMES AT A GLANCE

Objective 2: Media Infrastructure. To define and implement a novel media infrastructure consisting of enhanced standard APIs to support resource and context awareness.

Main outcomes: Advances in the Khronos OpenMAX standards. Extensions include:
• interfaces to support smart cooperation between advanced QoS management protocols and media components;
• tools to describe QoS metadata;
• a Resource Management framework;
• proposals for standardisation of QoS and Resource Management methodologies into the OpenMAX specification;
• open source implementations of QoS and RM.

OUTCOMES DETAIL Quality of Service Management (QoSManager)

The QoSManager middleware sits between the application and the OpenMAX IL core, and moderates demand for resources, taking account of demand estimates and other existing resource consumers. Through a new interface added to the IL core, the QoS manager is able to determine the potential resource demand of each of the quality-enabled components in the system. From the user the QoS Manager receives qualitative selection information, which allows the demand to be tailored to the availability of resources in the underlying system at the time the request is made. Quality here means the quality of the service performance that the end user can observe; the service is the media device functionality offered to the end user. In OMP we have focused on two areas: highest quality for media playback, and a low power mode for obtaining long battery life. In high quality mode, media playback is done with as high a perceived quality as possible. It assumes that all available capability can be used, with some graceful resource degradation. Power management assumes that there are free resources available and that resource capacity can be reduced; the quality used in low power mode will be the lowest acceptable quality. QoSManager uses policies, quality settings and resource estimates to manage the media stream parameters. This is done to obtain the following:
• the user's best perceived quality;
• the lowest quality acceptable to the user;
• operation within the resource restrictions of the device.
Media stream parameters are controlled by the device's own and the user's own policies and settings. QoSManager uses this stored information as the basis for its decisions when handling media stream requests. QoSManager also uses the Resource Manager to see what resources can be
used. The media graph parameters are then adjusted based on the previously mentioned information. The parameters can affect one or more components in the media graph. For example, in high quality mode the end user might prefer high resolution in video playback; this affects at least the video decoder and video sink resolution parameters.

OUTCOMES DETAIL Quality of Service Information Entry

Incoras Solutions' MediaDesigner supports Quality of Service management in a general purpose way through the definition and use of Quality Levels. The various editors in MediaDesigner support the use of Quality Levels for Quality of Service management. Each media component can offer one or more Quality Levels. Each Quality Level is characterised for the device in isolation in terms of the resources that the component uses at that level. This resource information is held in an XML file which is specific to the platform on which the component was characterised, and the information is available to the user through a dedicated editor within Media Designer. Additionally, the same XML-based information can be used as the basis for the component code itself, allowing the component Quality Levels and resource information to be queried by the target-side QoS management system at runtime.

When components are selected into a graph, it is also possible to characterise one or more Quality of Service levels for the graph as a whole. Each one selects one or more Quality Levels for the individual components in the graph. This selection information is held in the graph descriptor, another XML file, and is presented back to the user through the graph editor. The QoSManager uses rule sets to define how different quality parameters relate to each other. This makes it possible to make decisions when selecting which parameters to use with each media graph. The context-aware components support quality levels to indicate what parameter settings they support and how many resources they consume at each level. The rule sets define the basis of the decisions made for selecting a graph's component quality levels. The rules define how a graph's different component quality levels are combined to obtain the optimal solution for the selected quality policy mode. The rules are based on priority ordering, so the top rule is preferred over the rules coming after it, and so on. There are rules that concern the whole graph and others that apply directly to specific components. The rule sets are stored as XML in a text file, so that they can be updated easily. The outcome is a set of QoS data entry features allowing a user to easily capture and present information about the QoS levels and the associated characteristics of components. The data is all available at media use-case design time and during design exploration phases to assist users in making informed decisions about QoS trade-offs in resource-limited target devices.
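The self-contained sketch below illustrates the kind of priority-ordered rule evaluation described above: rules are tried from highest priority down, and the first one whose resource demand fits the budget reported by the Resource Manager determines the quality levels chosen for the graph's components. All structures, numbers and names are illustrative; the actual QoSManager reads its rules from XML and consults the real Resource Manager.

```c
#include <stdio.h>

/* One priority-ordered rule: the quality level it assigns to each component
 * of a two-component graph (decoder, renderer) and the CPU load it implies. */
typedef struct {
    const char *name;
    int decoder_level;     /* quality level index for the SVC decoder  */
    int renderer_level;    /* quality level index for the video sink   */
    int cpu_demand_pct;    /* estimated CPU demand of this combination */
} qos_rule_t;

/* Rules listed from most to least preferred (high quality first). */
static const qos_rule_t rules[] = {
    { "high quality",      2, 2, 85 },
    { "balanced",          1, 1, 55 },
    { "lowest acceptable", 0, 0, 30 },
};

static const qos_rule_t *select_rule(int available_cpu_pct)
{
    for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (rules[i].cpu_demand_pct <= available_cpu_pct)
            return &rules[i];   /* first (highest-priority) rule that fits */
    return NULL;                /* even the lowest level does not fit */
}

int main(void)
{
    int available = 60;         /* would come from the Resource Manager */
    const qos_rule_t *r = select_rule(available);

    if (r)
        printf("policy '%s': decoder level %d, renderer level %d\n",
               r->name, r->decoder_level, r->renderer_level);
    else
        printf("not enough resources for playback\n");
    return 0;
}
```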

OUTCOMES DETAIL Resource Management API Extensions for OpenMAX

The generic Resource Management (RM) API corresponds to a platform-specific, streaming-framework-independent resource manager. The resource manager is responsible for checking and reserving CPU and memory resources for a given (collection of) Linux threads. There is one resource manager per platform that keeps track of all platform users. OpenMAX extensions were created and proposed to enable resource management. The method used complies with OpenMAX IL standardization guidelines such as:
• The solution does not expose platform-specific resources to the IL client (media application), since this may be forbidden by the platform vendor if it exposes some Intellectual Property (IP). For this reason, resource estimates should not be accessible from the IL client, but handled by the Resource Manager.
• In OpenMAX version 1.2 there is no concept of an atomic operation on a set of components, and therefore it is not possible to check a set of resource budgets together. In OMP we use the group ID to indicate to the RM which components belong together in a chain, and therefore to know whether there is budget for the whole chain.
The figure below shows the class diagram for the OpenMAX core extensions to achieve RM support and the underlying RM APIs.


The core extension APIs to support RM are utilised in the Bellagio OpenMAX IL implementation described later.

Functions:
• OMX_ERRORTYPE IRMCoreExt_resourceEstimatesCheck (const QosComponent *components, int no_of_components)
  Checks if the RM can, at time of request, reserve the resources for the requested quality levels.
• OMX_ERRORTYPE IRMCoreExt_resourceEstimatesReserve (const QosComponent *components, int no_of_components, int *pBudgetIDList)
  Instructs the underlying RM to reserve the estimates for the components based on the requested quality level.
• OMX_ERRORTYPE IRMCoreExt_reservedResourcesAdjust (const QosComponent *components, int no_of_components, int *pBudgetIDs)
  Adjusts the already reserved resources for a graph if they change.
• OMX_ERRORTYPE IRMCoreExt_addComponentResources (const QosComponent *components, int no_of_components, int *pBudgetIDs)
  Adds the extra resources of a component inserted into an already created and/or running graph.
• OMX_ERRORTYPE IRMCoreExt_init (void)
  Handles any RM or system initialisation.
• int IRMCoreExt_queryNumberOfResourceBudgets (void)
  Queries the resource manager for the number of resource budgets the QoS should be prepared to handle. In the case of OMP it is 2: HOST_CPU budgets and HOST_MEMORY budgets.
• OMX_ERRORTYPE IRMCoreExt_addUserToBudgets (int *pBudgetIDs, int nBudgets, int pid)
  Adds a process id (User) to a list of budgets.
• OMX_ERRORTYPE IRMCoreExt_deleteBudgets (int *pBudgetIDs, int nBudgets)
  Instructs the RM to delete budget IDs.
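A minimal usage sketch of these core extensions is given below, following the signatures listed above. The QosComponent type is treated as opaque here because its contents are not reproduced in this summary, so how the component array is filled in (and which quality levels it requests) is left as an assumption; the code compiles against the OpenMAX IL headers but links only against the OMP extension library.

```c
#include <OMX_Core.h>   /* OMX_ERRORTYPE, OMX_ErrorNone */

/* The QosComponent type and the IRMCoreExt_* entry points belong to the
 * OMP core extension; the struct contents are not reproduced here. */
typedef struct QosComponent QosComponent;

OMX_ERRORTYPE IRMCoreExt_init(void);
OMX_ERRORTYPE IRMCoreExt_resourceEstimatesCheck(const QosComponent *components, int no_of_components);
OMX_ERRORTYPE IRMCoreExt_resourceEstimatesReserve(const QosComponent *components, int no_of_components, int *pBudgetIDList);
OMX_ERRORTYPE IRMCoreExt_addUserToBudgets(int *pBudgetIDs, int nBudgets, int pid);
OMX_ERRORTYPE IRMCoreExt_deleteBudgets(int *pBudgetIDs, int nBudgets);
int           IRMCoreExt_queryNumberOfResourceBudgets(void);

/* Check and reserve the budgets for a decoder + renderer chain before
 * transitioning the graph to Executing; roll back on failure. */
static OMX_ERRORTYPE reserve_chain_budgets(const QosComponent *chain, int n, int owner_pid)
{
    int budgetIDs[8];   /* large enough for the 2 OMP budgets (CPU, memory) */
    int nBudgets;
    OMX_ERRORTYPE err;

    if ((err = IRMCoreExt_init()) != OMX_ErrorNone)
        return err;

    nBudgets = IRMCoreExt_queryNumberOfResourceBudgets();

    /* Would the requested quality levels fit right now? */
    if ((err = IRMCoreExt_resourceEstimatesCheck(chain, n)) != OMX_ErrorNone)
        return err;     /* caller could fall back to a lower quality level */

    /* Reserve the estimates and register this process as a budget user. */
    if ((err = IRMCoreExt_resourceEstimatesReserve(chain, n, budgetIDs)) != OMX_ErrorNone)
        return err;

    if ((err = IRMCoreExt_addUserToBudgets(budgetIDs, nBudgets, owner_pid)) != OMX_ErrorNone)
        IRMCoreExt_deleteBudgets(budgetIDs, nBudgets);   /* roll back the reservation */

    return err;
}
```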

The outcome is a proposal for an open extension to the OpenMAX IL standard to facilitate smart resource management. The solution also delivers an underlying RM framework that demonstrates this advanced RM feature. The solution has been used in the Bellagio OpenMAX implementation and released for general public use within the Bellagio project under Sourceforge.

OUTCOMES DETAIL Bellagio Open Source Implementation

Bellagio is the open source implementation of an OpenMAX IL core and sample components, originally developed by ST with contributions from Nokia, Motorola, Incoras and others. It is the only fully open source implementation available and is widely used by the OpenMAX community and by newcomers. The OpenMAX Integration Layer Bellagio base class component interface was extended with the QoS interface. Accordingly, version 0.9.2.1 of the Bellagio framework was released with optional support for quality levels. Any component developed in this framework could optionally support quality level settings. Any change in quality level is reflected in a change in the OpenMAX standard parameters, and the application is notified of the automatic change through the standard OpenMAX callbacks. This prototype can be found in the public Bellagio open source project hosted on Sourceforge at the URL http://sourceforge.net/projects/omxil. Quality levels and associated resource descriptors are conveyed as meta-data alongside the component itself. In the Bellagio implementation, such information is attached using mechanisms similar to the ones that underlie the component registration process. A resource management subsystem was contributed to the Bellagio project. The resource manager uses the underlying cgroups mechanism for enforcing resource limitations on the system. Since then, the system has been augmented with a Quality of Service Manager layer which permits resources to be determined and allocated for groups of components together, as per the requirement to manage graph-level quality. The architecture which was initially envisioned for the resource manager and QoS manager has been significantly influenced by feedback from the OpenMAX IL standardisation committee. In
particular, the IL core now provides interfaces both to the resource manager and to the components' QoS extensions (rather than the QoS manager accessing the RM directly). This allows all of the detail associated with resource management to be hidden behind the IL interface, and allows vendors to provide implementations which do not reveal commercially sensitive information about actual component resource usage to the rest of the system. In addition, a prototype implementation of a quality policy manager has also been developed for Bellagio. The policy manager allows the system to select an appropriate quality level for the components in a graph based on the available resources and the user's preferences.

The outcome is a fully open source implementation of the QoS manager and RM concepts within the most widely used OpenMAX implementation.

OUTCOMES DETAIL GStreamer Integration

Octopus is a media engine for controlling audio and video streams. Media streams can be local files or actual streams over the network. Octopus provides a higher-level API for end user applications to manage multimedia content. Target applications are e.g. media players and voice and video call applications. Octopus itself works as a background service that several applications can use at the same time. The client API is currently a DBus API. For media content operations Octopus uses GStreamer and OpenMAX IL components.

Use of QoS in Octopus: the QoS Module (qosm). The QoS module is an Octopus module that works between the GStreamer backend and the QoS Manager. Its purpose is to communicate with the QoS Manager when a GStreamer event occurs. When Octopus starts to play a media stream it builds a GStreamer pipeline by creating the needed GStreamer elements inside it. After creating the pipeline, GStreamer changes the state of each element to "ready". The qosm module listens to the GStreamer bus and gets a message when the state of an element changes. When the state indicates that the element is ready, qosm checks whether the element is a gst-openmax plug-in element. If it is, qosm asks the QoSManager if there are enough resources to use the element. If there are not enough resources available, the playback is stopped and the pipeline is destroyed. After all elements are checked and the needed resources are reserved by the QoS Manager, playback proceeds normally.

References
[1] http://sandbox.movial.com/gitweb?p=octopus.git;a=summary
[2] http://freedesktop.org/wiki/GstOpenMAX

The outcome is a full implementation of the OMP QoS method in a widely deployed media player (Octopus) and the leading open source media engine (GStreamer). This approach is an enabler for widespread adoption of OMP QoS and further development of the concepts.
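The sketch below shows, in simplified form, the kind of bus watch qosm installs: on a state-changed message for an element reaching the READY state, it asks the QoS Manager whether the element may run, and stops playback otherwise. The qos_manager_allows() hook and the recognition of gst-openmax elements by an "omx_" factory-name prefix are assumptions made for this illustration; only the GStreamer calls themselves are standard API.

```c
#include <gst/gst.h>

/* Hypothetical hook into the QoS Manager: returns TRUE if enough
 * resources are available for this element. */
extern gboolean qos_manager_allows(GstElement *element);

static gboolean qosm_bus_cb(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    GstPipeline *pipeline = GST_PIPELINE(user_data);
    (void)bus;

    if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_STATE_CHANGED &&
        GST_IS_ELEMENT(GST_MESSAGE_SRC(msg))) {
        GstState old_state, new_state, pending;
        GstElement *elem = GST_ELEMENT(GST_MESSAGE_SRC(msg));
        GstElementFactory *factory = gst_element_get_factory(elem);

        gst_message_parse_state_changed(msg, &old_state, &new_state, &pending);

        /* Only gst-openmax elements are subject to the QoS check; here they
         * are recognised by an assumed "omx_" factory-name prefix. */
        if (new_state == GST_STATE_READY && factory &&
            g_str_has_prefix(GST_OBJECT_NAME(factory), "omx_") &&
            !qos_manager_allows(elem)) {
            /* Not enough resources: stop playback and tear down the pipeline. */
            gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_NULL);
        }
    }
    return TRUE;   /* keep watching the bus */
}

/* Installed once per pipeline, e.g. when Octopus builds it:
 *   gst_bus_add_watch(gst_pipeline_get_bus(pipeline), qosm_bus_cb, pipeline);
 */
```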


Objective 3: Reference Platforms

OBJECTIVES AND OUTCOMES AT A GLANCE

Objective 3: Reference Platforms. To prototype a standards-based run-time environment and methodologies.

Main outcomes: The main goal of the integration and validation phase of OMP was to bring together the OMP tools and methodologies with the multimedia software stacks, namely:
• Scalable Video Codec
• Bellagio OpenMAX implementation
• Fractal Component Based Software Engineering tools
• ILDJIT Just-In-Time dynamic compiler
• Media Designer OpenMAX deployment and analyzer tool

The end outcome is fully functional industrial prototypes, together with their assessment and benchmarking reports, which demonstrate the realistic feasibility of the OMP objectives on actual hardware platforms. Parts of the methodologies and software stacks have in fact been integrated on three different platforms coming from different partners: the Nomadik Hardware Kit NHK15 from STMicroelectronics, the Qemu virtualization tool used by Movial, and the HEP-II FPGA platform from HHI. Particular attention has been paid to ensuring that the OMP software technologies integrate smoothly with the OpenMAX Integration Layer standard for multimedia primitives. Finally, actual results on OMP performance metrics have been obtained for both the OMP tools and the OMP software stacks, in terms of software productivity figures and runtime overhead assessments with respect to conventional existing tools and software frameworks, where applicable. In summary, the outcomes are:
• Media Designer OpenMAX Fractal SVC decoder on NHK15 board prototype
• Scalable Video Codec using ILDJIT on a Qemu ARM emulation prototype
• Scalable Video Codec using ILDJIT on an ST Nomadik platform
• An OMP prototype utilising the ST Nomadik Multiprocessing Framework
• A DKU research prototype utilising the deblocking component of the SVC
• A FRACTAL SVC decoder prototyped on the HEP-II FPGA platform

OUTCOMES DETAIL Media Designer OpenMAX Fractal SVC decoder on NHK15 board prototype

This prototype demonstrates a real-time component based scalable video decoding OpenMAX
application on the NHK15 board, which is an ARM based STM development kit for the Nomadik STn8815 System on Chip. The application is governed by the Incoras Media Designer tool from a developer's Windows machine. The prototype aims at demonstrating, on a real ARM-based embedded platform, the integration efforts among the Incoras Media Designer tools, the INRIA component based software technology, the STM Bellagio OpenMAX reference implementation, and the HHI Scalable Video Decoder technology. The prototype of the video decoder on the ST Nomadik NHK15 board is composed of the following components:
• Fractal based Scalable Video Decoder (SVC) software library
• OpenMAX SVC decoder wrapper on top of the decoder library; this component is compatible with the OpenMAX standard version 1.1.2 and is integrated in the public OpenMAX implementation named Bellagio, version 0.9.2
• An OpenMAX component that performs a colour conversion from YUV to RGB
• Two possible OpenMAX components for video rendering
This prototype is capable of switching between three different general quality levels for the output video stream, characterized by different resource usages. The full OpenMAX chain is composed of the following elements:

The colour converter from YUV to RGB is based on a free multimedia library called FFMPEG. The two possible video renderers are based on the X11 window system or directly on the frame buffer. For the running demo on the NHK15, frame buffer direct rendering has been chosen to optimize execution on the embedded platform, while the X11 renderer is used for execution on x86. The execution of the Bellagio SVC Fractal application is either governed using the board's keypad (standalone version) or by the Incoras Media Designer tool by means of an Ethernet connection. MediaDesigner was used in the design exploration mode to connect to the target via a socket interface, create the media graph and control the media graph at run-time, as shown below.
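For reference, a chain like the one described above would be assembled through the standard OpenMAX IL core calls, roughly as sketched below. The component name strings passed to OMX_GetHandle are illustrative placeholders (the real names are those registered by the Bellagio core), and buffer allocation, callback definitions and error handling are omitted for brevity.

```c
#include <OMX_Core.h>

/* Assemble the SVC decoder -> colour converter -> video sink chain.
 * Component name strings and the callback structure are illustrative. */
static OMX_ERRORTYPE build_svc_chain(OMX_CALLBACKTYPE *cb,
                                     OMX_HANDLETYPE *hDec,
                                     OMX_HANDLETYPE *hCsc,
                                     OMX_HANDLETYPE *hSink)
{
    OMX_ERRORTYPE err;

    if ((err = OMX_Init()) != OMX_ErrorNone)
        return err;

    /* Names below stand in for the Bellagio-registered component names. */
    if ((err = OMX_GetHandle(hDec,  "OMX.example.video_decoder.svc",       NULL, cb)) != OMX_ErrorNone ||
        (err = OMX_GetHandle(hCsc,  "OMX.example.video_colorconv.yuv2rgb", NULL, cb)) != OMX_ErrorNone ||
        (err = OMX_GetHandle(hSink, "OMX.example.video_sink.fbdev",        NULL, cb)) != OMX_ErrorNone)
        return err;

    /* Tunnel the output port of each stage into the input port of the next
     * (port indices follow the VPB+0 input / VPB+1 output convention). */
    if ((err = OMX_SetupTunnel(*hDec, 1, *hCsc,  0)) != OMX_ErrorNone ||
        (err = OMX_SetupTunnel(*hCsc, 1, *hSink, 0)) != OMX_ErrorNone)
        return err;

    /* Move the whole chain towards Idle; Executing, buffer handling and
     * EventHandler processing are omitted here. */
    OMX_SendCommand(*hDec,  OMX_CommandStateSet, OMX_StateIdle, NULL);
    OMX_SendCommand(*hCsc,  OMX_CommandStateSet, OMX_StateIdle, NULL);
    OMX_SendCommand(*hSink, OMX_CommandStateSet, OMX_StateIdle, NULL);
    return OMX_ErrorNone;
}
```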


The outcome is a fully integrated demonstration showcasing the developed OMP technologies.

OUTCOMES DETAIL Scalable Video Codec using ILDJIT on a Qemu ARM emulation prototype / Scalable Video Codec using ILDJIT on an ST Nomadik platform

The second prototype scenario in OMP successfully demonstrated the integration of Toolchain Beta onto an ARM emulation platform (Qemu) and then onto the ST NHK15 platform. The following software is included in the prototype:
• Qemu system emulator
• GCC4NET static compiler for C language ( http://gcc.gnu.org/cli )
• ILDJIT dynamic compiler for ECMA-335 bytecode ( http://ildjit.sourceforge.net )
• SVC encoder/decoder, available from the OMP Subversion repository: http://info.openmediaplatform.eu/svn/WP3/Prototype/trunk/CbSvcDec


The same prototype scenario was developed on the ST NHK15 platform. The following software is included in the prototype:
• NHK15 board
• GCC4NET static compiler for C language ( http://gcc.gnu.org/cli )
• ILDJIT dynamic compiler for ECMA-335 bytecode ( http://ildjit.sourceforge.net )
• SVC encoder/decoder

The Nomadik hardware kit, NHK-15, is a comprehensive hardware environment enabling the full advantages of the STn8815 Nomadik family for media convergence devices. It allows easy and fast development of applications and products such as internet tablets, portable multimedia and TV players, and personal navigation devices. The NHK-15 is the ST reference hardware delivery for STn8815 Linux software offerings and the application development tool for the Linux and WinCE developer community. The NHK-15 board is a highly integrated board, embedding the main media convergence features while allowing a custom and expandable feature set. The NHK-15 kit encompasses among others:
• Nomadik STn8815 multimedia application processor: STn8815A 12 mm x 12 mm package
• 1-Gbit DDR mobile SDRAM
• 4.3" WVGA RGB 24-bit LCD display
• STw5095 stereo audio codec supporting 2 loudspeakers, 1 headset stereo, dual stereo microphones and 1 external microphone
• Memory card interface for SD/MMC
• A series of peripherals and connectivity, among which keypad and Ethernet

The following picture gives an idea of the form factor of the development board we used throughout the experiments described below.


Figure 2 - NHK15 hardware kit

The media processor mounted on this development board, i.e. the Nomadik STn8815, comprises among others the following functional blocks, which play an important role in our benchmarking activities:
• ARM926EJ-S host processor: a cached ARMv5 CPU including Memory Management Unit (MMU), 16 KBytes of level 1 instruction cache, 16 KBytes of level 1 data cache and 128 KB of level 2 cache (data and instruction)
• Multichannel Serial Port (MSP)
• Smart Video Accelerator (SVA): a programmable DSP with dedicated hardware acceleration blocks for:
  ◦ H264/AVC Baseline Profile level 2 codec
  ◦ MPEG4 Simple Profile video codec
  ◦ H263 Profile 3 video encoder/decoder/codec
  ◦ Picture Post-Processor
  ◦ etc.
• Smart Audio/Imaging Accelerator (SAA/SIA): a high-performance block which performs as an audio and imaging hardware accelerator based on a programmable audio DSP with 24-bit data path and ultra low power implementation.

OUTCOMES DETAIL
An OMP prototype utilising the ST Nomadik Multiprocessing Framework
The main purpose of the Nomadik Multiprocessing Framework (NMF) is to provide a programming model and an associated runtime environment for easy development of software on a Multiprocessor System-on-Chip (MP-SoC) distributed environment. This programming model targets the Nomadik processor family, where a host processor controls several sub-systems (DSPs), but it can easily be ported to other MP-SoCs. NMF is a modular and extensible component model that can be used with various programming languages to design, implement, deploy and reconfigure various systems and applications. The NMF component model is a refinement of a more general component model called Fractal (http://fractal.objectweb.org). This component model makes heavy use of the separation-of-concerns design principle. The idea of this principle is to separate into distinct pieces of code or runtime entities the various concerns or aspects of a system: implementing the service provided by the system, but also making the system configurable, secure, reliable, etc. In particular, the NMF component model uses three specific cases of the separation-of-concerns principle: separation of interface and implementation, component-oriented programming, and inversion of control. The figure below summarizes a typical use case scenario and the central role NMF plays in interfacing between the host processor and the available DSPs for video (SVA) and/or audio and imaging (SIA) acceleration.
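Before moving to that figure, the small C sketch below illustrates, independently of the real NMF API, what these principles look like in practice: a provided interface is declared separately from its implementation, and a component receives the interfaces it requires through an explicit bind step performed by the framework (inversion of control). All names in the sketch are illustrative assumptions, not NMF symbols.

/* Illustrative sketch (not the actual NMF API): a component exposes a
 * provided interface as a table of function pointers, and the interfaces it
 * requires are injected from outside by an explicit bind step. */
#include <stdio.h>

/* Interface definition: what a sink component provides. */
typedef struct {
    void (*push_frame)(void *self, const unsigned char *data, unsigned len);
} sink_itf;

/* A decoder component: requires a sink interface. */
typedef struct {
    sink_itf *sink;      /* required interface, filled in by the binder */
    void     *sink_self;
} decoder_component;

/* Binding is done by the framework/loader, not by the component itself. */
static void bind_decoder_to_sink(decoder_component *dec, sink_itf *itf, void *self)
{
    dec->sink = itf;
    dec->sink_self = self;
}

/* Concrete implementation of a sink (e.g. a frame-buffer renderer stub). */
static void fb_push_frame(void *self, const unsigned char *data, unsigned len)
{
    (void)self; (void)data;
    printf("rendering %u bytes\n", len);
}

int main(void)
{
    static sink_itf fb_sink = { fb_push_frame };
    decoder_component dec;
    bind_decoder_to_sink(&dec, &fb_sink, NULL);

    unsigned char frame[16] = {0};
    dec.sink->push_frame(dec.sink_self, frame, sizeof frame); /* call through the interface */
    return 0;
}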

Kernel space NMF summary
The figure below summarizes the host-side software stack as it has been developed within the OMP project, highlighting in yellow the two main components put in place, i.e.:
• Component Manager library (CM lib): aka libcm.[a,so]
• CM character driver (/dev/cm) and its kernel module: aka cmld.ko


[Figure: host-side NMF software stack. User space: the OMP demo applications with their stubs and skeletons on top of the CM library (CM user proxy, OSAL, CM syscall wrapper). Kernel space: the /dev/cm character driver with the CM engine and the NMF OSAL.]

The components with an orange background refer to elements not explicitly developed within the OMP project but reused as delivered by the internal STM NMF development and integrated into the Linux operating system, while the blue boxes represent the application(s) realized within the OMP context for demonstration and evaluation of the technology. The architecture of the Linux kernel-space driver allows multiple client applications to concurrently access the Component Manager engine, and thus to call all the NMF services in parallel without any kind of coordination. In a real-life system this is not acceptable, because resources are limited and the different client applications might interfere with one another, compromising the overall system behaviour. There is therefore a need for platform-wide resource and policy management. This need was investigated in depth in the OMP project, and the task was to integrate the Resource Manager (RM) with the NMF/DSP framework as shown below.


Extended OMP resource manager concept

To demonstrate the OMP solution with NMF the following demo architecture was used:

NMF prototype demo

The demo prototype is made up of two applications running on the host processor. One is responsible for the audio part of the setup and the second for the video part. Both retrieve their media data from the file system, which can be hosted on any kind of storage media supported by the NHK-15 board. Typically this will be an external MMC card, but it can also be hosted on a remote NFS server and loaded over an Ethernet connection.
The applications are implemented using the NMF programming model in order to offload as much of the media processing as possible to the available DSPs, while acting essentially as user interfaces and coordinators of the ongoing activities. Two distinct DSPs are used for the components of the audio and of the video chain respectively, in order to achieve a high degree of parallelism. While the audio chain consists of four purely hand-crafted NMF components, the video chain is made up of only two, but due to their high complexity these video components are further hardware accelerated beyond the simple exploitation of the programmable MMDSP core.
The audio chain is composed of an mp3 decoder component followed by a data format converter, which transforms 24-bit PCM data into 16-bit PCM data suitable for the ALSA sink. Before the data is sent to the ALSA sink device, two additional components have been inserted, which realize respectively an audio effect (the so-called "flanger" distortion) and a volume controller. Both can be directly controlled by the audio application through the NMF data path.
The video chain is composed of an mp4 decoder and a picture-post-processor (PPP) component. This last component transforms the video frames coming out of the mp4 decoder from YUV into RGB, which is the format required by the frame buffer device. Furthermore, the PPP component resizes the video frames in order to adapt them to the screen size and centre them on the display.
OUTCOMES DETAIL
A DKU research prototype utilising the deblocking component of the SVC
DKU is included in the OMP project to meet the goal of increased coding productivity when creating media libraries on multi-core heterogeneous SoCs. The description of work specified that a single source be able to run efficiently on multiple mobile hardware platforms that have a variety of multi-core SoCs. DKU attempts to meet the goal of "single source, multiple hardware, efficient on all" by isolating the application from the hardware. It isolates with an interface: on one side of the interface is the application code; on the other side is hardware code. The interface is used by infrastructure that bridges between the application and the hardware. The interface that DKU defines is a pattern in the code. The application programmer follows the pattern when writing their source. This lets the infrastructure find the portions of the source that it needs in order to make the application efficient on the hardware. Hence, formally, DKU is a programming pattern for expressing data parallelism, plus infrastructure that recognizes and uses the pattern. "DKU" stands for "Divider Kernel Undivider" because the pattern has three elements: a Divider that divides work into pieces, a Kernel that says how to do the work, and an Undivider that collects the small answers into the larger one. DKU's infrastructure recognizes the D, the K, and the U pieces of code, then uses them to make the code run efficiently on particular hardware.
The infrastructure is organized as a collection of modules, each written for one kind of hardware. One of these modules takes the D and uses it to create the right amount of work for one processing element. Each machine has a different number and size of processing elements, so the same D is re-used in each module to create work pieces of different size and number. The module likewise uses the K, arranging for it to be sent to a processing element together with a piece of work and for the processing element to run the K on that work piece. Finally, the module arranges for the U to receive the result of each piece of work. The modules are called "specializers". The infrastructure has one specializer for each kind of hardware that will run an application. Each specializer produces an executable image of the application, and that executable runs on the one kind of hardware the specializer was written for. To use the infrastructure, application developers send their code to the infrastructure; users who want to run the application then download an executable from it. Internally, the infrastructure holds many executables for the same application and chooses which of them the user gets. For OMP, the application is an OMX component. OMX developers write a component and send the source to the infrastructure. Inside the infrastructure, each specialization module is applied to the source code of the OMX component. A typical specializer first examines the source code, finds the D's, the K's and the U's, then performs a transform on the source. After the transform, it compiles the code to an executable. Inside the executable there is typically a runtime scheduler that the specializer inserted into the source. The run-time scheduler is linked to the D, K, and U code. During a run it uses the D to divide the work up, sends the work pieces to processing elements along with the K, and makes each processing element perform the K on its work piece. The run-time then causes the results to be sent back to the scheduler, which runs the U on them to reassemble the overall result.
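As a purely illustrative sketch of the pattern (and not of the actual OMP DKU infrastructure), the C fragment below applies the Divider, Kernel and Undivider roles to a toy problem; the "specializer" that would normally map the pieces onto threads, DSPs or GPU cores is reduced here to a plain loop, and all names are assumptions.

/* Illustrative DKU (Divider / Kernel / Undivider) sketch on a toy problem:
 * doubling every element of an array. */
#include <stdio.h>

#define N 16

typedef struct { int *data; int start; int count; } piece_t;

/* D: divide the whole work into 'num_pieces' contiguous pieces. */
static void divider(int *data, int n, int num_pieces, piece_t *pieces)
{
    int chunk = (n + num_pieces - 1) / num_pieces;
    for (int i = 0; i < num_pieces; i++) {
        pieces[i].data  = data;
        pieces[i].start = i * chunk;
        pieces[i].count = (pieces[i].start + chunk <= n) ? chunk : n - pieces[i].start;
        if (pieces[i].count < 0)
            pieces[i].count = 0;
    }
}

/* K: how to process one piece (here: double each element in place). */
static void kernel(piece_t *p)
{
    for (int i = 0; i < p->count; i++)
        p->data[p->start + i] *= 2;
}

/* U: recombine results (trivial here, since the kernel works in place). */
static void undivider(piece_t *pieces, int num_pieces)
{
    (void)pieces; (void)num_pieces;
}

int main(void)
{
    int data[N];
    for (int i = 0; i < N; i++) data[i] = i;

    piece_t pieces[4];
    divider(data, N, 4, pieces);      /* D: make 4 work pieces              */
    for (int i = 0; i < 4; i++)       /* "specializer": run K on each piece */
        kernel(&pieces[i]);
    undivider(pieces, 4);             /* U: reassemble the overall result   */

    for (int i = 0; i < N; i++) printf("%d ", data[i]);
    printf("\n");
    return 0;
}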

OUTCOMES DETAIL
A FRACTAL SVC decoder prototyped on the HEP-II FPGA platform
This prototype demonstrates the componentized SVC decoder running on HHI's HEP-II FPGA platform as shown below. The platform itself is designed as a motherboard for a PXA270 processor module with an Intel XScale processor and up to two additional FPGA boards. For the demonstration one Altera Stratix-III FPGA board is used. This prototype is capable of demonstrating hardware-accelerated FRACTAL components, using the FPGA platform board to implement the hardware accelerators. The reconstructed YUV 4:2:0 video signals can be shown on the display either via the YUV frame buffer overlay of the XScale processor or via the YUV video output unit residing on the FPGA board.


The SVC decoder is made of eleven components. Five of them have a critical role and an intensive computing footprint during the decoding process. Thanks to Cecilia, the Fractal C implementation framework, the SVC decoder can be compiled either as a native standalone executable or as a set of independent binary objects representing each component. The latter binary components may be deployed in a distributed environment using the Comete middleware.


Benefits and Potential Impact
OMP BENEFITS
The OMP project has resulted in a suite of tools, components, algorithms, methods and advances in media API standards. The results show how the combination of the OMP component-based methodologies together with QoS & RM extensions to open and standardized media APIs can enable improvements in the flexible delivery of dynamic services, as required by the service, constrained by platform resources or dictated by network characteristics. This chapter reports OMP tools performance figures and compares them, where applicable, to legacy methodologies in terms of software deployment productivity, portability, and commercial modes.
OMP BENEFITS: PRODUCTIVITY
Media Designer
To identify the potential productivity benefits of MediaDesigner in the design life-cycle, Incoras carried out a case study on a commercially available Beagleboard platform that includes an ARM Cortex-A8 with NEON acceleration. The use case was carried out with a reference AMR audio decoder and a NEON-accelerated AMR decoder from a 3rd-party software vendor who was not familiar with OpenMAX IL. The case study went through the entire process of componentising the AMR algorithm as an OpenMAX component; installing the component on the Beagleboard; testing the OpenMAX interface; re-testing the OpenMAX-wrapped codec with AMR compliance tests; building the AMR codec into a real-world use case; and comparing performance results for the reference and NEON-accelerated components. The overall result was a 90% reduction in effort. However, the case study was limited to design, test and basic integration. The diagram below depicts an extension to this method where the system integrator can experiment with the codec running in more complex use cases and try out 'what-if' scenarios on SoCs with multiple programmable and acceleration nodes. In the example shown below a graph has been deployed across multiple service nodes on an OMAP4 device.


For platforms with 50 or more codecs and effects from internal and 3rd-party suppliers that need to be integrated, tested and experimented with in what-if scenarios, the potential saving in a development runs into several man-years of effort. In conclusion, the approach gained considerable time savings versus the traditional approach and enabled an interactive design methodology for 'what-if' scenarios that is not possible with the more traditionally employed approach.
Cecilia
Deploying a distributed application using Comete requires a compiled binary version of each component. Moreover, for each interface with a different signature, a pair of communication-handling components (stub & skeleton) must be written and compiled. Finally an application-loading component (the loader) must be designed to give deployment instructions to the runtime in terms of instantiations and bindings. The distributed version of the SVC decoder consists of 84 components, including thirty-six pairs of stubs and skeletons, eleven primitives and one loader. It would be rather painful work to write the code for each individual "glue" component. Likewise, manually writing the application loader may turn out to be tricky, and the deployed application might be incoherent if some bindings are incorrect or missing. Fortunately, the Cecilia toolchain automates all these steps. All you have to supply is the code for each primitive component and an architecture description of the application in terms of components and bindings (ADL). Based on the constraints expressed in the ADL, Cecilia is able to assert the architectural consistency of the application (respect of the role, the type (signature/inheritance), the cardinality and the mandatory presence of each component's interfaces). The toolchain is also in a position to generate an application loader component from an extended ADL describing mappings between components and processing elements (Deployment ADL). Thus, the Cecilia framework allows programmers to design and deploy distributed applications in a convenient and effective way.
DKU
For parallelism frameworks, the major quantities of interest are:
1) Productivity gained by using that framework to express the parallelism


2) Degree of parallelism exposed
3) Degree of portability
Standard ways of measuring these quantities have not yet arisen. We measured them by:
1) the variety of machines that can use the same source
2) the highest percentage of ideal speedup achievable on each machine by the same source
3) the break-even size of work-unit on each machine with the same source
The variety of machines that can profitably use the same source indicates the degree of portability, as well as the savings of the labour that would otherwise be required to re-write the code for each of those machines. The percentage of ideal speedup achievable on a given machine indicates both whether a sufficient degree of parallelism is exposed in the problem and how much overhead the parallel framework adds. Finally, the break-even size of work-unit on a machine indicates the amount of overhead imposed by the parallel framework, which in turn affects the degree of portability. Portability must be both high, meaning many machines, and efficient on each, in order for the productivity gains to be realized. If an application runs on a parallel machine but doesn't show a high percentage of the machine's peak performance, then it will have to be hand-tuned, which reduces the productivity gain of using the parallel framework. At the moment, DKU is the only known parallel framework capable of running a single source on a variety of machines ranging from embedded controllers running signal processing codes, to shared-memory desktops, to GPUs, to on-chip distributed processors like the Cell and Larrabee, up to distributed Grid and tightly coupled supercomputers, and a heterogeneous collection of all of these combined. At the time of this writing, it has been demonstrated in both Java and C, on single-socket multi-core, multi-socket multi-core, the Cell BE, and on a heterogeneous collection of shared-memory machines, and a demonstration on NVidia GPUs is in progress. DKU enhances productivity by hiding machine details from the application programmer. The machine details have historically been the most difficult aspect of parallel programming. Hence, DKU effectively eliminates the most difficult parts of parallel programming. We have not measured the productivity gains, but anecdotal evidence suggests that DKUizing code increases by 50% to 100% the time required to write the parallelized portions. However, it is only applied to loop-nests, which typically account for less than 5% of total lines of code but roughly 99% of running time. The percentage of an application comprised of loop nests that are amenable to the DKU pattern varies; however, the percentage of running time should stay consistently high.
Debugging time figures
The Cecilia framework for Fractal/C doesn't provide specific debugging tools. The conventional GDB or any other debugger and the relevant front ends (Eclipse, DDD, ...) supporting threaded process debugging should be sufficient to debug component-based applications compiled with Cecilia, even when deployed with Comete.

OMP BENEFITS: PORTABILITY
ILDJIT and GCC4NET offer bytecode portability across different platforms. In the prototypes, portability is demonstrated between Intel (x86) and ARM processor architectures.


The decoupling of code generation from bytecode loading, analysis and optimization is performed in ILDJIT by means of the intermediate representation (IR). Thus, it is sufficient to provide a retargeting of the code generation library to a new architecture to obtain the full features of ILDJIT and GCC4NET, without need of recompiling the existing code. This way, executables can be downloaded, installed and run regardless of the target platform.

OMP BENEFITS: FLEXIBILITY
The underlying CBSD technique of the FRACTAL-based CECILIA toolchain provides a high degree of flexibility in the software design and implementation process. As an example, the SVC decoder is made of several components for the various signal processing routines of the SVC standard. Due to the computational complexity of these signal processing routines, specialized hardware accelerators are available, which makes it easy to exchange a pure software component for a hardware-accelerated component. This component substitution is not restricted to the static use case, but can also be realised on the fly whilst the SVC decoder is running. The code changes necessary for a component exchange are limited to the architecture description of the final application and do not involve any modification of the implemented C routines. Following the OMP programming rules has enabled the componentized SVC decoder to be executed within a synchronous execution model as well as an asynchronous execution model. Therefore the SVC decoder can be flexibly mapped onto different single/multi-core platforms, limiting the synchronization overhead, especially between components on the same core, to a minimum. Again, the modification of the execution model and/or of the component mapping to the different processing elements does not involve any modification of the implemented C routines and is handled completely by the architecture description of the final application.

OMP BENEFITS: USABILITY
The test methodologies typically employed within an engineering organisation are integrated as part of a larger regression test system. This approach is applicable for internal development but does not lend itself to collaboration with suppliers and customers. To ensure collaboration for multimedia development it is important that the silicon vendors and handset OEMs can deploy their tests to suppliers or customers. In the case of developing multimedia functionality for a consumer device there are a large number of stakeholders who need to collaborate in the design process. The figure below gives a simplified view of the value chain, in which there is a strong need to communicate design and test information at various stages in the multimedia design life-cycle. Historically the silicon vendor has been at the centre of the ecosystem, responsible for delivering SoC-specific software productivity tools to their 3rd-party partners and then putting provisions in place (typically web sites) where OEM customers can access software from the silicon vendor and 3rd-party partners alike. The OEM is then responsible for integrating the software into a suitable end-user product.


The landscape has changed somewhat in the past few years for smartphones, largely due to the dominance of high-end operating systems such as Symbian and, more recently, Google Android. This has had the effect of pushing the silicon vendor out of the centre position; instead, the ecosystem has become OS-centric. The importance of collaboration is evident in the wide adoption of these OSs under an open source model. Here all stakeholders access or make some contribution to the open source initiative.

MediaDesigner's role in the ecosystem is to provide an end-to-end solution that supports:
• Component design
• Component test
• Interactive use-case design exploration and performance analysis
The solution has been built with many design and test views enabling users to work at an intuitive level with the designs and tests. Underlying the solution is a set of schemas for component descriptions, test descriptions and use case descriptions. These schemas validate the information, which is then stored as XML files. These XML files are easily shared within the ecosystem. As a whole, MediaDesigner provides a complete environment in which information can be shared, removing the need for stakeholders to create the additional documentation and spreadsheets historically used in communicating with suppliers and customers. The test system employed within MDserver is flexible enough to load any C-based test harness; a sketch of such a harness is given below. MediaDesigner has a description of each test stored in an XML test descriptor file, allowing a user to build up test scenarios and complex batch tests. This C-based test approach ensures that any tests used as part of the MediaDesigner collaboration toolkit can also be integrated into the larger regression test flow of the silicon vendors and handset OEMs.
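The entry point actually expected by MDserver is not reproduced here; the fragment below is only a hedged sketch of what a small C-based test harness driven by an XML test descriptor might look like. The function name, the key/value parameter convention and the pass/fail return code are all assumptions made for illustration.

/* Hypothetical C test harness sketch: the entry-point name and the way the
 * XML test descriptor parameters are passed are assumptions, not the
 * documented MDserver interface. */
#include <stdio.h>
#include <string.h>

/* Returns 0 on pass, non-zero on fail, as a typical harness convention. */
int run_test(int argc, const char *argv[])
{
    const char *component  = NULL;
    const char *input_file = NULL;

    /* Parameters taken from the XML test descriptor, e.g.
       component=OMX.st.audio_decoder.amr input=ref_sequence.amr */
    for (int i = 0; i < argc; i++) {
        if (strncmp(argv[i], "component=", 10) == 0) component  = argv[i] + 10;
        if (strncmp(argv[i], "input=", 6) == 0)      input_file = argv[i] + 6;
    }
    if (!component || !input_file) {
        fprintf(stderr, "missing test parameters\n");
        return 1;
    }

    printf("testing %s with %s\n", component, input_file);
    /* ... load the component, feed the input, compare against a reference ... */
    return 0;
}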


OMP BENEFITS: INTEROPERABILITY
The MediaDesigner workbench is a collection of Eclipse plug-ins. These plug-ins work on a set of Java data models that are derived from the XML schemas for component, test and use-case descriptions. The solution has made extensive use of the Eclipse Modelling Framework (EMF) and the Graphical Editing Framework (GEF) to ensure it integrates cleanly. The use of Eclipse ensures a route to integration with other commercially available toolkits, e.g. ARM RealView, TI Code Composer Studio 4, Nokia Carbide.

With OpenMAX IL gaining widespread adoption within the semiconductor and consumer electronics industries (smartphone sector), the potential impact of the OpenMediaPlatform (OMP) project is very high. Many Tier-1 semiconductor vendors have already publicly disclosed their use of OpenMAX: public demonstrators, press announcements or publicly available implementations have been given by Nvidia, ST, NXP, Broadcom and Texas Instruments, to name a few. However, the more significant factor at play is the shift towards OS-centric ecosystems. Firstly, there is the emergence of Google Android as a real force in the smartphone OS market. Secondly, Nokia's acquisition of Symbian and the decision to open-source Symbian has opened up the Symbian ecosystem. Thirdly, the LiMo Foundation initiative to provide an open Linux mobile stack has gained significant momentum in the Asia-Pacific region. All these OSs are utilising OpenMAX as the standard interface for multimedia components. Coupled with the emergence of very powerful heterogeneous multicore/manycore platforms, there is a clear need to simplify the development flow for distributed and complex media use cases that must operate under a wide range of ambient conditions. OMP's combined tool technologies for productivity improvement, code portability, componentisation, advanced split-compilation techniques and parallelisation directly address the problems that the industry is facing with the aforementioned advances in embedded hardware architectures. OMP's media stack enhancements for dealing with the complex issues of Quality of Service management and platform-neutral Resource Management pave the way for OpenMAX to deliver some of the key functionality it has been lacking for more advanced platforms and more complex use-case scenarios. The technologies developed and demonstrated under the OMP project have been delivered as open source projects and commercially available toolsets. These technologies can be readily exploited and extended for use on next-generation platforms. Indeed, OMP partners have been working closely with the 'king-pins' of the newly emerged open OS ecosystems with a view to leveraging OMP technologies.


Ultimately the adoption and extension of OMP provides a springboard for the creation of a new media-service ecosystem with the potential to radically change current design practices and significantly improve software productivity throughout the consumer device value chain. The following groups of users can benefit:

USER / BENEFITS
Handset suppliers: Get to market quicker, at lower development cost, with more capable and interoperable products which use less power
IP suppliers: Prototype and develop scalable components more rapidly, knowing they are portable across different handset devices
Network providers: Offer more personalised, more robust media delivery and content, exploiting dynamic QoS
Application providers: Creation of new business models for the supply and distribution of enriched media experiences
Community users: Ability to create, distribute and access engaging services and content anywhere

In summary, the impact of this project will achieve these goals:
1. Significant reduction in product development effort & time-to-market: supply of new tools, methods and API standards will improve development efficiency and quality.
2. Reduction in market fragmentation in client devices, advancing Europe's move towards building the networks of tomorrow: by delivering improved support for open platforms, OMP-supported open platforms will be programmable by the wider open source community, without sacrifice of performance or functionality. Independent SW vendors, OEMs and network operators benefit from improvements in component re-use and deployment across the whole network of client user devices.
The diagram below shows how groups of users will directly and indirectly benefit from the objectives and outcomes of the OMP project.


[Diagram: how user groups benefit from the OMP tools & methods, advanced standards and reference platforms]
• Application suppliers: improved development flows; increased software re-use; support for a diverse range of platforms
• Service creators: users have new ways to create and share content
• Handset suppliers: reduced development effort; reduced time to market; greater opportunities for product differentiation; reduced risk in bringing new devices to market
• Network providers: innovative user services; optimised network usage; new business models for independent service creators; increased flexibility in provision of devices to users
• Community: creation of new markets for supply and distribution of enriched media experiences; every user device has access; user experience is optimised to the device and network capabilities

At the time of publishing, the OMP consortium is unaware of any other open, integrated solution to the problems addressed by OMP. OMP puts European industry at the forefront of tools and multimedia development for next-generation platforms.


Dissemination and Exploitation of Results
The following deliverables are publicly released and available from the OpenMediaPlatform website www.openmediaplatform.eu
REPORTS
D2.1 OMP Component Based SE Tool Specification (Dec. 2008)
D2.2 OMP Split Compilation Tools (static and dynamic compilers) Specification (Dec. 2008)
D4.4 OMP Prototype Evaluation report (Dec. 2009)
D5.4 ICT Collaboration Report 1 (Dec. 2008)
D5.6 ICT Collaboration Report 2 (Dec. 2009)
PROTOTYPES
D2.4 OMP Component Based SE Tool (June 2009)
D2.5 OMP Split Compilation Tools (static and dynamic compilers) (June 2009)
D2.7 OMP/OpenMAX Component Schema (June 2009)
D3.4 Bellagio OpenSource Implementation version 1 (Dec. 2008)
D3.5 Integration to Gstreamer multimedia framework version 1
D3.6 Bellagio OpenSource implementation version 2 (Dec. 2009)
D3.7 Integration to Gstreamer multimedia framework version 2 (Dec. 2009)
OMP has contributed results to a number of open-source projects, to increase the role of European developers in these strategic free-software initiatives. These projects are:
• the ILDJIT project, an optimized dynamic compiler for the CIL byte code,
• the FRACTAL/CECILIA project, a component model and its development environment on top of C,
• the Bellagio project, an open-source implementation of the OpenMAX IL API,
• and the GstOpenMAX project, a GStreamer plug-in which allows the use of OpenMAX IL multimedia components inside the GStreamer multimedia framework.
ILDJIT


http://sourceforge.net/projects/ildjit/


Intermediate Language Distributed Just-In-Time (ILDJIT) is a virtual machine for CIL bytecode. It is a parallel, easily extensible dynamic compiler thanks to its plugin-based internal structure. ILDJIT includes a JIT, a DLA (dynamic look-ahead) and an AOT compiler.

The ILDJIT dynamic compiler can be installed via the AIKA GUI tool, available from the following URL: http://prdownloads.sourceforge.net/ildjit/AIKA-0.0.4.tar.bz2 Direct download of ILDJIT is available at: http://ildjit.sourceforge.net

FRACTAL/CECILIA

http://fractal.ow2.org/cecilia-site/current/

Cecilia is a development environment for programming Fractal components on top of the C programming language.

The Cecilia environment includes a toolchain for building software systems from their architecture descriptions. This toolchain is implemented on top of the original Fractal ADL toolchain, adding the possibility to perform sophisticated code generation tasks as part of the toolchain execution.


BELLAGIO

http://omxil.sourceforge.net

Bellagio is an open-source implementation of the OpenMAX IL API that runs on Linux PC, including:
• A shared library with the IL core and a "reference" OpenMAX component
• A number of OpenMAX components which pass the Khronos conformance tests

It is intended to show the usage of the IL API and to allow people to start developing components.

OMP significantly contributed to the 0.9.1.1 through 0.9.2.1 releases. The main addition to 0.9.2.1 comes directly from OMP: optional support for component quality levels has been added. The global setting of a quality level for a component has been added; optionally a component can provide support for Set/GetQualityLevel functions. These functions globally set a group of parameters that represents a certain quality level. If this setting causes a change in any output port parameter, the PortSettingsChanged event is generated. The quality levels allow the IL client to manage the resource usage of components according to a quality-of-service policy.
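As a sketch of how an IL client might drive this quality-level extension, the fragment below sets a global quality level on a component. OMX_GetExtensionIndex and OMX_SetConfig are standard OpenMAX IL calls, but the extension name string, the index and the configuration structure are assumptions made for illustration and are not the exact symbols exported by Bellagio.

/* Hedged sketch: driving a hypothetical quality-level extension from an IL
 * client. Extension name and structure below are assumptions. */
#include <OMX_Core.h>
#include <OMX_Component.h>

typedef struct {                      /* hypothetical quality-level config */
    OMX_U32 nSize;
    OMX_VERSIONTYPE nVersion;
    OMX_U32 nQualityLevel;            /* 0 = highest quality (assumption) */
} CONFIG_QUALITYLEVELTYPE;

OMX_ERRORTYPE set_quality(OMX_HANDLETYPE hDecoder, OMX_U32 level)
{
    OMX_INDEXTYPE idx;
    /* Look up the vendor extension index by name (name is an assumption). */
    OMX_ERRORTYPE err = OMX_GetExtensionIndex(hDecoder,
                              (OMX_STRING)"OMX.ST.index.config.qualitylevel", &idx);
    if (err != OMX_ErrorNone)
        return err;

    CONFIG_QUALITYLEVELTYPE cfg;
    cfg.nSize = sizeof(cfg);
    cfg.nVersion.s.nVersionMajor = 1;
    cfg.nVersion.s.nVersionMinor = 1;
    cfg.nVersion.s.nRevision = 0;
    cfg.nVersion.s.nStep = 0;
    cfg.nQualityLevel = level;

    /* A lower quality level frees DSP/CPU resources; if output port settings
       change as a result, the component raises a PortSettingsChanged event. */
    return OMX_SetConfig(hDecoder, idx, &cfg);
}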

GSTOPENMAX

http://freedesktop.org/wiki/GstOpenMAX

A GStreamer plug-in that allows communication with OpenMAX IL components

OPENMAX


http://www.khronos.org/openmax/


OMP has collaborated inside the Khronos OpenMAX standardisation group through participation in OpenMAX working group conference calls and face-to-face meetings. Submissions have been made in relation to component QoS and standardised SVC component definitions. The OpenMAX standard can be downloaded from the Khronos website. Participation in standardisation activities is open to companies and academic partners through membership of the Khronos organisation.
MEDIADESIGNER
http://www.incoras.com/
Incoras MediaDesigner is a commercially available Eclipse-based tool for OpenMAX IL design, test and integration. The product was formally launched in 2009, with a number of revisions released during the year incorporating new contributions from the OMP project. Khronos news: Incoras announces MediaDesigner ( www.incoras.com ). The OpenMAX Schema developed under the OMP project is openly available and describes a machine-readable, standardised form for documenting media components. It represents a major step forward in the information exchange between tools, allowing future advances in the dynamic creation of services utilizing innovative tool 'pipelines'.


OpenMediaPlatform PROJECT FINAL REPORT Use and dissemination of foreground Grant Agreement number: 214009 Project acronym: OMP Project title: OpenMediaPlatform Funding Scheme: STREP Period covered: 1st January 2008 to 31st December 2009 Name of the scientific representative of the project's co-ordinator, Title and Organisation: Prof. Stefano Crespi-Reghizzi, Politecnico di Milano Tel: 0039 02 23993518 Fax: 0039 02 2399.3411 E-mail: [email protected] Project website address: www.openmediaplatform.eu


Publications at conferences and in journals have been one of the major instruments to disseminate OMP research results to the scientific community. This guarantees on the one hand that the research of OMP goes beyond the current state of the art, and facilitates on the other hand the possibility that the research can be carried on even after the project's completion. In particular, OMP research results in the area of tools technology, such as dynamic compilation, component-based software design and advanced parallelization techniques, as well as OMP research results in the area of embedded video signal processing, have been published at different conferences and in journals. Finally, some theoretical mathematical results on program transformations for automatic parallelization of sequential programs have been triggered by OMP activities.
JOURNAL PAPER PUBLICATIONS
Heiko Hübert, Benno Stabernack, "Profiling-Based Hardware/Software Co-Exploration for the Design of Video Coding Architectures", IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Algorithm/Architecture Co-Exploration of Visual Computing, Volume 19, Issue 11, Nov. 2009, pages 1680-1691.
Simone Campanoni: "A parallel dynamic compiler for CIL bytecode". ACM SIGPLAN Notices 43(4): 11-20 (2008).
S. Campanoni, A. Di Biagio, G. Agosta and S. Crespi Reghizzi. "ILDJIT: A parallel Virtual Execution System". To appear in Software - Practice & Experience (accepted June 2009).
CONFERENCE PUBLICATIONS
Simone Campanoni, Stefano Crespi-Reghizzi: "Traces of Control-Flow Graphs". Developments in Language Theory 2009: 156-169.
S. Campanoni, M. Sykora, G. Agosta and S. Crespi Reghizzi. "Dynamic Lookahead Compilation". In Proceedings of the Compiler Construction conference (CC 2009), York, March 2009.
M. Tartara, S. Campanoni, G. Agosta and S. Crespi Reghizzi. "Just-In-Time compilation on ARM processors". In proceedings of the fourth workshop on the Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS 2009), Genova, July 2009.
Vijay Janapa Reddi, Simone Campanoni, Meeta Sharma Gupta, Michael D. Smith, Gu-Yeon Wei, David M. Brooks: "Software-assisted hardware reliability: abstracting circuit-level challenges to the software stack". DAC 2009: 788-793.
Benno Stabernack, Jens Brandenburg, Heiko Hübert, Jan Möller, "An Experimental Mobile Terminal for Scalable Video Coding Applications using a H.264/AVC Decoder SOC", Proc. 13th IEEE International Symposium on Consumer Electronics, May 25-28, 2009, Mielparque-Kyoto, Kyoto, Japan.
Fabien Gaud, Sylvain Genevès, Fabien Mottet. "Vol de tâches efficaces pour systèmes événementiels multi-coeurs" (efficient work stealing for multi-core event-driven systems). Conférence Française en Systèmes d'Exploitation (ACM SIGOPS French Conference on Operating Systems), Toulouse, France, September 2009. Best paper award.
WHITE PAPER PUBLICATIONS
Incoras presented a white paper at the ARM Techcon3 conference. The details are listed here, and the presentation of the paper is available as a video from the URL listed below.
Title: Multimedia Development....Easy as 1.2.3!


Date: Wednesday, October 21, 2009
Time: 9:00 AM PDT
Duration: 00:38:36
Link: http://www.armtechcon3.com/2009/
THESES
Michele Tartara. "ARM Code Generation and Optimization in a Dynamic Compiler", Master Thesis in Ingegneria Informatica, Politecnico di Milano, April 2009.
Stefano Anelli, "Method specialization for the Common Intermediate Language in a dynamic compiler", Master Thesis in Ingegneria Informatica, Politecnico di Milano, July 2009.
Ettore Speziale, "Multi-threading support in ILDJIT dynamic compiler", Master Thesis in Ingegneria Informatica, Politecnico di Milano, July 2009.
Massimiliano Nanni and Roberto Molteni, "Progetto e implementazione di librerie interne per il supporto dello standard ECMA-335 nel compilatore dinamico ILDJIT", Bachelor Thesis in Ingegneria Informatica, Politecnico di Milano, July 2008.
Marcello Boiardi, "Scelta automatica di algoritmi di ottimizzazione di codice all'interno del compilatore dinamico ILDJIT", Bachelor Thesis in Ingegneria Informatica, Politecnico di Milano, April 2009.
Massimiliano Grandi, "Supporto delle caratteristiche di introspezione dello standard ECMA-335 nel compilatore dinamico ILDJIT", Bachelor Thesis in Ingegneria Informatica, Politecnico di Milano, July 2009.
Demonstration and presentation of OMP technology at product shows, exhibitions and workshops offers the possibility to introduce OMP project results to a broader audience, with the aim of encouraging interest in the new products and tools provided by OMP. The following list contains all OMP activities at product shows and exhibitions, and presentations to different industrial and academic partners.
PRODUCT SHOWS/EXHIBITIONS AND PRESENTATIONS
HHI presented OMP project results at IBC2009 [5] from 10th to 14th September 2009. The IBC is Europe's largest professional trade show for content providers, equipment manufacturers, technical associates and others in the broadcasting industry. The OMP presentation therefore focussed especially on the developments in the area of embedded video signal processing.
Incoras rented a booth at the ARM Techcon3 conference in Santa Clara, California, USA, 26-28 October 2009. This is a key technical conference for embedded systems as it focuses primarily on the ubiquitous ARM family of processors, which dominate the mobile computing market. At this conference we presented and demonstrated the MediaDesigner system-level design tools as developed under the OMP project. The demonstration focused on the integration and test of media components on ARM's NEON. The key message we presented at the conference was that savings in development time of up to 90% are now possible using OMP technology and MediaDesigner.
Jean-Bernard Stefani. Operating Systems for Multicore Platforms. STMicroelectronics/INRIA Platform 2012 workshop, Grenoble, France, October 19th 2009.
The ILDJIT compiler has been presented to both industrial and academic audiences, including ALCATEL Italy, Unicredit (bank), Università di Palermo, and Harvard University (CS Dept.). The presentation of ILDJIT at Politecnico di Milano in June 2008 by S. Campanoni won the first award at the Ph.D. day.
INRIA Alchemy presented its OMP-related research on advanced parallelization techniques and the BLIS framework to different academic and industrial partners:


• Presented parallel syntax at the ACACES 08 poster session.
• Presented the OMP approach to colleagues from U. Delaware, working on adaptive parallelization on a manycore distributed-memory platform (Cyclops64).
• Presented the DKU pattern at the ACACES 09 poster session.
• Presented the BLIS framework to STMicroelectronics' AST division in the context of ST's Platform 2012 project.
• Presented the BLIS framework to Prof. Keutzer and students at the ParLab at UC Berkeley.


OpenMediaPlatform PROJECT FINAL REPORT Societal implications Grant Agreement number: 214009 Project acronym: OMP Project title: OpenMediaPlatform Funding Scheme: STREP Period covered: 1st January 2008 to 31st December 2009 Name of the scientific representative of the project's co-ordinator, Title and Organisation: Prof. Stefano Crespi-Reghizzi, Politecnico di Milano Tel: 0039 02 23993518 Fax: 0039 02 2399.3411 E-mail: [email protected] Project website address: www.openmediaplatform.eu


Report on societal implications
Replies to the following questions will assist the European Commission to obtain statistics and indicators on societal and socio-economic issues addressed by projects. The questions are arranged in a number of key themes. As well as producing certain statistics, the replies will also help identify those projects that have shown a real engagement with wider societal issues, and thereby identify interesting approaches to these issues and best practices. The replies for individual projects will not be made public.

A

General Information (completed automatically when the Grant Agreement number is entered).

Grant Agreement Number: 214009
Title of Project: OpenMediaPlatform

Name and Title of Coordinator:

B

Ethics

1.

Did you have ethicists or others with specific experience of ethical issues involved in the project?   No
2. Please indicate whether your project involved any of the following issues (tick box):
INFORMED CONSENT
• Did the project involve children?   No
• Did the project involve patients or persons not able to give consent?   No
• Did the project involve adult healthy volunteers?   No
• Did the project involve Human Genetic Material?   No
• Did the project involve Human biological samples?   No
• Did the project involve Human data collection?   No
RESEARCH ON HUMAN EMBRYO/FOETUS
• Did the project involve Human Embryos?   No
• Did the project involve Human Foetal Tissue / Cells?   No
• Did the project involve Human Embryonic Stem Cells?   No
PRIVACY
• Did the project involve processing of genetic information or personal data (e.g. health, sexual lifestyle, ethnicity, political opinion, religious or philosophical conviction)?   No
• Did the project involve tracking the location or observation of people?   No
RESEARCH ON ANIMALS
• Did the project involve research on animals?   No
• Were those animals transgenic small laboratory animals?   No
• Were those animals transgenic farm animals?   No
• Were those animals cloning farm animals?   No
• Were those animals non-human primates?   No
RESEARCH INVOLVING DEVELOPING COUNTRIES
• Use of local resources (genetic, animal, plant etc)?   No
• Benefit to local community (capacity building i.e. access to healthcare, education etc)?   No
DUAL USE
• Research having potential military / terrorist application?   No

C

Workforce Statistics

3

Workforce statistics for the project: Please indicate in the table below the number of people who worked on the project (on a headcount basis).

Type of Position                               Number of Women   Number of Men
Scientific Coordinator                         0                 2
Work package leader                            1                 7
Experienced researcher (i.e. PhD holders)      0                 13
PhD Students                                   0                 8
Other                                          0                 12

4

How many additional researchers (in companies and universities) were recruited specifically for this project?

Of which, indicate the number of men: 7 Of which, indicate the number of women: 0

OpenMediaPlatform 214009 – February 2010

59

D

Gender Aspects

5

Did you carry out specific Gender Equality Actions under the project?   No

6

Which of the following actions did you carry out and how effective were they? (rated from "Not at all effective" to "Very effective")
☐ Design and implement an equal opportunity policy
☐ Set targets to achieve a gender balance in the workforce
☐ Organise conferences and workshops on gender
☐ Actions to improve work-life balance
☐ Other:
(no actions were ticked)

7

Was there a gender dimension associated with the research content – i.e. wherever people were the focus of the research as, for example, consumers, users, patients or in trials, was the issue of gender considered and addressed?   No

E

Synergies with Science Education

8

Did your project involve working with students and/or school pupils (e.g. open days, participation in science festivals and events, prizes/competitions or joint projects)?   No

9

Did the project generate any science education material (e.g. kits, websites, explanatory booklets, DVDs)?   No

F

Interdisciplinarity

10

Which disciplines (see list below) are involved in your project?
Main discipline¹: 2.2
Associated discipline¹:
Associated discipline¹:

G

Engaging with Civil society and policy makers

11a

Did your project engage with societal actors beyond the research community? (if 'No', go to Question 14)

Yes / No

¹ Insert number from list below (Frascati Manual)


11b If yes, did you engage with citizens (citizens' panels / juries) or organised civil society (NGOs, patients' groups etc.)? { No { Yes- in determining what research should be performed { Yes - in implementing the research { Yes, in communicating /disseminating / using the results of the project {

Yes

11c In doing so, did your project involve actors whose role is mainly to { No organise the dialogue with citizens and organised civil society (e.g. professional mediator; communication company, science museums)? 12 Did you engage with government / public bodies or policy makers (including international organisations) { { { {

No Yes- in framing the research agenda Yes - in implementing the research agenda Yes, in communicating /disseminating / using the results of the project

13a Will the project generate outputs (expertise or scientific advice) which could be used by policy makers? { Yes – as a primary objective (please indicate areas below- multiple answers possible) { Yes – as a secondary objective (please indicate areas below - multiple answer possible) { No 13b If Yes, in which fields? Agriculture Audiovisual and Media Budget Competition Consumers Culture Customs Development Economic and Monetary Affairs Education, Training, Youth Employment and Social Affairs

Energy Enlargement Enterprise Environment External Relations External Trade Fisheries and Maritime Affairs Food Safety Foreign and Security Policy Fraud Humanitarian aid

Human rights Information Society Institutional affairs Internal Market Justice, freedom and security Public Health Regional Policy Research and Innovation Space Taxation Transport

13c If Yes, at which level? { Local / regional levels { National level { European level { International level


H

Use and dissemination

14

How many Articles were published/accepted for publication in peer-reviewed journals?

3

To how many of these is open access² provided?
How many of these are published in open access journals?

0

How many of these are published in open repositories?

0

To how many of these is open access not provided?   3
Please check all applicable reasons for not providing open access:
☐ publisher's licensing agreement would not permit publishing in a repository
☐ no suitable repository available
☒ no suitable open access journal available
☐ no funds available to publish in an open access journal
☐ lack of time and resources
☐ lack of information on open access
☐ other: ……………

15

How many new patent applications ('priority filings') have been made?
("Technologically unique": multiple applications for the same invention in different jurisdictions should be counted as just one application of grant.)   0

16

Indicate how many of the following Intellectual Property Rights were applied for (give number in each box):
Trademark: 0
Registered design: 0
Other: 0

17

How many spin-off companies were created / are planned as a direct result of the project?   0

Indicate the approximate number of additional jobs in these companies:

18 Please indicate whether your project has a potential impact on employment, in comparison with the situation before your project: ‰ In small & medium-sized enterprises ‰ Increase in employment, or ‰ Safeguard employment, or In large companies ‰ ‰ None of the above / not relevant to the project ‰ Decrease in employment, Difficult to estimate / not possible to quantify

19

For your project partnership please estimate the employment effect resulting directly from your participation, in Full Time Equivalent (FTE = one person working fulltime for a year) jobs:   Indicate figure:
☒ Difficult to estimate / not possible to quantify

² Open Access is defined as free of charge access for anyone via the internet.


I

Media and Communication to the general public

20

As part of the project, were any of the beneficiaries professionals in communication or media relations?   No

21

As part of the project, have any beneficiaries received professional media / communication training / advice to improve communication with the general public?   No

22

Which of the following have been used to communicate information about your project to the general public, or have resulted from your project?
– Press Release
– Media briefing
– TV coverage / report
– Radio coverage / report
– Brochures / posters / flyers
– DVD / Film / Multimedia
– Coverage in specialist press
– Coverage in general (non-specialist) press
– Coverage in national press
– Coverage in international press
– Website for the general public / internet
– Event targeting general public (festival, conference, exhibition, science café)

23

In which languages are the information products for the general public produced?
– Language of the coordinator
– Other language(s): English

Question F-10: Classification of Scientific Disciplines according to the Frascati Manual 2002 (Proposed Standard Practice for Surveys on Research and Experimental Development, OECD 2002):

FIELDS OF SCIENCE AND TECHNOLOGY

1. NATURAL SCIENCES
1.1 Mathematics and computer sciences [mathematics and other allied fields: computer sciences and other allied subjects (software development only; hardware development should be classified in the engineering fields)]
1.2 Physical sciences (astronomy and space sciences, physics and other allied subjects)
1.3 Chemical sciences (chemistry, other allied subjects)
1.4 Earth and related environmental sciences (geology, geophysics, mineralogy, physical geography and other geosciences, meteorology and other atmospheric sciences including climatic research, oceanography, vulcanology, palaeoecology, other allied sciences)
1.5 Biological sciences (biology, botany, bacteriology, microbiology, zoology, entomology, genetics, biochemistry, biophysics, other allied sciences, excluding clinical and veterinary sciences)

2. ENGINEERING AND TECHNOLOGY
2.1 Civil engineering (architecture engineering, building science and engineering, construction engineering, municipal and structural engineering and other allied subjects)
2.2 Electrical engineering, electronics [electrical engineering, electronics, communication engineering and systems, computer engineering (hardware only) and other allied subjects]
2.3 Other engineering sciences (such as chemical, aeronautical and space, mechanical, metallurgical and materials engineering, and their specialised subdivisions; forest products; applied sciences such as geodesy, industrial chemistry, etc.; the science and technology of food production; specialised technologies of interdisciplinary fields, e.g. systems analysis, metallurgy, mining, textile technology and other applied subjects)

3. MEDICAL SCIENCES
3.1 Basic medicine (anatomy, cytology, physiology, genetics, pharmacy, pharmacology, toxicology, immunology and immunohaematology, clinical chemistry, clinical microbiology, pathology)
3.2 Clinical medicine (anaesthesiology, paediatrics, obstetrics and gynaecology, internal medicine, surgery, dentistry, neurology, psychiatry, radiology, therapeutics, otorhinolaryngology, ophthalmology)
3.3 Health sciences (public health services, social medicine, hygiene, nursing, epidemiology)

4. AGRICULTURAL SCIENCES
4.1 Agriculture, forestry, fisheries and allied sciences (agronomy, animal husbandry, fisheries, forestry, horticulture, other allied subjects)
4.2 Veterinary medicine

5. SOCIAL SCIENCES
5.1 Psychology
5.2 Economics
5.3 Educational sciences (education and training and other allied subjects)
5.4 Other social sciences [anthropology (social and cultural) and ethnology, demography, geography (human, economic and social), town and country planning, management, law, linguistics, political sciences, sociology, organisation and methods, miscellaneous social sciences and interdisciplinary, methodological and historical S&T activities relating to subjects in this group. Physical anthropology, physical geography and psychophysiology should normally be classified with the natural sciences].

6. HUMANITIES
6.1 History (history, prehistory and history, together with auxiliary historical disciplines such as archaeology, numismatics, palaeography, genealogy, etc.)
6.2 Languages and literature (ancient and modern)
6.3 Other humanities [philosophy (including the history of science and technology), arts, history of art, art criticism, painting, sculpture, musicology, dramatic art excluding artistic "research" of any kind, religion, theology, other fields and subjects pertaining to the humanities, methodological, historical and other S&T activities relating to the subjects in this group]
