Bangalore
Mobile & Beyond
Agenda
Click on the sessions and demos below to view the content.
Conference Hours: 8:30 AM - 5:45 PM
Exhibit Hours: 12:15 - 1:15 PM, 2:45 - 5:45 PM
27 October
T4 MIPI Interfaces for Imaging
-
15:15 | (T4) Driving 4K High-Resolution Embedded Displays in New Applications with MIPI DSI and VESA DSC (ARM & Synopsys)
Download the presentation »
Abstract: The drive for high-resolution embedded displays is increasing in applications such as mobile, automotive and augmented/virtual reality (AR/VR). Automotive infotainment and display-based advanced driver assistance systems (ADAS) are becoming more sophisticated and require high-resolution displays to match the high-end mobile user experience. For AR/VR, high frame rates and multi-display architectures are deployed to meet the unique requirements of these applications, such as avoiding motion sickness and fitting an eyeglass form factor. Due to the increase in display quality and resolution, it is critical for designers to have access to solutions with more efficient processors and interfaces that deliver high performance, low data-transmission bandwidth and low power consumption. This presentation describes a display solution, co-developed by Synopsys and ARM, that improves visual quality and reduces overall SoC power consumption while meeting the requirements of 4K high-resolution embedded displays in new applications beyond mobile.
Hezi Saar is a Senior Staff Product Marketing Manager at Synopsys, responsible for its DesignWare HDMI, Mobile Storage and MIPI IP product lines. In addition, he co-chairs the MIPI Alliance Marketing Steering Group and sits on the MIPI Alliance board of directors. He brings more than 20 years of embedded systems experience in the semiconductor and electronics industries. Prior to joining Synopsys, Mr. Saar was responsible for Advanced Interface IP at Virage Logic, which Synopsys acquired in 2010. From 2004 to 2009, he served as senior product marketing manager leading Actel's flash field-programmable gate array (FPGA) product lines. Previously, he worked as a product marketing manager at ISD/Winbond and as a senior design engineer at RAD Data Communications. Mr. Saar holds a Bachelor of Science degree in computer science and economics from Tel Aviv University and an MBA from Columbia Southern University.
-
15:45 | (T4) Imaging Systems Design for Mixed Reality Scenarios (Intel)
Download the presentation »
Abstract: Mixed Reality promises to bring the next wave of experiences to the consumer and enterprise segments. Enabling this requires a combination of image capture modalities drawn from RGB, depth and beyond-visible cameras, with rolling/global shutters and different field-of-view (FOV) requirements, both on a head-mounted device (HMD) and in the environment. This talk ties the usage opportunities of the coming years to the imaging requirements of a Mixed Reality system for both consumer and enterprise markets. It dives into system design aspects such as the placement, location and types of image sensors; bandwidth requirements for tethered and wireless HMD scenarios; and the processing pipeline architecture, with the critical technology building blocks for multiple camera sources distributed between an HMD and a host system. Example end usages such as obstacle avoidance and avatar navigation with Mixed Reality headsets are used to provide a use-case decomposition view from capture to application. Finally, the talk addresses some of the opportunities for the MIPI community to drive the next wave of experiences with advanced image capture modalities, and the challenges to be solved to achieve them.
Prasanna Krishnaswamy is a Platform Architect in the Client Computing Group at Intel. His expertise is in imaging and computer vision systems architecture, tying imaging system designs to algorithmic image processing and vision blocks on the SoC and platform to deliver end-to-end imaging and vision use cases for mobile and PC-like form factors. At Intel, he has contributed to the development of platform imaging solutions in the areas of depth sensing and array cameras. Prior to Intel, he managed software stack development at Aptina Imaging for its Image Signal Processor product line. Prasanna holds a Master's degree in Electrical Engineering from the University of Arizona and has more than ten patents granted or pending.
-
16:15 | (T4) SoundWire Linux Subsystem: An introduction to protocol and Linux Subsystem (Intel)
Download the presentation »
Abstract: SoundWire is a robust, scalable, low-complexity, low-power, low-latency, two-pin (clock and data) multi-drop bus that allows the transfer of multiple audio streams and embedded control/commands. SoundWire provides synchronization capabilities and supports both PCM and PDM, multichannel data, and isochronous and asynchronous modes. It was ratified by MIPI in 2015. The presenters are upstreaming a Linux subsystem for SoundWire to the Linux kernel, and this session explores that new subsystem. The SoundWire bus is explained in detail, along with the core bus structures, the Master and Slave interfaces to the bus (APIs and structures), and the changes required for existing device drivers to add SoundWire support. The session also covers support for various architectures and the underlying enumeration methods. This presentation will help attendees get up to speed with this new subsystem and protocol.
Sanyog Kale is a Software Development Engineer at Intel with 7 years of industry experience. He has expertise in the audio domain and has worked on audio firmware, audio DSP engines and Linux audio drivers to deliver audio solutions for Intel platforms based on Android and Chrome OS. He is currently developing and upstreaming the SoundWire (MIPI standard) Linux subsystem, which includes the bus framework, Master driver and Slave driver.
Vinod Koul works in the Linux Audio group at Intel. He is involved in audio driver development and upstreaming for Intel platforms. He also wrote and maintains the ALSA compressed audio framework, and is the maintainer of the Linux DMA engine subsystem.
-
16:45 | (T4) Multiple CSI-2 Camera Solution Using FPGAs (Microsemi)
Download the presentation »
Abstract: The use of multi-camera systems is increasing rapidly in applications such as drones, automotive, robotics and machine vision. These applications use anywhere from 3 to 12 cameras, which capture and process images and often also transmit them in real time. The broad market has been leveraging the mobile ecosystem to design these systems, using application processors, image sensors, peripheral logic and popular camera interface standards such as MIPI CSI-2. In this presentation, Microsemi's Prem Arora discusses applications that use multiple cameras and explains how an FPGA-based solution can aggregate the cameras over the MIPI CSI-2 interface.
Prem Kumar Arora is the Director of Marketing for the SoC and FPGA group at Microsemi. His responsibilities include product management, solutions engineering, and ecosystem and partner development. Prior to his current role at Microsemi, Prem was the group manager of wireless products at Cypress Semiconductor. Prem holds a BE in Electronics and Communication Engineering and is an alumnus of INSEAD.
-
17:15 | (T4) Mobile Influenced Markets – Evolution of Camera and Display Uses (Lattice)
Download the presentation »
Abstract: Low-cost, low-power FPGAs with CSI-2/DSI interfaces have been enabling customers to leverage mobile image sensors, displays and processors for innovative applications in mobile-influenced markets, including consumer, medical, industrial and automotive. This presentation highlights the evolution of these types of applications, including the unique issues faced by system and software developers in mobile-influenced markets. Most mobile components are designed for specific use cases such as smartphones, tablets and laptops, so mobile system integration is generally straightforward. However, for innovative mobile-influenced applications (e.g., AR/VR and drones), the mobile components don't always fit together nicely. For example, a drone might need many more cameras than mobile application processors (APs) can directly accommodate. In addition, these cameras have different resolutions and frame rates, e.g., high frame rate and high resolution for videography, and lower resolution for collision avoidance. Within these mobile-influenced use cases there are common trends, such as interfacing consumer-, industrial- and automotive-grade image sensors to a mobile AP, synchronizing and aggregating multiple image sensors, interfacing to multiple displays, multiplexing between display sources, and interfacing to specialty displays. Connectivity and some video processing through programmable FPGAs often aid in the development of these systems, where the functionality was unforeseen or previously couldn't be realized. Examples of end applications and extrapolated architectural trends for several use cases will also be explored.
Tom Watzka is the Technical Mobile Solutions Architect at Lattice Semiconductor, with over 20 years of experience developing embedded products, including 7 years developing consumer mobile solutions. Currently, Watzka is the Marketing Product Manager for the CrossLink video bridge product line, focused on mobile and mobile-influenced markets. He received his BS degree from the Rochester Institute of Technology and his MS degree from Pennsylvania State University, and conducted his master's thesis on FFT algorithms.