RESEARCH PAGE
Omesh Tickoo
Thanks for visiting this page. I manage the Computer Vision and Deep Learning Software group at Intel. A systems architect by training, my passion is converting data to actionable knowledge on optimized systems. I also love to teach and tinker with technology in my spare time. I hope you enjoy browsing my home page.
omesh @ tickoo . net
Research Interests
Applied Multimodal Scene Understanding
My technology and research interests span multiple areas in the field of multimodal understanding. I am fascinated by the idea of generating knowledge from data and by the end-to-end problems involved in making knowledge extraction practical. A computer systems architect by profession, I spend much of my time working with my team to devise and implement new and optimal ways to make machines smarter. This page outlines some of the current projects in my team at Intel. For details on these or earlier projects, please contact me.
Scene Reconstruction
This project aims to provide optimal, scalable, and distributed scene representations at the edge. The main idea is a space-optimized implementation of 4D scene representations built in real time from multiple cameras. The representations provide spatial information at scalable granularities (point clouds, voxels, surfaces) with temporal changes and surface properties embedded. Usages include IoT application areas such as autonomous systems and robotics, and smart spaces involving human-machine interactions.
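To make the idea concrete, below is a minimal sketch of a scene representation served at scalable granularity. The class and method names (SceneVolume, to_voxels) are illustrative rather than our actual implementation, and the sketch assumes numpy with point clouds stored as (N, 3) arrays.

```python
# Illustrative sketch only: serve the same scene at different granularities.
import numpy as np

class SceneVolume:
    def __init__(self, points: np.ndarray, timestamp: float):
        self.points = points        # raw point cloud: finest granularity
        self.timestamp = timestamp  # temporal dimension of the 4D model

    def to_voxels(self, voxel_size: float) -> np.ndarray:
        """Quantize points into a coarser voxel grid for constrained clients."""
        keys = np.floor(self.points / voxel_size).astype(np.int64)
        centers = (np.unique(keys, axis=0) + 0.5) * voxel_size
        return centers  # one representative center per occupied voxel

# A mobile client might request 10 cm voxels; a workstation the raw points.
scene = SceneVolume(np.random.rand(100_000, 3) * 5.0, timestamp=0.0)
coarse = scene.to_voxels(voxel_size=0.1)
```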
Localization and Navigation
Our low-power multimodal localization and navigation solutions are well suited to implementation on embedded platforms. The algorithms are designed to be hardware optimized as well as capable of multi-agent coordination.
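As an illustration of the kind of lightweight fusion such platforms allow, here is a hedged sketch of a complementary filter that blends high-rate relative odometry with occasional absolute position fixes. The blending weight and the sensor mix are assumptions for illustration, not our actual design.

```python
# Illustrative sketch: dead-reckon with odometry, softly correct toward a fix.
from typing import Optional

def fuse_position(x_prev: float, odom_delta: float,
                  absolute_fix: Optional[float], alpha: float = 0.9) -> float:
    """One axis of a complementary filter; alpha is an assumed tuning weight."""
    x_pred = x_prev + odom_delta              # integrate relative motion
    if absolute_fix is None:
        return x_pred                         # no absolute fix this cycle
    return alpha * x_pred + (1 - alpha) * absolute_fix  # blend in correction

x = 0.0
for odom, fix in [(0.10, None), (0.11, None), (0.09, 0.35)]:
    x = fuse_position(x, odom, fix)
```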
Contextual Learning
We are building an end-to-end learning system that identifies new information based on context, learns at the edge, and provides optimal algorithm implementations in hardware and software. The research aims to tackle the problem of applied unsupervised learning in the field. The outputs from this work are algorithms and implementations optimized for Intel-based systems.
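For illustration, a minimal sketch of context-driven novelty detection: an embedding is flagged as new information when it lies far, in cosine distance, from everything seen in the current context. The threshold, the context bank, and the embedding source are assumptions.

```python
# Illustrative sketch: flag embeddings far from the current context as novel.
import numpy as np

def is_novel(embedding: np.ndarray, context_bank: np.ndarray,
             threshold: float = 0.5) -> bool:
    """True if the nearest context embedding is farther than the threshold."""
    if context_bank.size == 0:
        return True  # empty context: everything is new
    sims = context_bank @ embedding / (
        np.linalg.norm(context_bank, axis=1) * np.linalg.norm(embedding))
    return float(1.0 - sims.max()) > threshold  # cosine distance to nearest

# Samples flagged as novel could then be queued for learning at the edge.
bank = np.eye(3)  # embeddings already seen in this context (assumed)
flag = is_novel(np.array([0.2, 0.1, 0.9]), bank)
```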
Multimodal Interactive Scene Understanding

Interactive scene understanding allows artificial agents to visually convey their understanding to humans. The components of our research include identifying the most visually distinctive, important, and informative regions in an image and dynamically configuring the scene description pipeline (describing the context of the scene rather than a single class label). Other aspects include generating visual output from text and NLP-based queries to express the agent's understanding, enabling applications like visual training data generation and visual storybook creation.
For end-to-end deployment, we scale the granularity of scene understanding to match the platform of operation.
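The sketch below illustrates both ideas: rank candidate regions by an informativeness score, then size the description pipeline to the platform. The scores, platform budgets, and region descriptors are illustrative assumptions rather than our production pipeline.

```python
# Illustrative sketch: pick the most informative regions, sized to the platform.
from typing import List, Tuple

Region = Tuple[float, str]  # (informativeness score, region descriptor)

def configure_pipeline(regions: List[Region], platform: str) -> List[str]:
    # Assumed per-platform budgets for how many regions get described.
    budget = {"edge-sensor": 1, "edge-gateway": 3, "workstation": 8}[platform]
    top = sorted(regions, reverse=True)[:budget]   # most informative first
    return [descriptor for _, descriptor in top]   # regions to describe in context

# A tiny device might caption one region; a workstation, the whole scene.
selected = configure_pipeline([(0.9, "person handing box"), (0.4, "shelf"),
                               (0.7, "forklift approaching")], "edge-gateway")
```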
Hardware Acceleration and Interfacing
To enable machine intelligence at the edge, it is important that the algorithms and systems are able to operate under power and compute constraints. Our hardware acceleration efforts aim to provide complex data analysis and understanding services at the edge. The research involves software-hardware co-design to enable operation under tight power budgets and, in special cases, ultra-low-power operation on energy-harvesting devices. Optimization methods include low-power hardware design, multimodality, and hardware reuse.

On the systems side, we have developed solutions that optimize hardware accelerator interfacing with systems end-to-end, including optimized operation over the network.
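As a small example of accelerator interfacing on Intel platforms, the sketch below picks an available device and compiles a model for it, assuming OpenVINO's Python API (openvino.runtime); the model file and the device preference order are illustrative.

```python
# Illustrative sketch, assuming OpenVINO's Python (2.0) API is installed.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # hypothetical IR model file
# Prefer an available accelerator, fall back to CPU; preference order assumed.
device = next((d for d in ("NPU", "GPU") if d in core.available_devices), "CPU")
compiled = core.compile_model(model, device)
```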
E2E Optimization and Deployment
Our efforts in this area aim to provide an easy, modular, and scalable methodology for creating distributed application pipes from modular capability blocks. The framework implements simple plug-and-play operation of application pipes with the ability to execute modules over the network. Standard APIs enable different resource managers to deploy the applications optimally based on platform and traffic characteristics.
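To illustrate, here is a minimal sketch of composing an application pipe from capability blocks. The block protocol (one callable per stage) is an assumption; a real deployment would add the resource-manager APIs and over-the-network execution described above.

```python
# Illustrative sketch: compose modular capability blocks into one pipe.
from typing import Any, Callable, List

Block = Callable[[Any], Any]

def make_pipe(blocks: List[Block]) -> Block:
    """Chain capability blocks into a single application pipe."""
    def pipe(frame: Any) -> Any:
        for block in blocks:        # each block could run on a different node
            frame = block(frame)
        return frame
    return pipe

# e.g. decode -> detect -> describe, each swappable without touching the rest
pipe = make_pipe([lambda f: f + "|decoded", lambda f: f + "|detected",
                  lambda f: f + "|described"])
result = pipe("frame0")
```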