3 Major Projects:
Research Title: Advanced Video Optimiser
Description: Video has become one of the most important and popular media types on platforms worldwide, including video conferencing, Smart TV, and streaming services on mobile and the Web, as it carries the richest audio-visual content. The proliferation of emerging technologies for HD video recording, faster broadband, and higher-resolution screens has made consumers acutely sensitive to perceived quality. Users are less and less tolerant of poor-quality videos, and this is often the deal-breaker that determines whether a future service wins or loses the market. This research builds on our current (patent-pending) technologies for ROI-based smart video optimisation strategies that leverage the state-of-the-art H.264/AVC encoder.
The key aim for this horizon is to expand that capability into a robust framework for Smart Adaptation of video for multi-channel delivery across various types of applications and content domains, focusing on:
1) pre-processing strategies to optimise the input video quality prior to compression,
2) ROI-based quality enhancement and zooming enabling content-and-device aware adaptation to minimise bandwidth occupation, and
3) establishment of a user-acceptability-based prediction model to maximise user experience.
The advanced video quality optimisation will benefit both video providers and users by making the most of storage space (i.e. minimising storage waste), network load (i.e. minimising congestion due to video streaming), and platform capabilities (e.g. retina displays).
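As a purely illustrative sketch of the ROI-based idea above (not the patent-pending method itself; the function name and QP offsets are assumptions), an encoder can spend fewer bits on background regions by assigning a higher quantisation parameter outside the region of interest:

```python
import numpy as np

def roi_qp_map(roi_mask, base_qp=26, roi_delta=-4, bg_delta=+6):
    """Build a per-macroblock QP map: lower QP (higher quality)
    inside the region of interest, higher QP in the background.
    H.264 QP values are clamped to the valid range 0..51."""
    qp = np.where(roi_mask, base_qp + roi_delta, base_qp + bg_delta)
    return np.clip(qp, 0, 51)

# 4x4 grid of macroblocks; the centre 2x2 is the ROI (e.g. a face).
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
qp = roi_qp_map(mask)  # 22 inside the ROI, 32 in the background
```

The bit savings in the background can then be reinvested in the ROI at the same overall bandwidth.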
People: Dian Tjondronegoro, Ivan, Wei
Research Title: Multi-Modal Sentiment Analysis and Summarisation
Description: With the rapid growth of media production and user-generated content in blogs, social media, and webpages, automatic sentiment analysis can help to summarise the key topics that are most widely covered and/or associated with people, and identify people’s feelings about these topics (e.g. what aspects make customers dislike a new product?). Current efforts in this field mainly focus on text or video alone, while it is worthwhile exploring the fusion of multi-modal information, such as image and audio, for more accurate performance. This project will build on our current technologies (“Scoop” and video-based emotion classification), expanding them with key additional information from different modalities, including audio (e.g. a raised vocal pitch can indicate anger, and soundtrack music can indicate happy or exciting moments), and combining them using the “video and social media analysis of players’ popularity and sentiment” framework built for summarising Australian Open 2010 tennis video [Tjondronegoro et al., WACV 2011]. This will ultimately lead to an integrated multi-modal system that enables more accurate sentiment analysis across a wide range of media and is ready for hot-topic summarisation.
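A minimal sketch of the fusion idea, assuming a simple late-fusion scheme over hypothetical per-modality sentiment scores in [-1, 1] (the function, weights, and scores are illustrative placeholders, not the project's actual model):

```python
def fuse_sentiment(scores, weights=None):
    """Late fusion: weighted average of per-modality sentiment scores
    in [-1, 1] (e.g. text, audio, visual). In practice the weights
    would be learned from validation data; uniform is a placeholder."""
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# e.g. mildly negative text, strongly negative (raised-pitch) audio,
# and a neutral facial expression:
fused = fuse_sentiment({"text": -0.2, "audio": -0.8, "visual": 0.0})
```

Even this toy example shows why fusion helps: the audio channel pulls the overall sentiment negative where text alone would be ambiguous.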
Link: Demo http://220.127.116.11/VideoLibrary/Emotion/EmotionDemo.html
People: Dian Tjondronegoro, Ligang
Research Title: Exploring Mobile Technology Solutions for Real-Time Multimodal Transport Information and Ticketing System
Description: It is desirable to develop an integrated Mobile Transit Information and Payment System (M-TIPS) that can deliver real-time transit information to passengers and support ticket payment via mobile devices. This could bring potential benefits: letting passengers know the current conditions of the transit network, assisting them in making timely and informed decisions about transport choices across multiple modes, facilitating ticket purchasing, and helping transit agencies and governments achieve both economic and environmental goals. Recent progress on developing such systems, especially in Australia, has mainly focused on collecting, storing, and delivering real-time data feeds for a single type of vehicle within a specific region, with very few efforts addressing mobile payment. Aiming to provide a proof-of-concept M-TIPS prototype for Australian transit agencies, this project will cover the following:
a) A review of the state of the practice in delivering real-time traffic and travel information and supporting ticketing via mobile phones, both in Australia and internationally.
b) A general framework for developing M-TIPS that investigates the different components involved in implementing a real-world M-TIPS.
c) Demos that showcase the potential use of M-TIPS in real-world situations via mobile phones.
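The real-time information component in (b) and (c) could, for example, merge departure feeds from several transport modes into a single next-departures view. This is a hypothetical sketch under an assumed feed format, not an actual agency API:

```python
from datetime import datetime, timedelta

def next_departures(feeds, now, limit=3):
    """Merge real-time departure entries from multiple transport modes
    (bus, rail, ferry) and return the soonest upcoming services.
    Each entry is (mode, route, departure_time); services that have
    already departed are filtered out."""
    upcoming = [(t, mode, route)
                for mode, route, t in feeds if t >= now]
    return sorted(upcoming)[:limit]

now = datetime(2012, 1, 1, 8, 0)
feeds = [
    ("bus",   "333",      now + timedelta(minutes=5)),
    ("rail",  "Airtrain", now + timedelta(minutes=12)),
    ("ferry", "CityCat",  now + timedelta(minutes=3)),
    ("bus",   "111",      now - timedelta(minutes=2)),  # already gone
]
nxt = next_departures(feeds, now)  # ferry first, then bus, then rail
```

A real prototype would of course consume live vehicle-position feeds per mode; the point of the sketch is the cross-mode merge into one ranked list.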
Link: (Demo coming soon)
People: Dian Tjondronegoro, Ligang and Wei
Current Project Funding:
- (2009-present) Multi-channel Content Delivery and Mobile Personalisation (leading two work packages: “Rich Media Content Reuse for Multichannel Delivery” and “Mobile Personalization”). Funded by CRC Smart Services.
- (2011-2012) Exploring Mobile Technology Solutions for Real-Time Multimodal Transport Information and Ticketing System. Funded by CRC Rail
- (2011-2013) Leveraging mobile phone technology to influence responsible drinking behaviours. Chief Investigators: Prof J.C. Drennan, A/Prof J.P. Connor, Prof D.J. Kavanagh, Dr D. Tjondronegoro, Dr M. Fry, Dr J.A. Previte, Dr A.M. White. Funded by ARC Discovery.
- (2011-present) CRC Youth Health and Well Being. Details TBA.
Theme 1: Multimedia Analysis, Indexing, and Delivery
Projects:
- Automatic Re-Use of Rich Contents for Multi-Channel Delivery
- Web information extraction and aggregation
- Smart adaptation to optimise video quality for variable bit rates
- User-centered video quality assessment
- Field user study for understanding the user experience of videos on mobile phones
- User experience modelling for achieving high user satisfaction
- Video annotation and summarisation
- Facial Expression Recognition and Face Identification
- Sports video event detection
- Multimedia sensors:
- Video/Image analysis applications in multimedia environmental monitoring sensors
- Smart Video Optimiser
Venues to publish work:
- Multimedia: ACM MM, ICMR, ICONIP, WACV, ICCV, ECCV, ECAI
- Information retrieval/mining: SIGIR, WWW, CIKM, SIGKDD, SIGMOD, WISE, ICDM
Assoc. Prof. Dian Tjondronegoro
Dr. Ivan Himawan
Ms. Wei Song
Mr. Ligang Zhang
Mr. Andy Cher-Han Lau
Mr. Moh Edi Wibowo
Mr. Kaneswaran Anantharajah
Theme 2: Mobile and Web Applications and Interaction
- Exploring utility of smart phone technologies for supporting responsible alcohol drinking behaviour
- Mobile technologies for real-time multimodal transport information and ticketing
- Enhancing the experience of public transport through mobile-mediated interactions and services
- Exploring the design and effects of gamification on context-aware mobile content and services
- Sensing User’s Contextual Information for Smarter Mobile Interactions
- Mobile services for collaborative and mobile learning
- User-centered Model of Searching Image/Video on the Web
- Modeling Web Search Behavior Based on Users' Cognitive Styles
Venues to publish work:
- Mobile and Ubiquitous Computing: UBICOMP, MOBIHOC, WOWMOM, PERVASIVE
- Information Science and Human Interaction: CHI, HCI, SIGIR, ASIST
Assoc. Prof. Dian Tjondronegoro
Mr. Jack Tseng
Mr. Zachary Fitz-Walter
Mr. Tony Wang
Mr. Jimmy Ti
Mr. Guo Hao
Mr. Richard Stark