|ACM SIG MM eNewsletter||ACM SIG MM webpage|
|Authors||Yennun Huang (1), Zhen Xiao (2), Yih-Farn Chen (1), Rittwik Jana (1), Michael Rabinovich (3), Bin Wei (1)|
|Affiliation||(1) AT&T Labs, USA; (2) IBM Research, USA; (3) Case Western Reserve University, USA|
The paper "When is P2P technology beneficial to IPTV services?" by Yennun Huang et al. was selected as the best paper of NOSSDAV '07 and presented in June of the same year at the workshop, held at the University of Illinois at Urbana-Champaign. The paper is currently available at the NOSSDAV '07 web site ( http://www.nossdav.org/2007) and will, in time, also be available from the ACM Digital Library.
In this paper, the authors investigate the value of using peer-to-peer techniques for the distribution of IPTV in provider-owned networks. Through this work, they put much of the recent research in this area into perspective: by examining peer-to-peer distribution within a provider-owned network, they arrive at a very different understanding of system efficiency. Their investigation breaks with the usual assumptions that the interconnecting Internet is a problem-free cloud and that only the resources of access networks are troublesome. As such, this paper is both an interesting read and relevant for the times.
The authors perform a cost-performance analysis of various experimental scenarios, taking into account the relative capacities of links in provider networks. They are thereby able to show that bottlenecks are likely to appear in provider networks when the population of peer-to-peer users in the network is dense and peer selection is random. In such a scenario, resource constraints in the provider network affect the performance of inner links of the distribution topology, voiding the assumption that only access links limit the performance of P2P systems.
In summary, this is a paper that is very appropriate for the time, and it puts recent work in this research area into perspective. In addition, it offers the reader several interesting ideas worth investigating further. We therefore highly recommend this paper to all our readers, in particular those with an interest in peer-to-peer streaming. So, in addition to winning the best paper award at the NOSSDAV '07 workshop, this paper has been selected as the first featured paper to appear in this newsletter.
|Institution||University of Klagenfurt|
|Advisors||Hermann Hellwagner, Susanne Boll|
|SIGMM member||Susanne Boll|
Multimedia content can be delivered to different terminals such as desktop PCs, PDAs, and mobile phones. There has been a significant amount of recent research on the adaptation of multimedia content to the actual usage context to ensure Universal Multimedia Access (UMA). In many situations, clients are unable to receive large audio-visual (A/V) data volumes in original quality because of resource limitations, e.g., limited network throughput. Most adaptive multimedia frameworks try to comply with the capabilities and constraints of the user's terminal but do not consider the user him-/herself. However, the question "How to adapt multimedia data in order to provide the best user-perceived utility?" is of central relevance and needs to be addressed.
This thesis focuses on answering this question while simultaneously taking technical issues such as terminal capabilities and network characteristics into account. The quality of the adaptation significantly depends on the type and information content of the media as well. For example, with respect to the Universal Multimedia Experience (UME), it may be preferable to adapt an action video in the spatial domain rather than in the temporal domain: the user would get a smaller video window but would still be able to fully enjoy rapid motion in action scenes. Moreover, especially in utility-based adaptation frameworks, the semantic experience of the content should be optimized under the given resource limitations. This thesis introduces a novel cross-modal adaptation decision model which uses detailed perceptual quality information and semantic quality estimation. The perceptual quality is a measure of how a user perceives the content and refers to the human visual system (HVS). The semantic quality, on the other hand, covers the designated information that the medium should convey to the user, e.g., the semantic content of a news report or the motion aspect of an action video.
Related existing utility model implementations rely on adding the weighted uni-modal perceptual qualities, a multiplicative term (the product of the uni-modal qualities), and specific constants in order to fit the subjective impressions of a group of test persons. A detailed analysis of this approach shows that the implementation of the model itself, as well as the weights and constants, are strongly dependent on the genre and on the subjects participating in the test. What is lacking, therefore, is a more generic model for estimating the total audio-visual utility which can be used for any genre and which takes the individual user's preferences into account. To close this gap, this thesis introduces a hybrid recommender system which tries to configure the open parameters of the generic utility model automatically. The advantages of this strategy are that explicit (expensive) subjective tests are not required and that the system learns automatically about the user's taste. This learning effect leads to a continuous improvement of the adaptation decision for a given use case.
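To make the shape of such a utility model concrete, here is a minimal sketch (not the thesis's actual implementation) of the additive-plus-multiplicative form described above. The weights and constant are the "open parameters" a recommender system could tune per user and genre; the values below are invented for illustration.

```python
# Hedged sketch of a generic cross-modal utility model: weighted uni-modal
# qualities, a multiplicative cross term, and a fitted constant. All
# parameter values are illustrative assumptions, not fitted results.

def audio_visual_utility(q_audio, q_video, w_a=0.3, w_v=0.5, w_av=0.15, c=0.05):
    """Estimate total audio-visual utility from uni-modal qualities in [0, 1]."""
    return w_a * q_audio + w_v * q_video + w_av * q_audio * q_video + c

# A hybrid recommender would adjust w_a, w_v, w_av, and c from user
# feedback instead of from expensive subjective tests.
print(audio_visual_utility(0.8, 0.9))
```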
The combination of (adapted) elementary streams which complies with the resource constraints and provides the best audio-visual utility value is the optimal solution for the consumer. Finding this best combination for the individual user within a reasonable (non-annoying) time frame is a challenging optimization problem. Within this thesis, four different algorithms for solving this optimization problem are developed and discussed. A detailed evaluation of the success of the overall approach, based on subjective tests, is presented as well.
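The optimization problem above can be pictured with a toy brute-force sketch: pick one adapted variant per elementary stream so that the combination fits a bandwidth budget and maximizes utility. This exhaustive search is an illustrative assumption, not one of the thesis's four algorithms, and the variant data and summed-quality "utility" are invented.

```python
# Hedged sketch: exhaustive search over (audio, video) variant pairs.
# Each variant is (bitrate_kbps, quality); utility is a simple sum,
# standing in for a real cross-modal utility model.
from itertools import product

def best_combination(audio_variants, video_variants, bandwidth_limit):
    """Return the feasible (audio, video) pair with highest summed quality,
    or None if no pair fits within bandwidth_limit."""
    best, best_utility = None, float("-inf")
    for a, v in product(audio_variants, video_variants):
        cost = a[0] + v[0]
        utility = a[1] + v[1]
        if cost <= bandwidth_limit and utility > best_utility:
            best, best_utility = (a, v), utility
    return best

audio = [(64, 0.4), (128, 0.6)]
video = [(300, 0.5), (700, 0.8), (1500, 0.95)]
print(best_combination(audio, video, 900))  # 128 kbps audio + 700 kbps video
```

With many streams and variants this search explodes combinatorially, which is exactly why the thesis needs dedicated algorithms to stay within a non-annoying time frame.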
The research group "Multimedia Communication (MMC)" at Klagenfurt University was founded and is led by Prof. Hermann Hellwagner. In addition, the group currently has three research assistants, seven project staff members, and three administrative and technical staff members.
The research activities of the group are in the areas of
- multimedia communication and quality of service (QoS) provisioning,
- adaptation of multimedia content w.r.t. network, device, and usage contexts,
- standardization within the ISO/IEC MPEG group (MPEG-21 Multimedia Framework), and
- mobile, adaptive multimedia applications.
The focus of the MMC group is clearly on adaptive delivery of audio-visual contents, taking into account, for instance, fluctuating network and environmental conditions when users are on the move. The group actively participates in several international and national research projects on all levels, ranging from basic research to application-oriented projects and direct cooperation with industry.
In teaching, the MMC group covers the technical courses of the Informatics study programme such as Computer Organization, Operating Systems, Computer Networks, Servers and Clusters, Internet QoS, and Multimedia Coding.
|Institution||University of Klagenfurt|
|Advisors||Laszlo Boeszoermenyi, Frank Eliassen|
|SIGMM member||Laszlo Boeszoermenyi, Frank Eliassen|
This PhD thesis strives to develop methods for streaming real-time video data over best-effort networks. Delivering real-time video data in the desired quality over best-effort networks is a challenging task: quality decreases with the number of frames that are corrupted, lost, or received after their playback time. The main reasons for lost, delayed, or corrupted frames are overloaded streaming servers and congested network paths.
In order to deal with overloaded streaming servers and congested network paths, this thesis presents (1) the design and evaluation of an innovative architecture, called Proxy-to-Proxy, and (2) a model for describing and handling the content delivery problem of real-time data.
1. Proxy-to-Proxy Architecture
The Proxy-to-Proxy architecture is based on the combination of characteristics from classical peer-to-peer systems and content delivery networks. The three main components in the architecture are proxies, videos, and end-clients:
The proxies form dynamic groups that can be referred to as overlay networks. Each group has a leader that knows the state of each proxy (load, type of shared content, etc.) in the group. Groups are formed based on the state of the network and the type of content available on the proxy. For example, two proxies that both share football movies and have a network connection allowing fast content replication form a group. Every other proxy that also shares football movies and has a similar network connection joins the group.
Videos are replicated from original servers to proxies and between proxies. Replication from original servers to proxies is used to provide end-users with the best quality. Replication between proxies balances host and network load within the Proxy-to-Proxy framework. Each end-client is connected to one proxy that is typically (but not necessarily) located in the same local area network. Queries for content are sent to the home proxy and forwarded to other proxies (proxy groups) if necessary.
End-users are serviced by one or more proxies, depending on the load of the proxy and the network. The mechanisms that can be used to achieve sufficient Quality-of-Service are (1) multiple-source streaming based on multiple description coding (MDC), (2) content unaware forward error correction, or (3) a combination of both.
2. Affinity Model
The second and even more challenging aspect of this thesis is to define a model for describing and handling the content delivery problem of real-time data. The model is based on the notion of affinity. Affinity is used to describe relationships between the three main resources (proxies, videos, end-clients) in the Proxy-to-Proxy architecture. Each resource has a certain affinity to other resources. Resources with high affinity attract each other. Attraction leads to (a) content replication from original to surrogate servers, (b) cooperation between surrogate servers, and (c) proper error concealment when delivering the content.
If a new proxy joins, it selects other proxies for cooperation. The selection is based on proxy affinity, combining the aspects of (a) grouping proxies with homogeneous content and (b) maximizing network throughput between collaborating surrogates.
If a video is replicated, the group of surrogate servers (proxies) with the highest replication affinity is selected. Replication affinity is used to find a tradeoff between (a) efficient use of storage space and (b) content placement in locations with high throughput to later end-clients.
If a client sends a request, it is served by the group with the highest stream affinity. Stream affinity is used to choose between (a) error avoidance, (b) error correction for streaming the content, or (c) a tradeoff between both approaches.
In this way, affinity controls the whole system. The components apply their affinity functions autonomously, which leads to a certain level of self-organization. In the long term, this behavior converges to a global optimum. The system is robust and scalable at the same time, as no central decision point is needed.
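The affinity idea above can be illustrated with a small sketch in which each proxy scores candidate partners by combining content overlap with link throughput, and cooperates with the highest-affinity one. The Jaccard similarity, the weighting, and all names below are assumptions for illustration, not the thesis's actual affinity functions.

```python
# Hedged sketch of a proxy-affinity score: content overlap (Jaccard)
# plus normalized throughput. Weights and normalization are invented.

def proxy_affinity(own_content, other_content, throughput_mbps,
                   w_content=0.6, w_net=0.4, max_mbps=100.0):
    """Affinity in [0, 1] between this proxy and a candidate partner."""
    union = own_content | other_content
    overlap = len(own_content & other_content) / max(len(union), 1)
    return w_content * overlap + w_net * min(throughput_mbps / max_mbps, 1.0)

mine = {"football", "news"}
candidates = {
    "proxy_a": ({"football"}, 80.0),   # shares content, good link
    "proxy_b": ({"cartoons"}, 95.0),   # faster link, no shared content
}
best = max(candidates, key=lambda p: proxy_affinity(mine, *candidates[p]))
print(best)  # proxy_a: shared content outweighs proxy_b's faster link
```

Because every proxy evaluates such a function locally, no central coordinator is needed, which matches the self-organizing behavior described above.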
The Research Group Distributed Multimedia Systems (DMMS) belongs to the Department of Information Technology (ITEC) at the Faculty of Technical Sciences of Klagenfurt University, Austria. The group is led by Prof. Laszlo Böszörmenyi and currently has 9 members. Besides one administrative staff member, every member does both teaching and scientific work, funded partly by the Austrian state and partly by external sources such as the national funding agencies (FWF and FFG), the European Union, and companies. Numerous master students are involved in the research projects as well.
The group strives for excellence in (1) research, (2) teaching, and (3) cooperation with industry. We have expertise in a number of basic technologies of informatics, such as operating systems, compilers, networks, and distributed systems. In recent years, the research has concentrated on issues of distributed multimedia, with special emphasis on adaptation, video delivery infrastructures, advanced video coding, scene-based video adaptation, video retargeting, and multimedia languages. We regularly publish papers in refereed conferences and journals, and we are active in organizing conferences and workshops.
Besides project-based cooperation with a number of companies, we support former students in founding start-up companies and transfer know-how between the university and industry.
|Institution||National University of Singapore|
|Advisors||Samarjit Chakraborty, Wei Tsang Ooi|
|SIGMM member||Wei Tsang Ooi|
Today's multimedia applications run on a wide range of devices, from mobile phones to set-top boxes. Such devices are often designed using general-purpose configurable System-on-Chip (SoC) platforms due to advantages such as flexible design, lower cost, and shorter time-to-market. Determining the optimal configuration parameters (such as on-chip buffer size and bus width) for these platforms, however, is difficult due to high variability in execution requirements and bursty on-chip traffic. Furthermore, such configurations typically involve trading off different performance metrics, including power consumption, cost, and application-level quality-of-service.
In this thesis, we propose an analytical framework that can be used in the design space exploration and performance analysis of multimedia SoC platforms. Our analytical framework adopts and extends (i) the concept of variability characterization curves (VCCs), which succinctly capture the variability of a time series (such as on-chip traffic, available processor cycles, or execution requirements), and (ii) the theory of network calculus, which provides a mathematical framework for analyzing the VCCs and deriving worst-case performance guarantees. These techniques have previously been used to analyze communication networks, real-time systems, network processors, and general SoC platforms. The thesis extends the theory to analyze SoC platforms for multimedia applications.
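To make the VCC concept concrete, here is a hedged sketch of how such a curve can be measured empirically from a trace: for every window length k, record the maximum total demand observed in any k consecutive frames. This mirrors the workload curves of network calculus; the thesis's exact VCC definitions may differ.

```python
# Hedged sketch: empirical worst-case workload curve over a demand trace.
# trace[i] is, e.g., the processor-cycle demand of frame i.

def empirical_vcc(trace):
    """vcc[k] = maximum total demand of any k consecutive frames (vcc[0] = 0)."""
    n = len(trace)
    vcc = [0] * (n + 1)
    for k in range(1, n + 1):
        vcc[k] = max(sum(trace[i:i + k]) for i in range(n - k + 1))
    return vcc

demands = [3, 9, 2, 7, 4]        # toy per-frame demands
print(empirical_vcc(demands))    # [0, 9, 12, 18, 22, 25]
```

The resulting curve upper-bounds the demand of every window in the trace, which is what lets network-calculus-style analysis derive worst-case guarantees from it.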
The thesis first addresses the issue of obtaining the VCCs for a large library of multimedia streams that can potentially run on the platform being designed. As this library can be too large to analyze exhaustively, we propose a methodology to identify representative streams from the library by clustering streams with similar variability characteristics. The VCCs measured for these selected streams are then used to represent the workloads imposed on the platform.
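The selection step above can be pictured with a simple leader-clustering sketch: streams whose variability features are close are grouped, and one representative per group is kept. The single scalar feature and the threshold are illustrative assumptions; the thesis's actual clustering methodology may differ.

```python
# Hedged sketch: greedy leader clustering over a scalar variability
# feature (e.g., coefficient of variation of frame sizes; invented here).

def select_representatives(streams, threshold=0.15):
    """streams: list of (name, variability_feature). Keep a stream as a new
    representative only if it is farther than threshold from all kept ones."""
    reps = []
    for name, feature in streams:
        if all(abs(feature - rep_feature) > threshold for _, rep_feature in reps):
            reps.append((name, feature))
    return [name for name, _ in reps]

library = [("news_1", 0.20), ("news_2", 0.25), ("sports_1", 0.70),
           ("sports_2", 0.68), ("cartoon_1", 0.45)]
print(select_representatives(library))  # ['news_1', 'sports_1', 'cartoon_1']
```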
The second part of the thesis proposes system-level analytical solutions for two typical cases of SoC platform design, namely on-chip processor frequency selection and rate analysis, in the context of a multimedia decoding and playback application. In the first case, our analytical approaches can guide a system designer in identifying the frequency ranges that should be supported by the different processors of a platform architecture in order to meet the target multimedia workload without losing intra-stream synchronization. In the latter case, we address the problem of determining tight bounds on the rates at which different multimedia streams can be fed into a given platform architecture without violating application requirements.
The thesis finally extends the concept of VCCs, which characterize worst-case behavior and therefore provide performance guarantees, to the concept of approximate VCCs, which tolerate some errors by discarding extreme cases in their characterization. Approximate VCCs exploit the fact that multimedia data can tolerate a certain amount of error: a frame or a macroblock can be dropped from the buffer without catastrophic effect, due to the built-in robustness of the coding. We show that replacing worst-case VCCs with approximate VCCs in our analysis can lead to designs with substantial resource savings, at the expense of rare violations of application requirements. The thesis also presents preliminary error analysis algorithms to bound the quality degradation due to such violations.
The Networked and Embedded Media Systems (NEMESYS) research group at the School of Computing, National University of Singapore, conducts theoretical and systems research with a special focus on multimedia applications. Our interests span distributed systems, operating systems, embedded systems, and programming systems. In particular, we are studying how to provide systems support for multimedia data types (video, audio, graphics) in the context of applications such as video on demand, tele-conferencing, webcast production, computer games, and video surveillance, running on personal computers and mobile devices.
In research related to this thesis, our group focuses on modeling and analysis of multimedia applications on system-on-chip (SoC) platforms. Our analytical framework allows designers to (i) analytically relate hardware configurations (e.g., buffer sizes, processor frequency), data characteristics (e.g., input rate, variability), and software configurations (e.g., playout delay, scheduling policy), and (ii) derive bounds on the related system design parameters to meet application constraints.
|Institution||University of Ottawa|
|Advisors||Nicolas D. Georganas|
|SIGMM member||Nicolas D. Georganas|
In this work, we present a comprehensive approach to haptically render highly detailed point clouds without first creating their corresponding polygonal mesh. We say that our approach is comprehensive because it addresses: kinesthetic and tactile rendering; static and dynamic models; collision detection and force response; force shading; deformation and stiffness; friction; and texturing. These features comprise the majority of the haptic interactions possible while using a 3-degrees-of-freedom haptic device, which is the target device for our algorithms. Furthermore, we look at height fields and redefine them as a special case of point clouds, for which we also present a specialized haptic rendering approach that includes all the features already mentioned for our general-purpose approach.
Our work relies on redefinitions of what a point cloud's surface is: for the purposes of collision detection, we view it as a collection of touching, if not slightly overlapping, axis-aligned bounding boxes; for the purposes of force response and special haptic-effects rendering, we view the surface as a neighborhood of points in which each point knows its immediate neighbors. Our collision detection algorithms are novel, and our force response algorithms are loose adaptations to point clouds of industry-standard constraint-based approaches.
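The bounding-box view of the surface can be sketched in a few lines: each point contributes a small axis-aligned box, and the haptic probe collides with the surface when its position falls inside any box. The box size and the brute-force search are illustrative simplifications, not the thesis's actual (novel) collision algorithms.

```python
# Hedged sketch: a point cloud as slightly overlapping axis-aligned
# bounding boxes, with a brute-force probe-inside-box collision test.

def point_boxes(points, half_size):
    """One axis-aligned box, as per-axis (lo, hi) intervals, per point."""
    return [tuple((c - half_size, c + half_size) for c in p) for p in points]

def probe_collides(probe, boxes):
    """True if the probe position lies inside any of the boxes."""
    return any(all(lo <= c <= hi for c, (lo, hi) in zip(probe, box))
               for box in boxes)

surface = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]    # toy point cloud
boxes = point_boxes(surface, 0.06)               # boxes overlap slightly
print(probe_collides((0.05, 0.0, 0.0), boxes))   # True
print(probe_collides((0.5, 0.5, 0.5), boxes))    # False
```

A real haptic loop runs at around 1 kHz, so in practice the brute-force scan would be replaced by a spatial index over the boxes.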
Our work is motivated by, and largely designed for, models that are the result of scanning real-life objects in 3D. These models are of great practical use in a large number of fields ranging from arts to manufacturing, and from entertainment to medicine. The scanning technology (laser or contact) is of no consequence to our work, but what is relevant is the high-density point cloud that typically results from 3D scans. When possible, we also make use of our a priori knowledge of the path the scanner takes to sample the real-life object in a novel approach to compile neighborhood information.
Finally, this work demonstrates, through experimental results, its effectiveness in conveying haptic information, its speed even when all haptic algorithms run in a single thread on a single processor, and its insensitivity to the size of the input point clouds; factors that make the case for adopting our approach in place of mesh-reconstruction techniques.
The DISCOVER lab has $8 million worth of equipment, including an IBM DCV system with a 32-processor P595 server, an SGI Onyx 3400 with 3 graphics pipes and 12 processors, a Mechdyne FLEX 3-panel screen, and other VR and haptic equipment.
Seven professors are associated with the lab: directors Nicolas D. Georganas and Emil M. Petriu, as well as Eric Dubois, Abed El Saddik, Shervin Shirmohammadi, WonSook Lee, and Jochen Lang. Over 50 researchers, including graduate students, postdocs, and visiting researchers, work in the lab as well.
The research topics of DISCOVER span a broad area and include Digital Signal Processing, Data Compression, Image Processing and Communication, Haptic Audio Visual Environments (HAVE), Multimedia Communications, Multimedia Tele-surveillance, Knowledge Management, Interactive Media and Games, Collaborative Ambient Intelligence Systems and Applications (CAMISA), Ambient Multimedia Intelligence Systems (AMIS), Collaborative Virtual Environments, Tele-Haptics, Web Telecollaboration Applications, Intelligent Internet Sensors and Appliances, Computer Graphics and Animation, Image-based Modelling, Physics-based Modelling, Deformable Modelling, Computer Vision, 3D Sensing and Modelling, Interactive Acquisition, Navigation Systems, Human Modeling and Animation, Face Recognition, Human-Computer Interaction, Virtual Reality in Health Care, Medical Applications, Music Analysis, Computer Games, Graphics-related eCommerce, Intelligent Sensors, Robot Sensing and Perception, Interactive Virtual Environments, Neural Networks and Fuzzy Systems, Digital Integrated Circuit Testing, Massively Multiuser Online Gaming and Simulations, and Multimedia Adaptation and P2P Communication Protocols.
|Institution||University of Ottawa|
|Advisors||Nicolas D. Georganas, E.M. Petriu|
|SIGMM member||Nicolas D. Georganas|
Hand gestures can be used for natural and intuitive human-computer interaction. To achieve this goal, computers should be able to visually recognize hand gestures from video input. However, vision-based hand tracking and gesture recognition is an extremely challenging problem: hand gestures are highly diverse due to the many degrees of freedom of the human hand, and computer vision algorithms are notoriously brittle and computationally intensive, which makes most current gesture recognition systems fragile and inefficient.
This thesis proposes a new architecture to solve the problem of real-time vision-based hand tracking and gesture recognition with a combination of statistical and syntactic analysis. The fundamental idea is to use a divide-and-conquer strategy based on the hierarchical composition property of hand gestures, so that the problem can be decoupled into two levels. The low level of the architecture focuses on hand posture detection and tracking with Haar-like features and the AdaBoost learning algorithm. The Haar-like features can effectively capture the appearance properties of the hand postures. The AdaBoost learning algorithm can significantly speed up performance and construct an accurate cascade of classifiers by combining a sequence of weak classifiers. To recognize different hand postures, a parallel-cascades structure is implemented. This structure achieves real-time performance and high classification accuracy. The 3D position of the hand is recovered according to the camera's perspective projection. To make the system robust against cluttered backgrounds, background subtraction and noise removal are applied.
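The cascade-of-classifiers idea above can be sketched minimally: each stage is a cheap test, and a candidate image region is rejected as soon as any stage fails, so most non-hand regions are discarded early. The stage functions and feature names below are trivial invented stand-ins for trained boosted Haar-feature classifiers, not the thesis's detectors.

```python
# Hedged sketch of a rejection cascade: a region is accepted only if it
# passes every stage; the first failed stage rejects it immediately.

def cascade_detect(region, stages):
    """stages: list of (score_fn, threshold). Early rejection keeps it fast."""
    for score_fn, threshold in stages:
        if score_fn(region) < threshold:
            return False
    return True

# Toy stages over a dict of precomputed region features (invented names):
stages = [
    (lambda r: r["skin_ratio"], 0.4),    # cheap color-based test first
    (lambda r: r["edge_energy"], 0.6),   # more selective test later
]
print(cascade_detect({"skin_ratio": 0.8, "edge_energy": 0.7}, stages))  # True
print(cascade_detect({"skin_ratio": 0.1, "edge_energy": 0.9}, stages))  # False
```

Ordering cheap, high-rejection-rate stages first is what makes scanning many candidate windows per frame feasible in real time.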
For the high-level hand gesture recognition, a stochastic context-free grammar (SCFG) is used to analyze the syntactic structure of hand gestures, with the terminal strings converted from the postures detected by the low level of the architecture. Based on the similarity measurement and the probabilities associated with the production rules, the hand gesture corresponding to a given input string can be identified by looking for the production rule with the greatest probability of generating this string. For the hand motion analysis, two SCFGs are defined to analyze two structured hand gestures with different trajectory patterns: the rectangle gesture and the diamond gesture. Based on the different probabilities associated with these two grammars, the SCFGs can effectively disambiguate distorted trajectories and classify them correctly.
|Institution||University of Ottawa|
|Advisors||Nicolas D. Georganas, E.M. Petriu|
|SIGMM member||Nicolas D. Georganas|
Conventional graphical user interface techniques appear to be ill-suited for the kinds of interactive platforms required for future generations of computing devices. 3D graphics and immersive virtual reality applications require interactive 3D object manipulation and navigation. Recognition-based user interfaces using speech and gestures are in high demand to provide a more natural human-computer interaction modality. The major challenge facing recognition-based user interfaces is the lack of a standard application programming interface capable of handling ambiguity and providing the means to include domain-specific knowledge about the context in which the user interface is used.
In this dissertation, we study generic dynamic gestures. We use a generic definition of hand postures capable of covering the space of hand postures at different levels of granularity and abstraction, and we monitor the posture variation as it unfolds within the dynamic gesture. We also study the role of context in gesture interpretation without making assumptions about a specific application. We view the hand tracking and gesture recognition subsystem as an integral part of a larger distributed, multi-user, multi-service application, where gesture interpretation plays the role of resolving the ambiguity of the recognized gesture. We identify the aspects relevant to hand gesture interpretation and propose an agent-based system architecture for gesture interpretation. We finally propose a framework for gesture-enabled system design, where context is placed in a middleware layer that interfaces with all sub-modules in the system and plays a dialectic role in keeping the overall system stable.
|Institution||University of Georgia|
|Lab||The Multimedia Systems Group|
|Advisors||Suchendra M. Bhandarkar and Kang Li|
|SIGMM member||Kang Li|
The use of multimedia on mobile devices is fast becoming widespread and popular. Since mobile devices are typically resource-constrained in terms of network bandwidth, battery power, and available screen resolution, it is often necessary to formulate special encoding techniques in order to optimize power consumption and network bandwidth during multimedia data playback and streaming.
This dissertation reports the design and implementation of several novel content-aware algorithms for the compact representation and dissemination of multimedia data, suitable for power- and network-constrained environments. The multimedia subdomains of computer animation data, videos, and images have been considered.
Content-aware data processing is a key theme in all the proposed algorithms. Content information for animation data, represented as Motion Capture (MoCap) data, is derived from the hierarchical structure of the virtual human associated with the data. For video sequences and images, low-level content information, such as gradients, motion, and curvature, is detected and exploited in the proposed algorithms. Another key theme is the elimination or reduction of the spatio-temporal redundancy occurring in MoCap and video sequences. The third key theme is the use of domain-specific customization of data, in order to render the multimedia data more suitable for resource-constrained environments.
Several novel algorithms based on these three key concepts have been proposed for MoCap data compression suitable for power- and network-constrained devices. Several content-aware image and video transcoding algorithms have been proposed, which transcode images and video sequences into multi-resolution, multi-layered representations in order to allow power- and network-bandwidth-adaptive video playback and dissemination. Results show significant power- and network-bandwidth-adaptive capabilities of the videos, surpassing the performance of existing layered video encoding standards. Further, several caching schemes have been developed to disseminate videos created with the proposed technologies to power- and network-bandwidth-constrained clients over the Internet, resulting in cache designs with improved performance compared to existing designs.
The Multimedia Systems Group at the University of Georgia carries out fundamental and applied research on various aspects of analysis, encoding, transmission and delivery of multimedia information. Our recent research projects include the design of distributed delivery architectures for video streaming and online gaming, novel schemes for energy-aware video encoding and delivery over low-bit rate wireless networks, and privacy protection in real-time video-based surveillance systems.
Siddhartha's work leverages recent advances in computer vision and image/video analysis algorithms to provide technologies for automatic content-aware encoding of video streams with little or no human intervention. The automatic content-aware encoding schemes exploit knowledge derived from predefined models for skeletal representation of the human body, human motion analysis, and semantic descriptions of high-level video content using relevant ontologies. These content-aware encoding schemes combined with carefully designed algorithms for system adaptation are shown to result in smart video-based distributed surveillance systems that are resource-efficient and also capable of providing the desired level of privacy protection.
|Institution||University of Illinois at Urbana-Champaign|
|SIGMM member||Klara Nahrstedt|
Three-dimensional tele-immersive (3DTI) environments have great potential to promote collaborative work among geographically distributed participants. However, extensive application of 3DTI environments is still hindered by problems pertaining to scalability, manageability, and the reliance on special-purpose components. Most existing 3DTI systems either do not provide multi-party connectivity or require dedicated resources. Thus, one critical question is how to organize the acquisition, transmission, and display of large-volume real-time 3D visual data over commercially available computing and networking infrastructures so that "everybody" would be able to install and enjoy 3DTI environments for high-quality tele-collaboration.
In this PhD thesis, we explore the design space from the angle of multi-stream Quality-of-Service (QoS) management to support multi-party 3DTI communication. In 3DTI environments, multiple correlated 3D video streams are deployed to provide a comprehensive representation of the physical scene. Traditional QoS approaches from the 2D, single-stream scenario have become inadequate. On the other hand, the existence of multiple streams provides a unique opportunity for QoS provisioning. Previous work has mostly concentrated on compression and adaptation techniques on a per-stream basis while ignoring the application-layer semantics and the coordination required among streams.
As a result of our research, we designed and validated an innovative cross-layer, hierarchical, and distributed multi-stream management middleware framework for QoS provisioning, to fully enable multi-party 3DTI communication over a general delivery infrastructure. The major contributions of our management framework are as follows. First, we introduce the view model for representing the user interest in the application layer. The design of the QoS/resource management framework revolves around the concept of view-aware multi-stream coordination, which leverages the central role of view semantics in 3D free-viewpoint video systems. Second, in the stream differentiation layer we present the design of view-to-stream mapping, where a subset of relevant streams is selected based on the relative importance of each stream to the current view. Conventional streaming controllers focus on a fixed set of streams specified by the application; in our management framework, by contrast, the application layer only specifies the view information, while the underlying controller dynamically determines the set of streams to be managed. Third, in the stream coordination layer we present two designs applicable in different situations. In the case of end-to-end 3DTI communication, an embedded learning-based controller provides bandwidth allocation for the relevant streams. In the case of multi-party 3DTI communication, we propose a novel ViewCast protocol to coordinate the multi-stream content dissemination over an end-system overlay network. Finally, we embed 3DTI session management in the framework, which facilitates session initialization, resource registration, and membership maintenance.
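The view-to-stream mapping idea can be sketched as follows: the application states only the desired view, and the controller ranks camera streams by their relevance to that view, selecting the top subset that fits a bandwidth budget. The angular-distance relevance measure and the greedy selection are illustrative assumptions, not the actual models used in the thesis.

```python
# Hedged sketch of view-to-stream mapping: rank streams by angular
# closeness to the requested view, then pick greedily within a budget.

def select_streams(view_angle, streams, bandwidth_kbps):
    """streams: list of (name, camera_angle_deg, bitrate_kbps).
    Returns the names of the selected streams."""
    def angular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    ranked = sorted(streams, key=lambda s: angular_distance(view_angle, s[1]))
    chosen, budget = [], bandwidth_kbps
    for name, _, bitrate in ranked:
        if bitrate <= budget:
            chosen.append(name)
            budget -= bitrate
    return chosen

cams = [("cam_front", 0, 400), ("cam_left", 90, 400), ("cam_back", 180, 400)]
print(select_streams(10, cams, 900))  # ['cam_front', 'cam_left']
```

When the user's view changes, re-running the mapping changes the managed stream set, which is the key difference from controllers that manage a fixed, application-specified set of streams.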
We have implemented a prototype of our multi-stream management framework and evaluated it through both simulation and real 3DTI sessions among tele-immersive environments residing at different institutions across Internet2. Our experimental results demonstrate the implementation feasibility and performance enhancement of the management framework.
The Multimedia Operating Systems and Networking (MONET) Research Group, led by Professor Klara Nahrstedt in the Department of Computer Science at the University of Illinois at Urbana-Champaign, is engaged in various research aspects of communication and multimedia systems, including Quality-of-Service (QoS) Management, networking and distributed systems, integration of guaranteed and best effort services for audio/video/data traffic, QoS-aware resource management, QoS routing, soft real-time scheduling, middleware support for distributed multimedia applications, and multimedia security. Currently, we are working on the following projects: 3D Tele-immersive Environments (TEEVE), First Responder System, Security in Wireless Sensor Networks, Data Dissemination and Key Management in Mobile/Wireless Ad-hoc Networks, Management Overlay Networks, and Trustworthy Critical Cyber Infrastructure.
|Date||September 20-21, 2007|
|Location||Dallas, TX, USA|
|Chair(s)||Deepa Kundur, Balakrishnan Prabhakaran|
The 9th ACM Multimedia and Security Workshop, ACM MM&Sec07, was held September 20-21, 2007 in Dallas, Texas. This year's meeting continued a successful series of workshops started in 1998 that has become the premier forum for the presentation of cutting-edge research and demonstrations spanning the field of multimedia security. The mission of this annual meeting is to identify key future research issues in the areas of multimedia security and protection, robust media transmission and networking, manipulation and recognition, and the detection of hidden communications. We expect the workshop to motivate this research and to establish fruitful relationships with key actors from academia, industry, and government in the US, Europe, and Asia. The objectives of this year's workshop were to: 1) discuss emerging technologies in digital multimedia authentication, encryption, identification, fingerprinting, steganalysis, and secure multimedia networking; 2) identify critical, high-impact research problems addressing specific deficiencies in the field of secure multimedia distribution and consumption; and 3) formulate target applications of the identified technologies in the commercial, civilian, and military sectors.
The call for papers attracted 52 submissions from Asia, Europe, Canada, and the United States. The program committee accepted 26 papers that cover a variety of topics, including steganography and covert communication, authentication and forensics, security primitives and encryption, digital watermarking, and attacks on multimedia systems. We hope that these proceedings will serve as a valuable reference for security researchers and developers.
The organization of ACM MM&Sec07 was only possible due to the generous time, care and effort of members of the multimedia security community. We would like to thank the many authors who submitted their work for consideration, and the program committee and external reviewers, who worked in a timely manner to review the papers and provide useful critiques. In addition, we would like to thank Xiaohu Guo, our Local Arrangements Chair and Treasurer, as well as Amruthraj Belaldavar, who helped develop the registration website. Finally, we would like to thank ACM SIGMM for their continued support of this successful workshop series.
It is clear that multimedia security is a field of diverse activity in which innovations in applied signal processing interact with cryptography and networking. These proceedings are intended to provide an overview of the area through an exposition of timely research in the field. We hope that this collection inspires continued research, debate, and increased interaction among the diverse parties involved in its evolution. The next meeting will be the tenth anniversary of the workshop and will take place at the Oxford University Computing Laboratory, Oxford, UK (September 22-23, 2008). The call for papers is open until April 10, 2008; see the workshop website: www.mmsec08.com.
|Date||July 9-11, 2007|
|Location||Amsterdam, The Netherlands|
|Chair(s)||Nicu Sebe, Marcel Worring|
CIVR was held from the 9th to the 11th of July in Amsterdam, in a 17th-century church. After five editions, the CIVR conference has now become an official ACM conference and an IAPR co-sponsored event. CIVR is set up to present the state of the art in image and video retrieval by researchers and practitioners from throughout the world and to provide an international forum for the discussion of challenges in the fields of image and video retrieval. The conference is one of the most important and influential events in this area and successfully gathered leading researchers and practitioners from academia and industry.
This year, 191 papers from 41 countries were submitted and, after review by the Program Committee members, 71 were accepted for presentation (22 orals and 49 posters). Additionally, there were two excellent invited presentations given by Prof. Keith van Rijsbergen from the University of Glasgow, UK, and Prof. Andrew Zisserman from the University of Oxford, UK, internationally renowned experts in information retrieval and computer vision, respectively. With well over 150 participants, the conference was a great success.
A unique feature of the conference is the high level of participation from practitioners such as content owners, producers, creators, archivists, service providers, and policy makers. This year the practitioner day was organized together with CHORUS, which is a European coordination action bringing together different European projects in the area of Audio-visual search engines. The two practitioner chairs, Jan Nesvadba and John Oomen did a tremendous job and succeeded in inviting key persons from the European Commission, academia, and industry, bringing them together in lively discussions.
The potential of academic results is best communicated through demos. This year the demo session contained 15 technical demos. In addition, we had three live competitions on image retrieval, video copy detection, and video retrieval, called the VideOlympics, all of which were held at the Netherlands Institute for Sound and Vision, the national archive for all broadcast material. The dinner was also held in this highly acclaimed building.
The VideOlympics, which had its first edition at this CIVR, is a competition in which several state-of-the-art systems compete simultaneously on a video retrieval task. It was a lively event, with nine systems competing and a highly involved audience following the searchers, their interfaces, and their achievements, which were communicated live on large scoreboards. To get a better impression of this event, check out the video that was made, which can be viewed at http://www.videolympics.org.
Next year's CIVR will be held in Niagara Falls, Canada. Please check out the website http://www.civr2008.org.
|Date||August 14-17, 2007|
|Chair(s)||Victor Leung, Sastri Kota|
QShine 2007, the Fourth International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, was held on August 14-17, 2007 at the Empire Landmark Hotel in Vancouver, BC, Canada. Several workshops were held on the first day, and the remaining three days were devoted to the main conference. Following the successful QShine conferences held in Waterloo, ON, Orlando, FL, and Dallas, TX, in 2006, 2005, and 2004, respectively, the 2007 conference continued to focus on heterogeneous networking, but its scope was expanded from quality-of-service (QoS) issues to a wider range of topics, including reliability, security, and robustness. The conference was organized under the sponsorship of ICST and CREATE-NET, with generous financial support provided by LG Electronics and Nokia. The conference also received technical co-sponsorship from the IEEE Communications Society, and technical cooperation from ACM SIGMM, SIGMOBILE, and SIGSIM. QShine 2007 was very well received by the telecommunications industry and technical community, with close to one hundred delegates from North America, Asia, and Europe participating in the conference.
The Technical Program Committee assembled a very strong technical program for the main conference. Out of the 114 papers submitted, 49 high-quality submissions were accepted for presentation at the conference. Furthermore, the conference program also featured seven invited papers from leading researchers in the area. The presentations were organized in 18 parallel-tracked sessions covering the topics of QoS provisioning, QoS adaptation, modeling and measurement, design and implementation of QoS-enabled networks, scheduling and resource management, QoS in peer-to-peer and overlay networks, QoS in WLAN, WPAN, WMAN and WiMAX, pricing in wired, overlay and wireless networks, wireless sensor networks, energy-efficient protocols in wireless networks, cross-layer performance optimization, and security protocols in wired, overlay and wireless networks. Two papers were selected for special recognition. The honour of the QShine 2007 Best Paper Award went to "FBM model based network-wide performance analysis with service differentiation" by Yu Cheng, Weihua Zhuang, and Xinhua Ling. The paper "On throughput efficiency of geographic opportunistic routing in multihop wireless networks" by Kai Zeng, Wenjing Lou, Jie Yang, and D. Richard Brown III received the QShine 2007 Best Paper Runner-Up Award.
The plenary keynote presentations delivered by three very prominent members of the networking community were definitely highlights of the conference. Dr. Victor Bahl of Microsoft Research gave a thought-provoking talk titled "Are Self-Managing Wireless Networks in Our Future?" He exposed the lack of tools and techniques that would allow non-technical users or even IT staff to maintain the wireless networks that are becoming widely deployed, and challenged the audience to think about solutions that could eventually lead to self-managed networks. Prof. Kang G. Shin from The University of Michigan gave a broad presentation "On QoS of Networked Embedded Systems". He stressed that security is an important part of QoS, discussed the issues of securing embedded systems, and reviewed the solutions emerging from ongoing research. In his talk titled "Can Clocks ever be Synchronized over Wireless Networks?", Prof. P.R. Kumar from the University of Illinois at Urbana-Champaign discussed the fundamental limitations to clock synchronization over wireless networks and presented a spatial smoothing approach. He also gave an interesting demonstration of a tracking application using only time measurements.
A new feature of QShine 2007 was the QShine Industrial Panel on "Trends and Technical Challenges of Wireless Multimedia Communication", organized and chaired by Prof. Panos Nasiopoulos. The panelists were Dr. Byung K. Yi from LG Research USA, Dr. Ed Casas from Intel, Dr. Li Deng from Microsoft, Mr. Ryan Heidari from Qualcomm, and Mr. William Mutual from ComVu. They shared with the audience the latest developments in multimedia over wireless. There were lively discussions among the panelists and with the audience.
A full-day workshop on Cognitive Wireless Networks and three half-day workshops on Mobile Content Quality of Experience, Wireless Networking for Intelligent Transportation Systems, and Satellite/Terrestrial Interworking were held on the first day of the conference. In particular, Prof. Simon Haykin from McMaster University delivered a keynote speech titled "Cognitive Radio: A Way of the Future for Wireless Communications" to a standing-room-only audience in the Cognitive Wireless Networks workshop.
QShine 2007 was blessed with excellent weather, and all the attendees and many guests had a wonderful evening enjoying close interactions with colleagues and admiring the skyline and night lights of Vancouver at the dinner cruise, which was the featured social event of the conference.
QShine 2007 owes its success to the effort and support of many individuals. First and foremost, the conference is for the attendees, the speakers, and the authors; we thank them for their contributions and participation, without which there would be no point in having a conference. We thank the volunteer efforts of all the Workshop Chairs, TPC members and reviewers, and the strong support and guidance of the Steering Committee, especially the Co-chairs Profs. Imrich Chlamtac, Yuguang (Michael) Fang and Xuemin (Sherman) Shen. We also thank Dr. Xi Zhang for his efforts in publicising the conference, Dr. Qiang Zhang for her efforts in managing the final paper submissions, Dr. Giovanni Giambene for organizing the workshops, Dr. Panos Nasiopoulos for arranging the corporate sponsorships and organizing the panel session, and Dr. Lin Cai for local arrangements and supervising the student volunteers. Special thanks to the student volunteers from the University of Victoria and the University of British Columbia who helped with the daily chores of the conference. We are also most grateful to Ms. Kitti Kovacs and the capable staff of ICST for looking after the countless logistical details that made the conference a reality.
QShine 2008 will be held in Hong Kong on July 28-31, 2008. We give the organizing committee of QShine 2008 our best wishes for a successful conference.
Victor Leung (University of British Columbia), General Chair, QShine 2007
Jelena Misic (University of Manitoba), TPC Co-chair, QShine 2007
Guoliang (Larry) Xue (Arizona State University), TPC Co-chair, QShine 2007
|Date||September 25-28, 2007|
|Chair(s)||Bernhard Rinner, Wayne Wolf|
The first ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC) was held in Vienna, Austria on September 25-28, 2007. The conference attracted 117 attendees mostly from North America and Europe who showed a strong interest in the intersection of computer vision, embedded computing, and distributed algorithms.
The conference started with a series of tutorials introducing various topics that serve as background for distributed smart cameras. Twenty-two technical papers were presented at the conference, covering a range of work from architecture to algorithm design and applications of smart camera networks. A series of poster/demo sessions was also held, presenting 29 posters. Some work emphasized vision algorithms, while other work emphasized distributed embedded systems. A Ph.D. forum attracted presentations from 12 Ph.D. candidates working on related topics.
Three plenary talks were given. Feng Zhao of Microsoft Research spoke on "Sensing platforms for World-Wide Sensor Web". Mubarak Shah of the University of Central Florida spoke on "Video surveillance and monitoring using distributed cameras". Wilfried Philips of the University of Ghent spoke on "Challenges for single- and multi-camera video processing". The conference closed with a panel session titled "Distributed smart cameras: research toys or practical tools?"
Vienna offered a very pleasant environment for the conference. The conference itself was held at the University of Vienna in the heart of the city. The conference dinner included a tour of the city and of a local winery.
The second ICDSC will be held at Stanford University on September 7-11, 2008. The conference Web site is http://www.icdsc.org. Deadline for paper submission is April 21, 2008. Proposals are invited for tutorials and workshops. We invite you to attend.
|Date||August 22-25, 2007|
|Chair(s)||Ilpo Koskinen, Turkka Keinonen|
In August 2007, the Designing Pleasurable Products and Interfaces (DPPI) conference was held at the University of Art and Design Helsinki. As before, it was organized as a one-track conference, but this time in cooperation with ACM. The conference had four main themes: the aesthetics of interaction, social interaction, luxury in (interaction) design, and the notion of experience in interaction. DPPI also had an industry session with papers from companies like Philips and Nokia, a student session, and a "design day" during which participants could learn design skills taught by master craftsmen.
There were 22 full papers, 6 short papers, 14 student papers, and 6 industry papers.
Keynotes were given by Dr. Katja Battarbee from IDEO, Palo Alto, who talked about extreme users in design; Dr. Stephan Wensveen from TU Eindhoven, who focused on work on ambient intelligence in Eindhoven; and Prof. Jacob Buur from the Mads Clausen Institute in Sønderborg, Denmark, who made a case for studying enskillment for design and skillfully presented his reflections on the conference. He was a marvelous closing keynote.
Papers explored many topics, but a few themes reappeared throughout the program.
The next conference will be held in 2009 at the Technical University of Compiègne near Paris. The name of the conference will in all likelihood be changed to reflect the community's interest in the aesthetics of interaction rather than "pleasure" which, it was felt, has done its job. A conference video created by Hannah Regier and Jenny Liang from the Art Center College of Design in Pasadena, California, is available at the designresearch.uiah.fi/dppi07 site.
|Date||October 21-24, 2007|
|Chair(s)||Jose Valdeni de Lima|
WebMedia (Brazilian Symposium on Multimedia and the Web) is an annual event, promoted by the Brazilian Computer Society (SBC), which has been the most important Brazilian forum for presentations, tutorials and discussions on recent advances in research and technology related to Multimedia and the Web since 1995.
In this thirteenth edition, WebMedia was held on October 21-24 in the beautiful city of Gramado, one of the most popular tourist destinations in Rio Grande do Sul, Brazil. WebMedia 2007 was organized by the Applied Informatics Department of the Federal University of Rio Grande do Sul (UFRGS) and the Informatics Center of the Federal University of Pernambuco (UFPE). The conference obtained financial and organizational support from SBC, ACM, CGI.br, CNPq, and CAPES.
Besides the technical sessions with full and short paper presentations, the WebMedia 2007 program included six short courses, three invited keynote speakers, a workshop on tools and applications, a workshop on ongoing theses and dissertations, and an undergraduate research workshop. In 2007, WebMedia received 114 full-paper submissions: 84 in Portuguese, of which 27 were accepted, and 30 in English, of which 11 were accepted for publication in the proceedings.
The Program Committee did an excellent job of selecting the best papers for publication at the conference. We would like to thank not only the members of the organizing committee, the program committee, and the reviewers, but also the authors, the short-course instructors, the invited speakers, and everyone who contributed to the success of the conference. Among the accepted and presented papers, we would like to highlight those that stimulated debate on topics such as Digital TV, Digital Libraries, and Multimedia Networking and Systems.
Finally, the scientific results of the conference included recent advances in Brazilian Digital TV, Interactive Digital TV, Electronic Commerce, Internet Multimedia Services (IPTV, VoIP, etc.), Techniques for Developing Web Applications, Ubiquitous and Pervasive Multimedia Systems, Mobile Hypermedia, the Semantic Web, Document Synchronization and Temporal Aspects, and Document Engineering.
|Location||Amsterdam, The Netherlands|
|Date||May 24-25, 2007|
|Chair(s)||Pablo Cesar, Konstantinos Chorianopoulos, Jens F. Jensen|
The fifth edition of the European Conference on Interactive Television (EuroITV) was organized by CWI (Centrum voor Wiskunde en Informatica), Amsterdam. EuroITV07 was held in cooperation with the Association for Computing Machinery (ACM) and co-sponsored by the International Federation for Information Processing (IFIP). The aim of the conference was to bring together researchers from different regions and diverse disciplines. The conference included contributions from Europe, America, Asia, and Oceania, with researchers representing disciplines such as media studies, audiovisual design, multimedia, human-computer interaction, and management. In this way, the conference sought to develop a common framework for the new, multi-disciplinary (usability, multimedia, narrative) field of interactive television.
Because of the multi-disciplinary nature of the field, the conference was held in cooperation with the ACM Special Interest Group on Multimedia (ACM SIGMM), the ACM Special Interest Group on Computer-Human Interaction (ACM SIGCHI), and the ACM Special Interest Group on Hypertext, Hypermedia and Web (ACM SIGWEB).
This year the conference theme was "Interactive TV: A Shared Experience". The major goals of EuroITV2007 were to consolidate the conference as the focal point of interactive television research and to grow in size, but most importantly in quality, and in a controlled manner.
The three keynotes of the conference were given by Maddy Janse, titled "Interactive Media: Shared Experiences in the Extended Home Environment"; Luiz Fernando Gomes Soares, titled "Interactive Television in Brazil: System Software and the Digital Divide"; and Matthias Rauterberg, titled "Ambient Culture: A Possible Future for Entertainment Computing".
EuroITV2007 was considered a success; the major achievements included the high quality of the program, the publication of the conference proceedings in Springer Lecture Notes in Computer Science, the involvement of scientific organizations (ACM, Springer), the number of grants obtained (from the Royal Netherlands Academy of Arts and Sciences and the European Research Consortium for Informatics and Mathematics), and the number of participants (150).
|Date||August 27-29, 2007|
|Chair(s)||Tasos Dagiuklas, Nicolas Sklavos|
The Third International Mobile Multimedia Communications Conference was held in Nafpaktos, Greece, from August 27 to 29, 2007 (www.mobimedia.org/2007). The conference was hosted this year by the Department of Telecommunication Systems and Networks of the TEI of Mesolonghi (www.tesyd.teimes.gr) and was supported by the scientific organizations ICST, ACM, and EURASIP. The conference attracted 80 participants from 17 countries.
The conference included four keynote speeches: Prof. Mohammad Ghanbari from the University of Essex (UK) on "Video Coding Standards Evolution for Mobile Multimedia"; Dr. Bartolomé Arroyo Fernandez from the European Commission (Belgium) on "Networking Media for Wireless Technologies: FP7 projects and research directions"; Dr. George Agapiou from OTE (Greece) on "Multimedia Delivery over Emerging Wireless Networks: Real-Time Measurements and Research Directions"; and Prof. Thomas Kaiser (Germany) on "MIMO-LTE: A relevant step towards 4G".
The Mobimedia conference attracted 120 full-paper submissions. Seventy-one full papers were accepted into the final programme (a 59% acceptance rate). The technical program comprised twelve sessions on Video Coding and Transmission over Wireless Systems, Multimedia Content Management, Cross-Layer for Mobile Multimedia, Performance Evaluation of Multimedia over Wireless Networks, Transport Protocols and QoS for Wireless Multimedia, and Mobile Multimedia Security. Furthermore, two special sessions were organized: Context Awareness in Ubiquitous Environments, and Convergence among Mobile Multimedia Services and Fourth-Generation Wireless Networking Standards. In parallel with the conference, an exhibition was arranged with booths from the companies Intracom-Telecom (www.intracom-telecom.com), on IPTV over Wireless Networks, and BeSecure (www.bescure.gr), on Wireless Multimedia Security.
Extended versions of outstanding papers will be considered for publication in a special issue of the ACM/Springer MONET journal entitled "Multimedia over ad-hoc and sensor networks".
The conference local arrangements were organized by Prof. Tasos Dagiuklas and Prof. Nicolas Sklavos from the TEI of Mesolonghi.
The Mayor of Nafpaktos hosted an excellent reception at the magnificent Venetian port of Nafpaktos town, followed by a tour of the local museum.
|Full name||The 18th International Workshop on Network and Operating Systems Support for Digital Audio and Video|
|Date||May 28-30, 2008|
|Chair(s)||Lars Wolf, Carsten Griwodz|
|Paper submissions||Feb 11, 2008|
As is established practice at NOSSDAV, the 18th installment will focus on cutting-edge, state-of-the-art research in multimedia and newly emerging areas. NOSSDAV in Braunschweig will have a dual emphasis on interactivity: we encourage submissions on the topic of system support for interactive multimedia in particular, and we want to foster interactivity among senior and junior participants.
|Full name||ACM International Conference on Image and Video Retrieval|
|Date||July 7-9, 2008|
|Location||Niagara Falls, Canada|
|Paper submissions||Feb 11, 2008|
The International Conference on Image and Video Retrieval (CIVR) series was originally set up to present the state of the art in image and video retrieval to researchers and practitioners throughout the world. The conference aims to provide an international forum for the discussion of challenges in the fields of image and video retrieval. CIVR2008 seeks original, high-quality submissions addressing innovative research in the broad field of image and video retrieval.
|Full name||European Conference on Interactive Television|
|Date||July 3-4, 2008|
|Paper submissions||Feb 29, 2008 (short papers)|
EuroITV is a forum for professionals, not only from Europe but from all over the world, who research and work on all aspects of interactive television. The annual conference features work on different aspects of interactive television, e.g. IPTV, mobile TV, digital content production, entertainment computing, usability and user experience evaluation, changes in technical requirements and infrastructure, and future technologies.
|Full name||Euro American Conference on Telematics and Information Systems|
|Date||Sep 10-12, 2008|
|Chair(s)||Leila M. de Almeida e Silva (Federal University of Sergipe)|
|Paper submissions||March 8, 2008|
EATIS 2008 aims at, but is not limited to, the production of scientific work around e/m-Inclusion, e/m-Government, e/m-Health, e/m-Learning, e/m-Culture and e/m-Entertainment (local, regional, national and international), the Semantic Web, and Web 2.0 communities, contents and technologies.
|Full name||4th International Mobile Multimedia Communications Conference|
|Date||July 7-9, 2008|
|Chair(s)||Jyrki Huusko, Tapio Frantti|
|Paper submissions||March 14, 2008|
The development and deployment of multimedia services and applications in mobile environments requires adopting an interdisciplinary approach where both multimedia and networking issues are addressed jointly. Different types of semantic characteristics of media, human interpretation of audiovisual information, coding standards, and interaction with networking, mobility, and security protocols are research issues that need to be carefully examined when proposing new solutions. The efficient delivery and deployment of multimedia applications and services over emerging, diverse, and heterogeneous wireless networks is a challenging research objective.
|Full name||10th ACM Workshop on Multimedia & Security|
|Date||Sep 22-23, 2008|
|Paper submissions||April 10, 2008|
The objective of the 10th ACM Multimedia and Security Workshop is to identify key research issues in the areas of multimedia security, such as data protection, media forensics, covert communication, and security in biometrics. We expect the workshop to motivate such research and to establish fruitful relationships with the key actors from academia, industry, and government.
|Full name||ACM International Conference on Multimedia|
|Date||Oct 27 - Nov 1, 2008|
|Location||Vancouver, BC, Canada|
|Chair(s)||Abdulmotaleb EL Saddik, Son Vuong|
|Paper submissions||Apr 18, 2008|
ACM Multimedia 2008 invites your participation in the premier annual multimedia conference, covering all aspects of multimedia computing: from underlying technologies to applications, theory to practice, and servers to networks to devices.
|Full name||2nd ACM/IEEE International Conference on Distributed Smart Cameras|
|Date||Sep 7-11, 2008|
|Location||Stanford University, CA, USA|
|Chair(s)||Hamid Aghajan, Tsuhan Chen|
|Paper submissions||Apr 28, 2008|
The conference aims to provide an opportunity for researchers investigating smart camera architectures, algorithm design, embedded vision-based processing, and smart environments to exchange their most recent results. Offering insight into the potential and challenges of distributed vision networks, along with an outlook on the research opportunities ahead, is also an objective of the conference.
|Employer||FX Palo Alto Laboratory, Inc.|
|Valid until||March 31, 2008|
FX Palo Alto Laboratory, Inc. has an immediate opening for a Research Scientist with expertise in immersive virtual environments. We are developing applications for virtual worlds and are seeking expertise in VR-related technologies, such as simulation, 3D modeling, procedural 3D graphics, real-time motion graphics, and distributed computation. The candidate should be interested in working on practical applications in a collaborative setting. This position requires a Ph.D. in Computer Science or a related field, strong development skills, and an excellent publication record.
|Employer||FX Palo Alto Laboratory, Inc.|
|Valid until||March 31, 2008|
FX Palo Alto Laboratory, Inc. has an immediate opening for a Research Scientist with expertise in large-scale parallel and distributed systems. We are developing distributed virtual collaboration and multimedia applications that run on everything from cellphones and PDAs to laptop and desktop computers to clusters of multicore computers. Experience with parallel programming, large-scale storage systems and multimedia databases, distributed programming tools, and network protocols is desired. The candidate should be interested in working on practical applications in a collaborative setting. This position requires a Ph.D. in Computer Science or a related field, strong development skills, and an excellent publication record.
|Employer||FX Palo Alto Laboratory, Inc.|
|Valid until||March 31, 2008|
FX Palo Alto Laboratory, Inc. has an immediate opening for a Research Scientist with expertise in computer vision or image processing and analysis to work with us on document image analysis. The goal of the project is to understand the structure and content of scanned documents, such as forms, brochures, and other complex office documents. We plan to use a probabilistic, model-driven approach and machine learning techniques. We also want to explore parallel algorithms, since speed is critical for our task. Experience in working with document images is desirable but not required. The candidate should be interested in working on practical applications in a collaborative setting. This position requires a Ph.D. in Computer Science or a related field, strong development skills, and an excellent publication record.
|Employer||Image and Signal Processing Department, TELECOM ParisTech (ENST)|
|Valid until||March 1, 2008|
TELECOM ParisTech is developing a new multimedia indexing and mining platform. Our goal is to allow the user to visually construct complex processing chains from the variety of tools provided by the researchers.
|Employer||Image and Signal Processing Department, TELECOM ParisTech (ENST)|
|Valid until||March 1, 2008|
The main objective is to obtain an automatic segmentation of the audio track of TV shows and to automatically label the different types of segments (speech, music, etc.), including mixed segments, by developing new statistical approaches for novelty detection and content structuring.
|Employer||University College London|
|Valid until||March 31, 2008|
The Department of Computer Science at University College London is seeking two outstanding PhD students to join our team in the area of Information Retrieval. One student will be jointly supervised by Jun Wang of UCL and Stephen Robertson of Microsoft Research, Cambridge; this project will focus on developing formal probabilistic information retrieval (Web search) models. The other student will be supervised by Ingemar Cox; this project will focus on understanding resource-constrained information retrieval.
|Employer||University of Oslo|
|Valid until||March 25, 2008|
As part of the centre for research-based innovation Information Access Disruptions, the Networks and Distributed Systems group at the University of Oslo seeks to employ one PhD student or one PostDoc in the area of systems-oriented, experimental computer science, including operating systems, protocols, and the architecture of distributed systems.