Wolfgang Stuerzlinger's Research Projects
This page lists my current research projects along with their results.
All publications are linked in square brackets and point to entries
in my publications list.
A list of older research projects can be found after that.
Please also consult the WWW pages of the students
who are or were working on these projects with me, the
research labs where the work took place, and the pages of the
agencies and companies supporting my research.
3D and Spatial User Interfaces (3D UI), Visual Analytics (VA),
Human Computer Interaction (HCI), Virtual Reality (VR),
and Interactive Computer Graphics
Most of my research focuses on better user interfaces for computer
systems in general.
The first central theme is enhanced user interfaces for 3D, Spatial, and Virtual
Reality systems and interactive Computer Graphics.
This also includes user interfaces for big spatial data.
The second theme is Human Computer Interaction research, including text entry,
pointing, and work towards better graphical user interfaces (GUIs).
Alternatives for Generative Design and Visual Analytics (SFU)
Solving difficult or ill-defined problems, such as designing a new artifact or
identifying new insights from big data sets, typically involves an exploration of alternatives.
Yet, current tools do not support alternatives well in creative and analytical work.
- Alternatives in Generative Design
We created GEM-NI (Generative Many-Nodes Interpreter) - a graph-based generative design
tool that supports parallel exploration of alternative designs.
GEM-NI enables exploration with alternatives through parallel editing,
a new form of non-destructive resurrection from history, branching, novel merging
mechanisms, comparing, and new structural Cartesian products of alternatives.
Further, GEM-NI provides a modal graphical user interface and a design gallery, which
allow designers to control and manage their design exploration.
User studies confirm that GEM-NI supports creative design work well.
- Mixed-Initiative Systems for Big Data
We did a survey on the role of visual analytics and prediction in mixed-initiative systems.
Our current work applies and extends the ideas behind GEM-NI to visual analytics.
3D UI - 3D User Interfaces (ISRG)
This area of research investigates user interfaces for interactive 3D systems,
where users can easily select and manipulate 3D content.
- 3D Pointing and Selection (ISRG)
Common 2D input devices, such as the mouse, outperform most 3D input
devices on frequently used tasks in 3D environments. This seems
counterintuitive at first. One aspect of the problem is that simultaneous
control of three degrees of freedom is more difficult for humans compared
to just two degrees of freedom. Hence, most successful systems use
3D input only rarely; see, e.g., the work on
Guidelines for 3D User Interfaces,
mentioned below. The other aspect pertains to differences in 2D and
3D input technologies. Examples for differences include variations
in latency, jitter, the effect of co-location of input and output,
as well as the existence of a supporting surface. To investigate this,
we have performed a series of experiments that explore and document the
effects of each factor.
This research is part of the NSERC CREATE Program in Computational Approaches to Sensorimotor Transformations for the Control of Action.
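The latency factor mentioned above can be made concrete with a small sketch: a Fitts'-law movement-time prediction in which end-to-end latency inflates the slope. The coefficients `a`, `b`, and `k` below are illustrative placeholders, not fitted values from these experiments.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, latency_s=0.0,
                            a=0.1, b=0.2, k=1.0):
    """Predict movement time in seconds under a hypothetical model
    where latency inflates the Fitts' law slope.

    a, b: illustrative intercept (s) and slope (s/bit);
    k: assumed slope penalty per second of end-to-end latency.
    """
    return a + (b + k * latency_s) * index_of_difficulty(distance, width)
```

With these placeholder coefficients, 100 ms of latency adds 0.1 s/bit to the slope, so harder targets suffer proportionally more, which is the qualitative pattern such experiments probe.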
- Slide - Easy-to-Use 3D Manipulation System (ISRG)
This system builds on the work in the SESAME system and realizes novel
techniques for 3D manipulation with 2D input devices, such as a mouse or
multi-touch tabletop system. Slide includes new methods for 3D rotations,
as well as a new method for quickly disambiguating 3D positions in perspective
viewing. It also includes known methods for quick 3D navigation and 3D selection.
Moreover, the system supports common VR input devices such as the
Nintendo Wii Remote and Balance Board.
Recent results were submitted to the IEEE 3DUI Contest 2011. Among all contest
entries, this system featured the fastest completion times for the
virtual 3D puzzle, for both novices and experts.
In the same year,
we ported Slide onto MULTI,
used its multi-touch abilities,
and enabled 3D Manipulation, see this video.
This research is part of the NSERC CREATE Program in Computational Approaches to Sensorimotor Transformations for the Control of Action and also the GRAND (Graphics, Animation and New Media) Network of Centres of Excellence.
- Easy-to-Use 3D Navigation
This work presents new, easy-to-use 3D navigation methods.
- Guidelines for and Evaluations of 3D User Interfaces (ISRG)
One of the problems of VR systems is that 3D input devices are still in their
infancy. Even more importantly, the software technologies that map the
raw user movements to 3D object manipulation are also immature.
Comparing the performance of intelligent algorithms for 3D manipulation
in a desktop environment with a mouse with "traditional" VR manipulation
techniques using 3D trackers illustrates this best.
For many common tasks the desktop system with the mouse excels.
This work assembles a list of guidelines for 3D manipulation. Each
of the guidelines is targeted at making 3D and VR systems easier to use
and is based on previous research or evaluative studies.
The most recent work identifies input devices, such as the air pen, which work well for 3D manipulation.
This research is part of the NSERC CREATE Program in Computational Approaches to Sensorimotor Transformations for the Control of Action and also the GRAND (Graphics, Animation and New Media) Network of Centres of Excellence.
Interaction on Large Display Surfaces (ISRG)
This project investigates infrastructure that encourages collaborative
work on large display surfaces.
Within this project, we are working on 2D and 3D systems that are very
easy to use, but still retain the capability to analyze real-world problems
in collaboration with other users in the same room.
Towards this we are working on a collaborative platform, which allows multiple people
in the same location to seamlessly collaborate during a session, even
if the content is beyond arm's reach.
The project has the following subprojects:
- MULTI - Collaboration on Large Display Surfaces (ISRG)
Lasers can be used as an interaction device for large display surfaces.
This project focuses on a new kind of laser-based input device, and is
one of the few technologies that support multiple simultaneously active
users. Evaluations show that laser pointers can serve as effective input
devices for large screens. Another part of the research focused on a
state-of-the-art evaluation of the performance of various remote pointing devices.
Our multi-user laser pointer input technology provides a basis for
collaborative, shared display groupware (SDG) and computer supported
cooperative work (CSCW) applications. Beyond that we are currently
completing a new kind of collaborative hardware setup designed to
facilitate collaborative work by teams of 2-10 people with interdisciplinary
backgrounds. Such a seamless collaborative system can be used in design
review scenarios, which leads to better end products.
The system is called MULTI (Multi-user laser table interface)
and provides several fully interactive table and wall surfaces.
- Multi-touch interfaces for 2D and 3D Object Manipulation (ISRG)
MULTI's table surface is very large
compared to other multi-touch tabletop systems (60" diagonal).
The system supports multi-touch interaction via
fingerlings that contain LEDs. To our knowledge, this made MULTI one of
the largest, if not the largest, multi-touch tabletop systems of its time.
The research in this project focuses on new techniques for the
manipulation of 2D and 3D content on multi-touch displays.
Moreover, we also perform careful experimental comparisons between
the new techniques and previously presented methods.
Other Human-Computer Interaction projects
- The Effects of Technology on Pointing and Tracking Performance (ISRG)
Many novel input devices have been presented for computer
systems. Beyond standard technology such as mice and pens, there are also
game console controllers (Nintendo Wii Remote, etc.) and many other
approaches. While each technology has different ergonomic aspects,
they also are based on different technical implementations.
This project investigates the
effect of technical factors such as delays (commonly called latency),
variations in delay (i.e. time jitter), spatial jitter, and several other
factors on pointing performance. The trade-offs between these factors
that are documented through our work
allow input device designers to make better choices for high-performance devices.
Recent work investigates how pursuit tracking, i.e. the ability to follow
a moving target on screen with an input device, is affected by the
mentioned technical factors.
A side project investigates how artificially introduced movement delays
over interesting features, such as word boundaries, affect pointing
performance in text selection.
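To illustrate how spatial jitter alone degrades pointing, the sketch below overlays zero-mean Gaussian noise on ideal selection endpoints and computes the effective target width (the ISO 9241-9 convention of 4.133 × SD of the endpoints); the Gaussian jitter model is an assumption for illustration only.

```python
import random
import statistics

def effective_width(endpoints):
    """Effective target width: 4.133 x the standard deviation of the
    1D selection endpoints (ISO 9241-9 convention)."""
    return 4.133 * statistics.pstdev(endpoints)

def add_spatial_jitter(endpoints, sd, seed=0):
    """Overlay zero-mean Gaussian sensor jitter (assumed model) on
    ideal 1D selection endpoints."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sd) for x in endpoints]
```

Larger jitter spreads the endpoints, enlarging the effective width and thereby lowering the effective throughput of the device.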
- The Effects of Errors on Human Performance in Text Entry (ISRG)
When entering text, humans make errors. However, sometimes the text entry
technology is also not perfect, such as when a button malfunctions.
In Human-Computer Interaction research, many models have been presented to
predict and understand human performance in (error-free) text entry. Almost
all models are specific to a technology or fail to account for human factors.
Moreover, the process of fixing errors and its effects on text entry
performance has not been studied.
Here, we first analyze real-life text
entry error correction behaviors. We then use our findings to develop a
new model to predict the cost of error correction for character-based
text entry technologies. We validate our model against quantities derived
from the literature, as well as with a user study. Our study shows that
the predicted and observed costs of error correction correspond well.
At the end, we discuss potential applications of our new model.
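For reference, the uncorrected error rate in text-entry studies is commonly derived from the minimum string distance (MSD) between the presented and transcribed strings. The sketch below implements that standard metric from the literature; it is not the new error-correction cost model itself.

```python
def min_string_distance(a, b):
    """Levenshtein (edit) distance, the basis of the MSD metric."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def msd_error_rate(presented, transcribed):
    """MSD error rate: edit distance normalized by the longer string."""
    return min_string_distance(presented, transcribed) / max(
        len(presented), len(transcribed))
```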
- User Interface Façades - Towards Fully Adaptable User Interfaces (ISRG, together with O. Chapuis & N. Roussel from inSitu, Paris-Sud, France)
This project presents a new
technology that lets end-users adapt the user interface of arbitrary
applications to their needs without resorting to coding. The user can
select one or more widgets and drag them into other windows to create new
GUIs (or drop them onto the desktop to create a new façade).
Alternatively, users can
replace widgets or change the mapping of mouse events to adapt any GUI
to their own requirements and patterns of usage. The current version of
User Interface Façades is based on the accessibility interface
provided by most GUI toolkits and a novel GUI server.
More information, videos, and source code, can be found on the
Façades WWW page.
Recent work builds on Façades to enhance the interaction with
common GUI elements.
- User Performance Modeling
and Cognitive Modeling for Text Entry Methods (ISRG)
One line of research in this project concentrates on models to predict text
entry rates for novice users. All other models focus on experts only,
which yields only information about peak speeds, which are often very
unrealistic. The predictions generated by the new model are
strikingly close to the results of user studies with novices.
Recent work in this project investigates new text entry methods for mobile
devices, both button-based (Less-Tap) as well as on touch screens.
Another line of research focuses on predictive models to simulate the
transition of novices to experts, i.e. learning of new text entry techniques.
- New User Interfaces for Cloning Objects (ISRG)
Cloning objects is a common operation in graphical user interfaces. One
example is calendar systems, where users commonly create and modify
recurring events, i.e. repeated clones of a single event. Inspired by
the calendar paradigm, we introduce a new cloning technique for 2D
drawing programs. This technique allows users to clone objects by first
selecting them and then dragging them to create clones along the dragged
path. Moreover, it allows editing the generated sequences of clones
similar to the editing of calendar events. Novel approaches for the
generation of clones of clones are also presented.
In a user study, the new clone creation technique has been shown to be
faster than both dialogs and smart duplication for most conditions.
For clone editing, our new technique compares also favourably against
previous work. Participants preferred the new techniques overall, too.
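One way to read "clones along the dragged path" is to place copies at regular arc-length intervals along the drag polyline. The sketch below illustrates that interpretation; it is not the exact algorithm of the published technique.

```python
import math

def clones_along_path(path, spacing):
    """Return clone positions spaced at regular arc-length intervals
    along a polyline (list of (x, y) points), starting at its origin."""
    out = [path[0]]
    carried = 0.0  # arc length traveled since the last clone
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - carried
        while d <= seg:
            t = d / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        carried = (carried + seg) % spacing
    return out
```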
- Pressure-based touch interaction (ISRG)
We explore a new method for pseudo-pressure detection on standard touchscreens
and its applications.
- Perception-Based Grouping (ISRG)
The direct manipulation and efficient selection of objects are an
integral part of modern user interfaces. Most systems support only
rectangle selection and shift-clicking for group selection. In this project
we investigate a new selection technique, which is based on the way human
perception naturally groups objects.
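A minimal stand-in for such perception-based grouping is transitive clustering of objects whose pairwise distance falls below a threshold, echoing the Gestalt proximity principle. The project's perceptual model is richer; this sketch only conveys the flavor.

```python
def proximity_groups(points, radius):
    """Group 2D points transitively: two points join the same group
    whenever they lie within `radius` of each other (union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= radius * radius:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```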
- Novel Layout Mechanisms for Graphical User Interfaces (ISRG)
The way various user interface elements (i.e., widgets) are placed
inside a window is described via layout mechanisms. This becomes
particularly relevant, when the size of the window is changed, as the
layout mechanism also incorporates the resizing behaviour.
Commonly used layout methods are fairly simplistic and have their
limitations. While there are very powerful methods to define layouts,
the associated methods and programming interfaces are hard to understand and
graphical user interface builders for such layouts are difficult to use.
This work investigates a new, easy-to-understand layout mechanism and
evaluates its implementation. Part of the work focuses on a new user
interface builder system that includes a novel form of preview window
to illustrate the design choices immediately to the user, while still
enabling easy access to all necessary parameters.
- Enhanced Methods for Document Differencing and Versioning (ISRG)
Comparing and selecting text from multiple versions of a document
is a common task in collaborative scenarios. Similarly, collaboratively updating
diagrams, such as org charts, UML diagrams, and course prerequisite
visualizations, again involves visual comparisons of changes followed by
selection of a new "final" version.
Even single users benefit from
versioning facilities when working on any form of document as they can easily
see what they have changed previously.
Text and diagram versioning methods are not well documented in the
scientific literature, even though implementations of text versioning
are abundant in commercial and non-commercial software.
Our work presents several new methods for text and diagram versioning.
We validated the results with user studies.
This research is part of the GRAND (Graphics, Animation and New Media) Network of Centres of Excellence.
- Context-Sensitive Cut, Copy and Paste (ISRG)
Creating and editing source code are tedious and error-prone
processes. One important source of errors in editing programs is
the failure to correctly adapt a block of copied code to a new
context. This occurs because all semantic dependencies to the
surrounding code need to be adapted in the new context
and it is easy to forget some. Conversely, this also makes
such errors hard to find.
Our research investigates a new method for identifying some common
types of errors in cut, copy and paste operations. The method
analyzes the context of the original block of code and tries to
match it with the context in the new location to find such errors.
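A crude approximation of such a context check is to flag identifiers that the pasted block uses but the destination context never mentions. The actual method performs a more careful semantic analysis; this regex-based sketch is illustrative only.

```python
import re

IDENT = re.compile(r'\b[A-Za-z_]\w*\b')

def unresolved_identifiers(pasted_code, context_code):
    """Return identifiers appearing in the pasted block but nowhere in
    the destination context - candidates for adaptation errors."""
    used = set(IDENT.findall(pasted_code))
    available = set(IDENT.findall(context_code))
    return sorted(used - available)
```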
- On- and Off-Line User Interfaces for Collaborative Cloud Services (ISRG)
Cloud-based services have become prevalent on the Internet. However,
the usability aspects of these services are often in their infancy.
Here, we describe a vision for user interfaces of cloud-based systems that permit seamless collaboration and provide also on- and off-line access to data. All individual components of this vision are currently available in various systems, but the sum of the components will satisfy user needs much more comprehensively compared to the current state of the art.
- Behavioural Training with Mobile Computers (ISRG, together with P. Ritvo)
This project investigates how a mobile computing platform can be used
to help people adhere to, e.g., a diet. A new
version of this system is currently in the works. It will be based
on Web 2.0 services and feature support for offline access to data.
- Pen-based Computing (ISRG)
Tablet PCs and personal digital assistants (PDAs) are becoming more and
more popular. However, interaction techniques for manipulating objects in
drawings/designs/diagrams are often based on ideas from mouse-based interfaces.
We are performing research into steering motions and explore new techniques
for the interactive selection and manipulation of arbitrary groups of objects.
This will make work on large scale diagrams/designs/drawings easier.
Other work investigated the differences between drawing with a mouse, touch, and a stylus.
Other Virtual Reality projects
- Immersive Virtual Reality Systems
We created IVY, a six-sided CAVE
(together with M. Jenkin, R. Allison,
and others; VGR, CVR),
which is a room where every side
(including the floor and the ceiling) displays computer generated
imagery. The immersive device, called IVY, was completed in 2002.
Novel aspects include a ventilation system for the enclosed space and
a novel tracking system.
Recent work resulted in TIVS at SFU, a new temporary CAVE system that does not consume permanent
floor space, while still taking less than 5 minutes to activate.
The system cost is much less than $10k, which makes this a very cheap,
if not the cheapest, CAVE installation.
- 6 DOF Tracking (CVR together with R. Allison)
The pose of an object in space is often described by
six numbers: three to quantify the position and three for the
rotation.
Following an object is thus frequently referred to as tracking
6 degrees of freedom (6 DOF).
The Hedgehog is a new kind of 6 DOF tracking device, which features
a large number of computer controlled laser diodes
pointing outwards to project unique spots onto the walls as well as
cameras outside IVY (or a CAVE) to track these spots. From these spots the
position and orientation of the tracking device is computed in real-time.
This in turn can be used to project the correct images for the current
position of the user's head, to which the tracking system is attached.
Translations can be tracked at least as accurately as current commercially
available solutions. Rotations can be tracked 10 times more
accurately than other systems. Hence, this technology greatly improves
immersion in Virtual Reality and Augmented Reality systems.
We are working to improve this technology further to make it more
generally usable, depending on the availability of funding.
- Guidelines for Evaluation and Presentation of VR systems
(together with M. Latoschik)
We analyze the presentation and evaluation of relevant scientific research in real-time
interactive systems (RIS), which includes Virtual, Mixed, and Augmented Reality
(VR, MR, and AR) and advanced Human-Computer Interaction systems.
We identify different methods for a structured approach to the description and
evaluation of systems and their properties, including commonly found best practices
as well as dos and don'ts. The work is targeted at authors as well as reviewers
to guide both groups in the presentation as well as the appraisal of system engineering work.
- Network Lag Compensation (ISRG together with R. Allison)
In collaborative Virtual Reality systems and networked games, it is
necessary to transfer information about the state of the world between
multiple systems. Such
transfer is associated with transmission time-lag, and humans are
reasonably good at dealing with a constant lag. However, freely
accessible public networks exhibit significant variation in transmission
lag due to the presence of unpredictable traffic flows. Such variations
affect human performance very strongly. This
makes public networks often unsuitable for real-time collaborative work.
We recently presented a new predictive lag compensation scheme, which
evens out these variations in lag in an optimal manner. Results show that
a prototype implementation performs close to the theoretical optimum.
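The basic idea behind such lag compensation can be illustrated with a first-order (dead-reckoning) extrapolator that projects the last observed motion ahead by the current lag. The published scheme is more sophisticated, notably in how it evens out lag variation; this is only a sketch.

```python
def predict_position(samples, lead_time):
    """Extrapolate the newest of the (time, position) samples forward
    by lead_time, assuming constant velocity (dead reckoning)."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * lead_time
```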
Computer Graphics projects
- Better User Interfaces for 3D Scanning
Scanning of objects to produce 3D models is becoming more commonplace
as the required hardware is becoming more widely available.
This involves obtaining multiple scans of an object to create a complete 3D model.
In this project we investigate better user interfaces for selecting the next view
to scan from. The starting point is better visualizations of unscanned regions.
Other research interests
- Low-overhead database system (RZL)
- Fast access to compressed databases (RZL)
These are two successful projects from my commercial background. We are
evaluating the performance of these database systems and comparing them
with other implementations.
Inactive research projects
Human-Computer Interaction, 3D and Spatial User Interfaces, Virtual Reality
- SESAME - Easy-to-Use Conceptual Design System (ISRG)
This project investigates a new conceptual design system. The main goal
is to enable even naive users to quickly create and modify 3D content to
communicate design ideas. The SESAME (Sketch, Extrude, Sculpt, and
Manipulate Easily) system is based on
the solid modeling paradigm and requires only a 2D pointing device. User
studies have shown that naive users can quickly learn to use this system
to generate interesting content. A comparison of SESAME with sketching on
paper showed no significant differences in terms of creativity, but a
comparison with a user group familiar with standard CAD tools clearly
shows that SESAME encourages more creativity than current tools.
A demo version
of the SESAME system is also available.
- Virtual LEGO - A 3D Construction System (ISRG)
Lego blocks are a simple way to create 3D shapes. The Virtual Lego system
introduces simple techniques to quickly create and manipulate
Lego models. User studies showed that users without any 3D experience
could quickly create 3D content with this system.
You can also download a demo version of
the Lego system. A user study of a haptics version of Lego has
yielded interesting results.
- HDR Systems - High Dynamic Range Video, Displays,
(CVR, ISRG, together with others)
Real scenes and real photographic images exhibit a much larger dynamic
range than current technology provides for. This project investigated
how images with high dynamic range (HDR) can be acquired and how images with
high dynamic range can be displayed on current hardware. One result is a
system that can acquire HDR images at video rates. A collaborative
project led by UBC and with G. Ward and other researchers at McGill
and York University as well as several companies resulted in a new
HDR display system (color-coded HDR images by G. Ward; left: input data,
middle: image of standard monitor, right: image of HDR display).
The technologies were commercialized by a startup;
Dolby recently acquired this company.
Other research included:
- New high dynamic range technologies:
a first high-dynamic-range projector was presented recently;
images showing details of the HDR projection are available.
- User interface issues for high dynamic range displays.
- MIVE - Multi-modal User Interface for Virtual
Environments (ISRG, VGR)
The creation of object models for computer graphics applications, such
as interior design or the generation of animations is a labor-intensive
process. Today's computer aided design (CAD) programs address the
problem of creating single geometric object models quite well. But
almost all users find common tasks, such as quickly furnishing a room,
hard to accomplish.
This project investigates 3D interaction techniques that are easy to
use, yet allow users to quickly construct 3D environments. The user
interface is evaluated with user tests. The results indicate
that users take less than half the time with the new system.
Individual publications focus on the following issues:
There is also a demo version of
the MIVE system.
- [SS02] describes the
object group interaction techniques.
- [SSS01b] describes
a detailed evaluation of the interaction techniques.
- [SSS01a] describes
an evaluation with complex tasks (scene creation & modification).
- [SS01b] describes
details of how constraints work in the MIVE system and also discusses
the automatic creation of constraints.
- [SS01a] describes an
evaluation of the semantic constraints.
- [SLS00a] describes
an attempt to integrate an intelligent assistant.
- [GS99] describes the
first implementation of semantic constraints.
Computer Graphics and Image-Based Modeling
- New Planning Methods for Image-Based Modeling (VGR)
Several commercial solutions exist for scanning of 3D objects. The
result is a geometric model of the object. Although impressive results
have been demonstrated, user intervention is still required to generate
complete object models.
This project addresses the problem with techniques that check during
acquisition for missing or badly sampled parts and direct the
acquisition device to capture new views of such parts. The goal is to
create an automatic acquisition system that can also be used in
applications such as 3D faxing.
Furthermore, the research is targeted towards real-time acquisition,
merging and planning. First results show that with such techniques it
is possible to process even very spacious environments (e.g., a level
of the popular game Quake) in reasonable time frames.
- How many images are needed for Image-based Modeling? (IBR)
Today many systems exist to generate geometric models of existing
scenes and objects. One way to capture surface texture data is to record
a series of images that, collectively, captures all visible surfaces of
the object. Finding good viewpoints for this task is not easy.
This project presents a new heuristic method to find a good set of
viewpoints for a given geometric model. Taking images from the computed
viewpoints will show every visible part of every surface at least once.
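Choosing a small set of viewpoints that together see every surface is a set-cover problem, so a greedy heuristic is a natural sketch: repeatedly pick the candidate that sees the most not-yet-covered surfaces. The data layout below (viewpoint → set of visible surface ids) is hypothetical, not the paper's representation.

```python
def greedy_viewpoints(visible, surfaces):
    """Greedy set cover: `visible` maps a candidate viewpoint to the
    set of surface ids it sees; return viewpoints covering `surfaces`."""
    uncovered = set(surfaces)
    chosen = []
    while uncovered:
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        if not visible[best] & uncovered:
            break  # remaining surfaces visible from no candidate
        chosen.append(best)
        uncovered -= visible[best]
    return chosen
```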
- Natural Phenomena
- Editing of Fractal Terrains (VGR)
This project explored new ways to modify fractal terrains.
- Rendering Clouds (VGR) [ES01]
This project investigated new ways to render natural objects.
- Real-time Rendering
- Image-Based Rendering
- High-Quality, Real-Time Image-based Rendering (VGR)
(Sergey Parilov's Master's thesis, 2002)
Image-Based Rendering (IBR) uses images to create new images.
This new paradigm has demonstrated many advantages over conventional
computer graphics methods. Based on a novel visibility method, we
created an IBR system that generates images of scenes with billions
of samples at real-time speeds (>20 Hz).
Ultimately, we investigated the trade-off between image quality and
rendering speed by taking the capabilities of the human visual system
into account.
- Real-Time Rendering of Penumbras (VGR)
Shadows significantly enhance the realism of
images. This project presented a new method to geometrically compute
area shadows (penumbras) in real-time.
- Massive Model
Rendering (WALK) [ACW+99]
Visualization of very complex models in real-time (a 13 million
triangle power plant @ 5-20 Hz). I designed and
implemented the distant geometry replacement technique to ensure
scalability (together with K. Hoff). The technique employs TDMs
(Textured Depth Meshes), an image-based rendering primitive.
- Planar Reflections with Image-based Rendering techniques
Image Cache (GUP) [SS96]
Scenes with very large polygon counts cannot be rendered in real-time
on current graphics hardware. This paper presents an image-based
rendering technique, where rendering effort is independent of polygon
count. An image cache stores previously rendered images of parts of the
static scene. Error bounds control the re-use of these images.
Hierarchical combination of images provides scalability.
Shade et al. developed an almost identical technique in parallel and independently of our work.
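The error-bound idea can be sketched in 2D: keep reusing a cached image of a scene part while the angular discrepancy it induces at the current eye point stays below a threshold. The paper's geometric test differs in detail; this is a simplified illustration.

```python
import math

def reuse_cached_image(cache_center, cached_eye, current_eye,
                       max_error_deg=1.0):
    """Decide whether a cached image (rendered from cached_eye, showing
    the scene part around cache_center) may be reused at current_eye,
    based on the change in viewing angle (2D simplification)."""
    def angle(eye):
        return math.atan2(cache_center[1] - eye[1],
                          cache_center[0] - eye[0])
    error = abs(angle(current_eye) - angle(cached_eye))
    return math.degrees(error) <= max_error_deg
```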
- Rendering for Multiple Projectors and Multisurface Displays (IBR) [RCWS98b][RCWS98a]
The tech report includes a performance analysis.
- Interactive Rendering of
Global Illumination Solution for Glossy Surfaces (WALK)[SB97]
This contribution introduces the first interactive display of a full
global illumination solution of an environment with glossy surfaces, i.e.,
surfaces that are neither diffuse nor perfectly specular. The method is
best suited for low glossy surfaces found in many office environments.
- LODs - Geometric Approximations (GUP) [SS95a]
For faster rendering, a new method to compute LODs (Levels of Detail)
is introduced, which reduces the complexity of geometric models while
preserving their appearance.
- Advanced Global illumination
- Computation of a Global Illumination Solution with Glossy
Surfaces [St98c] (GUP)
- Optimized Local Pass [St96a]
The visual quality of a displayed radiosity solution often suffers from
deficiencies in the underlying mesh. The local pass technique
re-computes the illumination at each visible surface point. This avoids
visual artifacts but involves considerable computational effort. The
contribution speeds the local pass by stochastically sampling only the
most important contributors to the illumination of a surface point. One
advantage of this technique is that it generalizes to non diffuse
radiosity solutions, too.
- Local Pass [SS95b] (GUP)
- Photo-realistic Rendering
Map Techniques (GUP)
- Ray tracing
- Free-Form Surfaces (APM) [St98b]
Based on previous work together with W. Barth [BS93] this paper presents the
first ray tracing method for triangular free form surfaces, which are
becoming more common in CAD applications. Another important contribution
introduces a compact and efficient description of complex trimming
curves such as those created by combining objects described by free form
surfaces. Furthermore, the paper discusses the basis for an efficient
triangulation method that converts trimmed triangular free form surfaces
to planar triangles.
- Optimizations (APM,GUP) [ST94]
This work presents an optimization for the traversal of bounding volume
hierarchies, which speeds up the traversal by at least 50%.
Another speed-up method based on subdividing direction space is also
presented.
- Parallel Radiosity, Parallel Visibility, Dynamic Load Balancing
These publications discuss a massively parallel method to compute
radiosity solutions on distributed memory machines with hundreds of
processors. The key method is a distributed visibility computation
technique that consumes less bandwidth compared to other approaches. An
efficient dynamic load balancing technique is another important aspect
of the presented approach.
- Adaptive Discontinuity Meshing (GUP) [St94b][St92b]
- Radiosity with Voronoi Diagrams (APM) [St92a]
- Point Clouds for Bounding Volumes (GUP) [St96b]
- Hemispherical Projections (GUP) [St95a]
- Object oriented rendering systems (APM, GUP): Flirt [STS93],
FXFire [St93], Generic
- Automated training of vision systems (GUP) [BBS95]
Faster training of a robot by simulating the images it sees during operation.
Companies and Start-Ups
I have been involved in the following companies:
Please refer to the News Coverage Page.
Funding and other support was provided by the following entities:
List of Research Labs
- ISRG (Interactive Systems Research Group) at the Dept. of Computer Science and Engineering,
York University in Toronto, Canada.
- VGR (Vision, Graphics and Robotics) group at the Dept. of Computer Science and Engineering,
York University in Toronto, Canada.
- CVR (Centre for Vision Research) at York
University in Toronto, Canada.
- WALK (Walkthrough) group and IBR (Image-Based Rendering)
group at the Dept. of Computer
Science, UNC (University of North
Carolina) in Chapel Hill, USA.
- GUP (Computer Graphics and Parallel Processing) group at the Institute of
Telematics and Technical Computer Science, Johannes Kepler University in Linz, Austria.
- APM (Algorithms and Programming Methodology) group at the Institute of Computer Graphics, Technical University of Vienna, Austria.
- RZL Software
GesmbH - software for tax consultants in Ried, Austria.