Welcome to GrUVi @ CS.SFU!

We are an interdisciplinary team of researchers working in visual computing, in particular computer graphics and computer vision. Current areas of focus include 3D and robotic vision, 3D printing and content creation, animation, AR/VR, generative AI, geometric and image-based modelling, language and 3D, machine learning, natural phenomena, and shape analysis. Our research frequently appears in top venues such as SIGGRAPH, CVPR, and ICCV (as of June 2023, we rank #14 in the world in terms of top publications in visual computing), and we collaborate widely with industry and academia (e.g., Adobe Research, Amazon, Autodesk, Google, MSRA, Princeton, Stanford, Tel Aviv, and Washington). Our faculty and students have won numerous honours and awards, including FRSC, the SIGGRAPH Outstanding Doctoral Dissertation Award, the Alain Fournier Best Thesis Award, the CS-Can|Info-Can Outstanding Young Researcher Award, a Google Faculty Award, a Google PhD Fellowship, a Borealis AI Fellowship, TR35@Singapore, CHCCS Achievement and Early Career Researcher Awards, NSERC Discovery Accelerator Awards, and several best paper awards from CVPR, ECCV, SCA, SGP, etc. GrUVi alumni have gone on to faculty positions in Canada, the US, and Asia, while others now work at companies including Amazon, Apple, EA, Facebook (Meta), Google, IBM, and Microsoft.


NeurIPS 2023: Spotlight on GrUVi Lab

November 6, 2023

NeurIPS, the premier conference on machine learning, will be held in New Orleans this year (Dec 10-16). The GrUVi lab will once again have a strong showing at NeurIPS, with 7 technical papers and 1 datasets-and-benchmarks paper!

Please refer to our publication page for more details.


Talk by Dr. Taras Kucherenko from EA

November 3, 2023

Click this link to see the talk replay.

Title: Co-speech gesture generation

Abstract: Gestures accompanying speech are essential to natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation. It is considered an enabling technology in film, games, virtual social spaces, and interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This talk will review co-speech gesture generation research development, focusing on deep generative models.

Bio: Dr. Taras Kucherenko is currently a Research Scientist at Electronic Arts. He completed his Ph.D. at the KTH Royal Institute of Technology in Stockholm in 2021. His research is on machine learning models for non-verbal behavior generation, such as hand gestures and facial expressions. He received the ICMI 2020 and IVA 2020 Best Paper Awards for his research papers. Taras was also the main organizer of the GENEA (Generation and Evaluation of Non-verbal Behavior for Embodied Agents) Workshop and Challenge in 2020, 2021, 2022, and 2023.


Talk by Prof. Tat-Jun Chin from University of Adelaide

October 27, 2023

Click this link to see the talk replay.

Title: Quantum Computing for Robust Fitting

Abstract: Many computer vision applications need to recover structure from imperfect measurements of the real world. The task is often solved by robustly fitting a geometric model onto noisy and outlier-contaminated data. However, relatively recent theoretical analyses indicate that many commonly used formulations of robust fitting in computer vision are not amenable to tractable solution and approximation. In this work, we explore the usage of quantum computers for robust fitting. To do so, we examine the feasibility of two types of quantum computer technologies—universal gate quantum computers and quantum annealers—to solve robust fitting. Novel algorithms that are amenable to the quantum machines have been developed, and experimental results on current noisy intermediate-scale quantum (NISQ) computers will be reported. Our work thus proposes one of the first quantum treatments of robust fitting for computer vision.
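
To make the problem concrete, here is a minimal sketch of classical consensus maximization (RANSAC) for 2D line fitting, the kind of robust fitting formulation the abstract refers to. This is an editorial illustration, not the quantum algorithms from the talk:

```python
# Minimal RANSAC sketch: robustly fit a 2D line to outlier-contaminated points
# by maximizing the consensus (inlier) set -- the classical baseline for the
# robust fitting problem discussed in the abstract.
import numpy as np

def ransac_line(points, n_iters=1000, inlier_thresh=0.05, seed=0):
    """Return ((a, b, c), n_inliers) for the line ax + by + c = 0
    with the largest consensus set found among random minimal samples."""
    rng = np.random.default_rng(seed)
    best_line, best_inliers = None, 0
    for _ in range(n_iters):
        # Minimal sample: two distinct points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = np.hypot(a, b)
        if norm == 0.0:  # degenerate sample (coincident points)
            continue
        # Consensus: count points within inlier_thresh of the candidate line.
        dists = np.abs(points @ np.array([a, b]) + c) / norm
        n_inliers = int((dists < inlier_thresh).sum())
        if n_inliers > best_inliers:
            best_line, best_inliers = (a / norm, b / norm, c / norm), n_inliers
    return best_line, best_inliers

# Toy usage: 200 points near y = x plus 100 uniform outliers.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 200)
inliers = np.column_stack([t, t + rng.normal(0, 0.01, 200)])
outliers = rng.uniform(-1, 1, (100, 2))
line, count = ransac_line(np.vstack([inliers, outliers]))
```

Maximizing the inlier count like this is exactly the consensus-maximization objective whose intractability motivates exploring alternative compute paradigms.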

Bio: Tat-Jun (TJ) Chin is the SmartSat CRC Professorial Chair of Sentient Satellites at The University of Adelaide. He received his PhD in Computer Systems Engineering from Monash University in 2007, which was partly supported by the Endeavour Australia-Asia Award, and a Bachelor in Mechatronics Engineering from Universiti Teknologi Malaysia in 2004, where he won the Vice Chancellor’s Award. TJ’s research interest lies in computer vision and machine learning for space applications. He has published close to 200 research articles and has won several awards for his research, including a CVPR award (2015), a BMVC award (2018), Best of ECCV (2018), three DST Awards (2015, 2017, 2021), an IAPR Award (2019), and an RAL Best Paper Award (2021). TJ pioneered the AI4Space Workshop series and is an Associate Editor of the International Journal of Robotics Research (IJRR) and the Journal of Mathematical Imaging and Vision (JMIV). He was a Finalist in the Academic of the Year category at the Australian Space Awards 2021.


Talk by Prof. Leonid Sigal from UBC

October 20, 2023

Click this link to see the talk replay.

Title: Efficient, Less-biased and Creative Visual Learning

Abstract: In this talk I will discuss recent methods from my group that focus on addressing some of the core challenges of current visual and multi-modal cognition, including efficient learning, bias, and user-controlled generation. Centering on these larger themes, I will talk about a number of strategies (and corresponding papers) that we developed to address these challenges. I will start by discussing transfer learning techniques in the context of semi-supervised object detection and segmentation, highlighting a model that is applicable to a range of supervision: from zero to a few instance-level samples per novel class. I will then talk about our recent work on building a foundational image representation model by combining two successful strategies of masking and sequential token prediction. I will also discuss some of our work on scene graph generation which, in addition to improving overall performance, allows for scalable inference and the ability to control data bias (by trading off major improvements on rare classes for minor declines on the most common classes). The talk will end with some of our recent work on generative modeling, which focuses on novel-view synthesis and language-conditioned diffusion-based story generation. The core of the latter approach is a visual memory that implicitly captures the actor and background context across the generated frames. Sentence-conditioned soft attention over the memories enables effective reference resolution and learns to maintain scene and actor consistency when needed.

Biography: Prof. Leonid Sigal is a Professor at the University of British Columbia (UBC). He was appointed CIFAR AI Chair at the Vector Institute in 2019 and an NSERC Tier 2 Canada Research Chair in Computer Vision and Machine Learning in 2018. Prior to this, he was a Senior Research Scientist, and a group lead, at Disney Research. He completed his Ph.D. at Brown University in 2008; received his B.Sc. degrees in Computer Science and Mathematics from Boston University in 1999, his M.A. from Boston University in 1999, and his M.S. from Brown University in 2003. He was a Postdoctoral Researcher at the University of Toronto from 2007 to 2009. Leonid’s research interests lie in the areas of computer vision, machine learning, and computer graphics, with an emphasis on approaches for visual and multi-modal representation learning, recognition, understanding, and generative modeling. He has won a number of prestigious research awards, including a Killam Accelerator Fellowship in 2021, and has published over 100 papers in venues such as CVPR, ICCV, ECCV, NeurIPS, ICLR, and SIGGRAPH.


Talk by Prof. Li Cheng from University of Alberta

October 13, 2023

Click this link to see the talk replay.

Title: Visual Human Motion Analysis

Abstract: Recent advances in imaging sensors and deep learning techniques have opened the door to many interesting applications for the visual analysis of human motions. In this talk, I will discuss our research efforts toward addressing the related tasks of 3D human motion synthesis, pose and shape estimation from images and videos, and visual action quality assessment. Looking forward, our results could be applied to everyday-life scenarios such as natural user interfaces, AR/VR, robotics, and gaming, among others.

Bio: Li Cheng is a professor in the Department of Electrical and Computer Engineering, University of Alberta. He is an associate editor of IEEE Transactions on Multimedia and the Pattern Recognition journal. Prior to joining the University of Alberta, he worked at A*STAR, Singapore, TTI-Chicago, USA, and NICTA, Australia. His current research interests are mainly in human motion analysis, mobile and robot vision, and machine learning. More details can be found at http://www.ece.ualberta.ca/~lcheng5/.


ICCV 2023: Spotlight on GrUVi Lab

September 18, 2023

ICCV, the premier conference on computer vision, will be held in Paris this year (Oct 2-6). The GrUVi lab will once again have a strong showing at ICCV, with 6 technical papers and 3 co-organized workshops! Also, Prof. Yasutaka Furukawa serves as a program chair for this year’s ICCV!

For the workshops, Prof. Richard Zhang co-organizes the 3D Vision and Modeling Challenges in eCommerce workshop, and Prof. Angel Chang co-organizes the 3rd Workshop on Language for 3D Scenes and CLVL: 5th Workshop on Closing the Loop between Vision and Language. Also, Prof. Manolis Savva will give a talk at the 1st Workshop on Open-Vocabulary 3D Scene Understanding.

And here are the 6 accepted papers:

Multi3DRefer: Grounding Text Description to Multiple 3D Objects

DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion

SKED: Sketch-guided Text-based 3D Editing

PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects

HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling

UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding

Congrats to the authors!


Talk by Mikaela Uy from Stanford

June 26, 2023

The recording is available at this link.

Title: Towards Controllable 3D Content Creation by leveraging Geometric Priors

Abstract: The growing popularity of extended reality is pushing demand for the automatic creation and synthesis of new 3D content, which would otherwise be a tedious and laborious process. A key property needed to make 3D content creation useful is user controllability, as it allows one to realize specific ideas. User control can take various forms, e.g., target scans, input images, or programmatic edits. In this talk, I will touch on works that enable user control through i) object parts and ii) sparse scene images, by leveraging geometric priors. The former utilizes object semantic priors by proposing a novel shape space factorization through an introduced cross diffusion network, which enabled multiple applications in both shape generation and editing. The latter leverages pretrained models of large 2D datasets for sparse-view 3D NeRF reconstruction of scenes by learning a distribution of geometry represented as ambiguity-aware depth estimates. As an add-on, we will also briefly revisit the volume rendering equation in NeRFs and reformulate it with a piecewise linear density, which alleviates underlying issues caused by quadrature instability.
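
For readers unfamiliar with the volume rendering equation mentioned above, here is its standard NeRF form (an editorial reference, not material from the talk); the usual quadrature approximates it assuming piecewise-constant density between samples, which is the assumption the talk's piecewise linear reformulation revisits:

```latex
% Expected color of a camera ray r(t) = o + t d, with volume density sigma
% and view-dependent color c, accumulated between near/far bounds t_n, t_f:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,\mathrm{d}t,
\quad\text{where}\quad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,\mathrm{d}s\Big).
```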

Bio: Mika is a fourth-year PhD student at Stanford advised by Leo Guibas. Her research focuses on the representation and generation of objects/scenes for user-controllable 3D content creation. She was a research intern at Adobe, Autodesk and now Google, and is generously supported by an Apple AI/ML PhD Fellowship and a Snap Research Fellowship.


A Trilogy of Character Animation Research in SIGGRAPH 2023

August 4, 2023

We are proud to highlight that three of Prof. Jason Peng's research papers will be presented at the upcoming SIGGRAPH 2023. These papers mark advances in physics-based character animation.

Below are titles and links to the related project pages:

Learning Physically Simulated Tennis Skills from Broadcast Videos https://xbpeng.github.io/projects/Vid2Player3D/index.html

Synthesizing Physical Character-Scene Interactions https://xbpeng.github.io/projects/InterPhys/index.html

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters https://xbpeng.github.io/projects/CALM/index.html

Note: The SIGGRAPH conference, short for Special Interest Group on Computer GRAPHics and Interactive Techniques, is the world's premier annual event for showcasing the latest innovations in computer graphics and interactive techniques. It brings together researchers, artists, developers, filmmakers, scientists, and business professionals from around the globe. The conference offers a unique blend of educational sessions, hands-on workshops, and exhibitions of cutting-edge technology and applications.


GrUVi making waves at CVPR 2023

June 14, 2023

CVPR, the premier conference on computer vision, will be held in Vancouver this year (June 18-22). The GrUVi lab will once again have an incredible showing at CVPR, with 12 technical papers, 6 invited talks, and 4 co-organized workshops!

Conference and workshop co-organization

Former GrUVi Professor Greg Mori serves as one of the four general conference chairs for the main CVPR conference! Prof. Angel Chang, as one of the social activity chairs, is helping to organize the speed mentoring sessions.

In addition, we have exciting workshops and challenges that are organized by GrUVi members as well:

Computer Vision in the Built Environment workshop - co-organized by Prof. Yasutaka Furukawa

Second Workshop on Structural and Compositional Learning on 3D Data (Struco3D) - co-organized by Prof. Richard Zhang

ScanNet Indoor Scene Understanding Challenge - co-organized by Prof. Angel X. Chang and Prof. Manolis Savva

Embodied AI Workshop, featuring a variety of challenges including the Multi-Object Navigation (MultiON) challenge - co-organized by Sonia Raychaudhuri, Angel Chang, and Manolis Savva

Workshop talks

Prof. Andrea Tagliasacchi will give invited keynote talks at both Struco3D and the Generative Models for Computer Vision workshop (both on June 18th). He will also give a spotlight talk at the Area Chair workshop on Saturday.

At the Women in Computer Vision workshop, Prof. Angel Chang will give a talk on June 19th. She is also invited to give talks at the workshops on 3D Vision and Robotics (June 18th), Compositional 3D Vision (June 18th), and Open-Domain Reasoning Under Multi-Modal Settings (June 19th).

Technical papers

Congratulations to all authors of the accepted papers! The full list of papers featured at CVPR 2023 can be accessed here.


More News

Talk by Prof. Emily Whiting from Boston University

June 9, 2023

Title: Computational Fabrication: Design of Geometry and Materials from Simulation to Physical

Abstract: Advancements in rapid prototyping technologies are closing the gap between what we can simulate computationally and what we can build. The effect is opening up new design domains for creating objects with novel functionality and introducing experimental manufacturing processes. My work applies traditional computer graphics techniques, leveraging tools in simulation, animation, and rendering for making functional objects in the physical world. In this talk I will present several lines of inquiry in computational fabrication workflows. On the topic of manufacturing processes, I will discuss work on fabric formwork for creating plaster-cast sculptures. In the domain of design, I will discuss mechanics-based strategies for custom-fit knitted garments and discovery of 3D printed structures. Finally, I will cover developments on the design of tactile illustrations that depict 3D objects for blind and visually impaired individuals.

Bio: Emily Whiting is an Associate Professor of Computer Science at Boston University. Her research in Computer Graphics combines digital geometry processing, engineering mechanics, and rapid prototyping to explore the development of computational tools for designing functionally valid and fabrication-ready real-world objects. Her lab's work builds on collaborations in a broad range of fields, including architecture, human-computer interaction, and accessible technologies. She received her PhD from MIT in Computer Graphics and Building Technology. She is the recipient of an NSF CAREER Award and a Sloan Research Fellowship. She was General Chair of the ACM Symposium on Computational Fabrication in 2020 and 2021.


Talk by Silvia Sellan from the University of Toronto

June 2, 2023

This week in the VCR seminar, we had guest speaker Silvia Sellan from the University of Toronto, who gave a talk on various geometry problems. See the recording here: https://stream.sfu.ca/Media/Play/10fc4229c5cf43e8b7f2efa29689cd241d


Talk by Prof. Daniel Weiskopf from the University of Stuttgart

May 26, 2023

This week in the VCR seminar, we had guest speaker Daniel Weiskopf from the University of Stuttgart, who gave a talk on multidimensional visualization (please see below for information about the talk). See the recording here: https://stream.sfu.ca/Media/Play/a0b66c0d023f48f5bbe1d7790fa1b8681d

Title: Multidimensional Visualization

Abstract: Multidimensional data analysis is of broad interest for a wide range of applications. In this talk, I discuss visualization approaches that support the analysis of such data. I start with a brief overview of the field, a conceptual model, and a discussion of visualization strategies. This part is accompanied by a few examples of recent advancements, with a focus on results from my own work. In the second part, I detail techniques that enrich basic visual mappings like scatterplots, parallel coordinates, or plots of dimensionality reduction by incorporating local correlation analysis. I also discuss sampling issues in multidimensional visualization, and how we can extend it to uncertainty visualization. The talk closes with an outlook on future research directions.
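
As a small illustration of one of the basic visual mappings mentioned in the abstract (an editorial sketch, not from the talk), the following renders a parallel-coordinates plot with pandas and matplotlib; the iris table is just a stand-in for any multidimensional dataset:

```python
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_iris

# Load a small multidimensional table: 4 numeric dimensions plus a class label.
iris = load_iris(as_frame=True)
df = iris.frame
df["target"] = df["target"].map(dict(enumerate(iris.target_names)))

# Each row becomes a polyline across one vertical axis per dimension;
# similar rows bundle together, so clusters are visible at a glance.
parallel_coordinates(df, class_column="target", alpha=0.4)
plt.title("Parallel coordinates over a 4-dimensional dataset")
plt.tight_layout()
plt.show()
```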

Bio: Daniel Weiskopf is a professor and one of the directors of the Visualization Research Center (VISUS) and acting director of the Institute for Visualization and Interactive Systems (VIS), both at the University of Stuttgart, Germany. He received his Dr. rer. nat. (PhD) degree in physics from the University of Tübingen, Germany (2001), and his Habilitation degree in computer science from the University of Stuttgart, Germany (2005). His research interests include visualization, visual analytics, eye tracking, human-computer interaction, computer graphics, augmented and virtual reality, and special and general relativity. He is the spokesperson of the DFG-funded Collaborative Research Center SFB/Transregio 161 “Quantitative Methods for Visual Computing” (www.sfbtrr161.de), which covers basic research on visualization, including multidimensional visualization.


Prof. Pat Hanrahan gave his Turing Award Lecture at SFU CS

May 18, 2023

The recording is available at this link (an SFU account is needed).

Here is some information about the lecture.

Title: Shading Languages and the Emergence of Programmable Graphics Systems

Abstract: A major challenge in using computer graphics for movies and games is to create a rendering system that can produce realistic pictures of a virtual world. The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day. The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action.

Pixar's RenderMan was created for this purpose, and has been widely used in feature film production. A key innovation in the system is to use a shading language to procedurally describe appearance. Shading languages were subsequently extended to run in real-time on graphics processing units (GPUs), and now shading languages are widely used in game engines. The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language. This enabled a wide variety of applications in high performance computing, such as physical simulation and machine learning, to be run on GPUs. Nowadays, GPUs are the fastest computers in the world. This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.
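
To illustrate the data-parallel idea behind shading (an editorial sketch, not code from the lecture), here is a Lambertian shading function written with NumPy array operations; the same per-pixel function is applied independently across the whole image, which is exactly the structure that maps onto GPU shader execution:

```python
import numpy as np

def lambert_shade(normals, light_dir, albedo):
    """Diffuse (Lambertian) shading evaluated for all pixels at once.

    normals:   (H, W, 3) unit surface normals per pixel
    light_dir: (3,) unit vector pointing toward the light
    albedo:    (H, W, 3) surface color per pixel
    """
    # Per-pixel dot product N . L, clamped to zero for back-facing surfaces.
    ndotl = np.clip(normals @ light_dir, 0.0, None)   # shape (H, W)
    return albedo * ndotl[..., None]                  # shape (H, W, 3)

# A flat 640x480 surface facing the camera, lit head-on. Every pixel is
# independent, which is what makes this trivially parallel on a GPU.
H, W = 480, 640
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
albedo = np.ones((H, W, 3)) * np.array([0.8, 0.3, 0.3])
image = lambert_shade(normals, np.array([0.0, 0.0, 1.0]), albedo)
```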

Biography: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics systems, and visualization. Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985. As a founding employee at Pixar Animation Studios in the 1980s, Hanrahan led the design of the RenderMan Interface Specification and the RenderMan Shading Language. In 1989, he joined the faculty of Princeton University. In 1995, he moved to Stanford University. More recently, Hanrahan served as a co-founder and CTO of Tableau Software. He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A. M. Turing Award.


Prof. Tagliasacchi Co-Chairs 3DV 2024: Call for Papers Open

May 15, 2023

We are thrilled to share that Prof. Andrea Tagliasacchi will serve as co-chair for the International Conference on 3D Vision (3DV) 2024, alongside Prof. Siyu Tang from ETH and Federico Tombari from Google. Smaller conferences like 3DV play a vital role in fostering lasting networks within the research community, and 3DV 2024 promises to be an exciting opportunity for this.

Furthermore, the call for papers for 3DV 2024 is now officially open. The conference will take place in Davos, Switzerland, where the World Economic Forum is held annually. Researchers are encouraged to submit their papers by July 31st. For additional details, please visit the conference website at https://3dvconf.github.io/2024/call-for-papers/. Don't miss this incredible chance to share your research with the international community at 3DV 2024!


Talk by Karsten Kreis from NVIDIA

April 18, 2023

The recording is available here.

Title: Diffusion Models: From Foundations to Image, Video and 3D Content Creation

Abstract: Denoising diffusion-based generative models have led to multiple breakthroughs in deep generative learning. In this talk, I will provide an overview of recent works by the NVIDIA Toronto AI Lab on diffusion models and their applications to digital content creation. I will start with a short introduction to diffusion models and recapitulate their mathematical formulation. Then, I will briefly discuss our foundational works on diffusion models, which include advanced diffusion processes for faster and smoother diffusion and denoising, techniques for more efficient model sampling, as well as latent space diffusion models, a flexible diffusion model framework that has been widely used in the literature. Moreover, I will discuss works that use diffusion models for image, video and 3D content creation. This includes large text-to-image models as well as recent work on high-resolution video synthesis with latent diffusion models. I will also summarize some of our efforts on 3D generative modeling. This includes object-centric 3D synthesis by training diffusion models on geometric shape datasets or leveraging large-scale text-to-image diffusion models as priors for shape distillation, as well as full scene-level generation with hierarchical latent diffusion models.
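
For reference (an editorial addition, not part of the abstract), the mathematical formulation the talk recapitulates can be summarized in its standard DDPM form:

```latex
% Forward (noising) process with variance schedule beta_t:
q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big)

% Closed form after t steps, with \bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s):
q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t) I\big)

% The generative model learns the reverse (denoising) transitions:
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big)
```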

Bio: Karsten Kreis is a senior research scientist at NVIDIA’s Toronto AI Lab. Prior to joining NVIDIA, he worked on deep generative modeling at D-Wave Systems and co-founded Variational AI, a startup utilizing generative models for drug discovery. Before switching to deep learning, Karsten did his M.Sc. in quantum information theory at the Max Planck Institute for the Science of Light and his Ph.D. in computational and statistical physics at the Max Planck Institute for Polymer Research. Currently, Karsten’s research focuses on developing novel generative learning methods and applying deep generative models to problems in areas such as computer vision, graphics and digital artistry, as well as the natural sciences.


Visual and Interactive Computing Institute (VINCI) is Founded

April 1, 2023

We are delighted to announce the establishment of the Visual and Interactive Computing Institute (VINCI), co-directed by our esteemed Prof. Yasutaka Furukawa and Prof. Parmit Chilana. VINCI has been brought to life by 44 dedicated faculty members from 14 different departments and 7 distinct faculties within SFU. The primary objective of the institute is to bolster interdisciplinary research collaborations.


Prof. Furukawa Appointed as Program Chair for ICCV 2023

March 8, 2023

We are thrilled to announce that Professor Yasutaka Furukawa, one of our esteemed professors, has been appointed as the Program Chair for the International Conference on Computer Vision (ICCV) 2023. The ICCV, a top-tier event in the field of computer vision, provides an excellent platform to exchange novel ideas and discuss the latest advancements. We invite you to learn more about the ICCV 2023 and Prof. Furukawa's role by visiting the official conference website at https://iccv2023.thecvf.com/.


Gruviers have 10 Accepted Papers at CVPR 2023

February 27, 2023

Congratulations to all Gruviers who are publishing their work at CVPR 2023. Among the accepted papers, MobileNeRF is one of the 12 finalists for the best paper award! Congrats to Zhiqin and Andrea for their excellent work!

CVPR is the premier conference on computer vision and will be held in Vancouver this year. To learn more about the work that GrUVi will be presenting, check out our publication page.


... see all News