Tifanie Bouchara

Assistant Professor

CNAM - CEDRIC, research group: Interactivité pour Lire et Jouer. tifanie.bouchara(at)cnam.fr

I have been an assistant professor at CNAM (Conservatoire National des Arts et Métiers) since Sept. 2014.
 
My research takes place in the ILJ team (Interactivity to Play and Read) of the Computer Science laboratory CEDRIC. I am interested in sonic interaction design, especially the use of spatial audio in immersive applications. My work focuses on audio in video games, either to make games more immersive or to make them accessible to blind and visually impaired players. I also employ the auditory modality to develop new sensory rehabilitation approaches and therapeutic games (autism, unilateral spatial neglect).
 
As a teacher, I am responsible for the national M.Eng in Interactive Digital Media by apprenticeship (Angoulême) and teach Computer Science (Digital Media, HCI, Audio-visual Perception and Signal Processing) at ENJMIN (National School of Video Games and Interactive Media).
 
Before joining the CNAM, I spent one year as a postdoctoral researcher at LIPADE (Computer Science Laboratory of Paris Descartes). I obtained my Ph.D. from Paris Sud XI University, working at LIMSI-CNRS. During my Ph.D., I spent 6 months as a doctoral fellow in the Multimodal Interaction Laboratory (McGill University / CIRMMT) in Montreal, QC, Canada. After a Master of Science in audiovisual techniques for music and cinema production at ISB (Image & Sound of Brest, University of Bretagne Occidentale) in 2007, I graduated with honors from the master program in Acoustics, Digital Signal Processing and Computer Science applied to Music (ATIAM) at IRCAM in September 2008.
 

Research keywords: Sonic Interaction Design / 3D Audio / Sonification / Virtual and Augmented Reality / Human-Computer Interaction / Auditory and Multisensory Perception / Video Games / Cognitive Psychology / Sensory Disorders / Inclusion and Accessibility

Ongoing research projects


eXtended Reality and 3D audio for sensory rehabilitation of autism

DIM RFSI AudioXR4TSA

This project aims to study and develop multisensory, immersive and playful Augmented Reality apps to facilitate the sensory rehabilitation of children across the autism spectrum so that, ultimately, they can better perceive and process social stimuli, especially auditory ones. The project is carried out in collaboration with autism practitioners from the André Boulloche daycare hospital. It is related to the Ph.D. project of Valentin Bauer (Ph.D. funding from ED STIC Paris Saclay), whom I co-supervise with Patrick Bourdot.

This project also gives us the opportunity to investigate the impact of augmented reality on musical composition processes. An article on this topic was published at the IEEE VR SIVE workshop 2021.


Virtual Reality and 3D audio for assessment and sensory/cognitive rehabilitation of unilateral spatial neglect

DIM RFSI AudioRV-NSU

Unilateral spatial neglect (USN) is a syndrome that can appear after a stroke, in which patients are unable to perceive or process stimuli coming from one side of space. This project has a twofold goal: 1) to investigate how audio localization training can support the sensory spatial rehabilitation of USN patients, and 2) to design and test both multisensory VR therapeutic games to rehabilitate the sensory abilities of USN patients and a VR battery to evaluate their perceptual abilities. The project also relies on the doctoral project of Tristan-Gaël Bara (Ph.D. funding from ED SMI HéSam, 2020-2023), whom I co-supervise with Pierre Cubaud and Alma Guilbert.

The first steps of this project involve optimizing 3D audio localization training. This led to two conference papers, at the EAA Spatial Audio conference and at AES Audio for Virtual and Augmented Reality 2020: [Bouchara2019, Bara2020].


Audio and User eXperience in remote and co-located multi-player VR games

FUI United-VR

The United-VR project aims at developing tools for the design and management of VR video game content, so that virtual reality arcades can offer games where the players are spread over several sites.
In this context, our goal is to investigate the role of audio in the overall player experience, especially the feelings of spatial presence, copresence and embodiment, and to find auditory processing and feedback that can improve that experience. This work is done in collaboration with Yujiro Okuya (post-doc), with contributions from the industrial partners Spirops and Persistant.

Publications: [Deliverables: state of the art on presence, immersion, copresence and embodiment in VR and games]


Understanding mental models in the sonic interaction design of sonic icons

Collaboration with IRCAM

In this research, we propose to analyse, through a participatory design approach, the mental models underlying the design of sonic icons with several categories of people: sound designers, UI designers and naive users. This research is part of a larger project to make mobile apps accessible to illiterate people and to people suffering from digital illiteracy.



Immersive Sonification of Protein Surfaces

VIDOCK - ERC Starting Grant of Matthieu Montès (Completed)

The main goal of this research is to find new representations of proteins through sound. Following an immersive sonification approach, which displays 3D objects through immersive and interactive audio renderings, an interactive Max/MSP app was developed by the intern Valère Raigneau.
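
To give a flavour of the approach, here is a minimal Python sketch (the actual app was built in Max/MSP; all mappings below are illustrative assumptions, not the published technique): each surface point's direction drives the spatial rendering, while its distance and height drive gain and pitch.

    import numpy as np

    def surface_to_audio_params(points, listener=np.zeros(3)):
        # Listener-centred coordinates of each surface point.
        rel = points - listener
        dist = np.linalg.norm(rel, axis=1)
        # Direction of each point drives the spatial (e.g. binaural) rendering.
        azimuth = np.arctan2(rel[:, 1], rel[:, 0])
        elevation = np.arcsin(rel[:, 2] / np.maximum(dist, 1e-9))
        # Illustrative mappings (assumptions): distance -> gain, height -> pitch.
        gain = 1.0 / (1.0 + dist)
        z = rel[:, 2]
        z_norm = (z - z.min()) / (z.ptp() or 1.0)
        pitch_hz = 220.0 * 2.0 ** (2.0 * z_norm)  # two octaves above 220 Hz
        return {"azimuth": azimuth, "elevation": elevation,
                "gain": gain, "pitch_hz": pitch_hz}

    # Example: 500 random points on a unit sphere standing in for a surface.
    pts = np.random.randn(500, 3)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    params = surface_to_audio_params(pts, listener=np.array([0.0, 0.0, -3.0]))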

Publications: [Bouchara2020, IEEE VR SIVE workshop 2020]


Audio feedback design for FPS games accessible to blind players

DIM RFSI Virtual Guide Dog - Chien Guide Virtuel

The main goal of this project is to develop first-person shooter games playable by blind gamers. We propose to combine the development of a virtual dog agent with auditory displays for both visual-free navigation in 3D virtual environments and target acquisition/shooting. This project involves researchers from the CNAM CEDRIC and U. Paris Sorbonne EXPERICE labs.
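
As a purely illustrative sketch of how such auditory target-acquisition cues might work (a hypothetical mapping, not the project's actual design), the target's bearing relative to the player can be rendered as stereo panning while the aiming error modulates a beep rate:

    import math

    def target_cue(player_pos, player_yaw, target_pos):
        # Signed horizontal angle from the player's facing direction to the target.
        dx = target_pos[0] - player_pos[0]
        dz = target_pos[1] - player_pos[1]
        bearing = math.atan2(dx, dz) - player_yaw
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        # Stereo pan: -1 = hard left, +1 = hard right.
        pan = max(-1.0, min(1.0, bearing / (math.pi / 2)))
        # Beeps speed up as the aim error shrinks (a common audio-game convention).
        error = abs(bearing) / math.pi          # 0 = on target, 1 = directly behind
        beep_interval_s = 0.08 + 0.6 * error
        return pan, beep_interval_s

    # Player at the origin facing +z; target ahead and slightly to the right.
    print(target_cue((0.0, 0.0), 0.0, (2.0, 5.0)))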


Past research projects

Audiovisual Zoomable Interfaces (PhD work)

My thesis focused on human factors (auditory perception, multisensory integration, auditory attention) in the development of audiovisual zoomable interfaces.
PDF of the manuscript.

Audio-visual Magnifying Glass

Transposing methods from visualization to the auditory domain, we proposed an Audiovisual Magnifying Glass to zoom into a collection of videos.
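
To illustrate the principle with a hypothetical weighting (not the published design), an audio counterpart of a magnifying lens can boost the soundtracks of items near the lens focus and attenuate the rest, as in this Python sketch:

    def lens_gain(item_pos, focus_pos, radius=1.0, floor=0.05):
        # Distance of the item from the lens focus.
        d = ((item_pos[0] - focus_pos[0]) ** 2 +
             (item_pos[1] - focus_pos[1]) ** 2) ** 0.5
        if d >= radius:
            return floor                       # outside the lens: background level
        # Smooth fall-off from full gain at the focus to the floor at the rim.
        t = d / radius
        return floor + (1.0 - floor) * (1.0 - t) ** 2

    # Mixing gains for three video soundtracks around a focus at (0, 0).
    for pos in [(0.1, 0.0), (0.6, 0.3), (2.0, 1.0)]:
        print(pos, round(lens_gain(pos, (0.0, 0.0)), 3))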

An article on this topic was published at ICAD 2010: [Bouchara2010a]



Audio-visual Pop-out

In vision, some preattentive features (color, sharpness, shape) can involuntarily attract attention to a distinct object. Investigating auditory preattention, we proposed an auditory analogue of visual sharpness.
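
A standard proxy for this kind of auditory "brightness" is the spectral centroid; the Python sketch below illustrates the idea, though it is not necessarily the descriptor used in the published study:

    import numpy as np

    def spectral_centroid(signal, sr):
        # Magnitude spectrum and its frequency axis.
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        # Centre of mass of the spectrum, in Hz: higher = "sharper" sound.
        return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

    sr = 44100
    t = np.arange(sr) / sr
    dull = np.sin(2 * np.pi * 220 * t)                 # pure low tone
    sharp = dull + 0.5 * np.sin(2 * np.pi * 3520 * t)  # same tone plus a high partial
    print(spectral_centroid(dull, sr), spectral_centroid(sharp, sr))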

This study was published in the journal ACM Transactions on Applied Perception and also received a best paper award at the national conference IHM 2012.

Multisensory Perception of Simultaneous Environmental Sounds

In this study, we investigated how the visual context influences the identification of environmental sounds presented in noise. This work was done in collaboration with Catherine Guastavino, Ilja Frissen and Bruno L. Giordano from the CIRMMT.

This study was published at the Audio Engineering Society Convention.

Objective Intelligibility and Saliency Measures (Post-Doc)

My post-doc took place in a European project named I'City For All (www.icityforall.eu). The aim of this project was to facilitate the mobility of elderly people suffering from age-related hearing loss (presbycusis). Two approaches were investigated:
    a) the development of intelligent loudspeakers for better intelligibility of vocal messages (airports, train stations, supermarkets, ...);
    b) the development of a mobile system embedded in cars to aid in the localization of sound alarms (ambulances, police cars, fire trucks, ...).
My job in this project was to design mathematical criteria measuring the intelligibility and the saliency of vocal announcements in confined spaces (e.g. airports, rail stations, trains, supermarkets...).
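
As a purely illustrative sketch of such a criterion (the bands and weights below are assumptions, not the measures developed in the project), one can compute a band-weighted signal-to-noise ratio in the spirit of the Speech Intelligibility Index:

    import numpy as np

    # Octave-ish bands and weights loosely inspired by band-importance functions;
    # the values are assumptions for illustration only.
    BANDS_HZ = [(250, 500), (500, 1000), (1000, 2000), (2000, 4000)]
    WEIGHTS = [0.15, 0.25, 0.35, 0.25]

    def band_snr_score(speech, noise, sr):
        # Weighted per-band SNR, clipped to [0, 30] dB and scaled to [0, 1].
        freqs = np.fft.rfftfreq(len(speech), 1.0 / sr)
        spec_s = np.abs(np.fft.rfft(speech)) ** 2
        spec_n = np.abs(np.fft.rfft(noise)) ** 2
        score = 0.0
        for (lo, hi), w in zip(BANDS_HZ, WEIGHTS):
            band = (freqs >= lo) & (freqs < hi)
            snr_db = 10 * np.log10(spec_s[band].sum() / (spec_n[band].sum() + 1e-12) + 1e-12)
            score += w * np.clip(snr_db, 0.0, 30.0) / 30.0
        return score  # 0 = masked everywhere, 1 = clearly audible in every band

    sr = 16000
    t = np.arange(sr) / sr
    speech = np.sin(2 * np.pi * 1000 * t)   # stand-in for a voice component
    noise = 0.3 * np.random.randn(len(t))
    print(band_snr_score(speech, noise, sr))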

Art-Science projects

From 2007 to 2014, I collaborated with artists, mainly composers but also dancers. In these projects, I was in charge of the audio or graphical renderings, the audio-visual communication and the audio-to-visual mapping. The projects were developed to be presented to the public, through installations or during concert performances.
Monthey04 & Tonnetz09 projects
Orgue et Réalité Augmentée
GAVIP project
I collaborated with the composer Antonio de Sousa Dias on two different projects. In the Monthey04 project, we transformed a radiophonic program into an interactive 3D audiographical environment. [pdf] The GAVIP project consisted in the conception and exploitation of an immersive, interactive and multimodal platform: different scenarios were developed in which 3D audio-visual renderings were controlled by gestures. [pdf]


Publications

Retrieved from HAL


Supervision

Ongoing

Yujiro Okuya, post-doc United-VR, CNAM-CEDRIC, Sept. 2020 – March 2022:
Auditory and multisensory perception for user experience in shared virtual environments

Tristan-Gaël Bara, Ph.D. thesis, ED SMI, HéSaM Université, CNAM-CEDRIC, Sept. 2020 – 2023:
Multisensory training for sound localization in virtual reality: application to the development of therapeutic serious games;
co-supervised with Alma Guilbert (Associate Professor in Neuropsychology, U. de Paris-VAC) and Pierre-Henri Cubaud (Professor, CNAM-CEDRIC).

Valentin Bauer, Ph.D. thesis, ED STIC, Université Paris Saclay, LISN, Sept. 2019 – 2022:
3D audio for the psychosocial rehabilitation of ASD;
co-supervised with Patrick Bourdot (CNRS Research Director, LISN).

Lucas Artis, master thesis, M2, ENS Louis Lumière:
Evaluation of sonification methods for inclusive FPS games accessible to visually impaired players.


2020

Tristan-Gaël Bara (M1 in Cognitive Psychology, Université de Paris, DIM RFSI AudioRV-NSU project):
co-supervised with Alma Guilbert (Associate Professor in Neuropsychology, U. de Paris-VAC).

Michel Qu (XXX, DIM RFSI AudioRV-NSU project):


2019

Mathieu Bouchet, M2 Informatique Jeux et Média Numériques, UX/UI specialization, CNAM-ENJMIN:
Evaluation of auditory icons for representing the functionalities of mobile user interfaces.

Tristan-Gaël Bara, M1 in Cognitive Psychology, Université Paris-Descartes:
(TER internship) Evaluation of an immersive sonification technique for 3D objects and proteins.
(master thesis) Impact of vision on the learning of non-individualized HRTFs; co-supervised with Alma Guilbert (Associate Professor in Neuropsychology, U. de Paris-VAC).

Pierre-Louis Weiss, TER internship / master thesis, M1 in Cognitive Psychology, Université Paris-Descartes:
Impact of musical practice on sound localization abilities and on the learning of non-individualized HRTFs.

Martin Peignier, master thesis, M2 Son, ENS Louis Lumière:
Sound-augmented reading: an immersive and interactive sound adaptation of a short story.

Antoine Boulinguez, master thesis, Mastère Interactive Digital Experience, CNAM & Gobelins:
User experience design for heterogeneous mixed reality environments.


2018

Valère Raigneau, M2 Son, ENS Louis Lumière:
Protein representation through the sonification of 3D objects: implementation of an immersive sonification technique in Max/MSP.


2017

Milan Courcoux, M1 in Acoustics, Université Pierre et Marie Curie:
Interactive sonification for protein representation.


2013

Maxime Letellier, first-year engineering student, ENSEA:
Evaluation of the auditory salience of musical jingles.


Teaching


Current


Head of the M.Eng in Interactive Media (Ingénieurs Informatique et Multimédia), ENJMIN, Angoulême, since 2018

@ Ingénieurs Informatique et Multimédia, ENJMIN, Angoulême, since 2014
@ M.Sc. Jeux et Média Interactifs Numériques, ENJMIN, Angoulême, since 2014
@ DUT Informatique, CNAM Paris, since 2016

Past

@ ENJMIN, Angoulême, 2012-2017
@ CNAM, Paris
@ Polytech Paris Sud and U. Paris Sud, Orsay, 2011-2012
@ IUT d'Orsay, Orsay, 2008-2012