Gene editing


Conventional genetic modification methods require substantial time, space, and effort to modify even a single gene. Offspring must be crossed over multiple generations until the mutation is completely homozygous on an inbred genomic background (a prerequisite for reliable analysis of the phenotypes of interest), and each generation takes several months. CRISPR-based gene editing made such changes easier and more efficient, thereby accelerating biological research. However, problems remained: (i) first-generation mice often contained a mosaic of wild-type and knockout (KO) cells, and (ii) the rate of complete biallelic mutant mice was relatively low (~60-80% at best). We previously showed that combining triple-targeting (i.e. three separate targets within the same gene) with an algorithmic approach to target design could deliver 95-100% complete biallelic KO mice in the first generation.
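
To make the triple-targeting idea concrete, here is a minimal sketch in Python of picking three well-spaced, high-scoring targets within one gene. It assumes a precomputed per-candidate efficacy score; the Candidate structure, the greedy selection, and the spacing threshold are illustrative assumptions, not our published design algorithm.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        start: int    # protospacer position within the gene (bp)
        score: float  # assumed precomputed on-target efficacy score

    def select_triple_targets(candidates, min_spacing=50):
        """Greedily pick three high-scoring, well-separated targets.

        min_spacing (bp) keeps the three cut sites apart; the value is
        illustrative, not a validated design rule.
        """
        chosen = []
        for cand in sorted(candidates, key=lambda c: c.score, reverse=True):
            if all(abs(cand.start - c.start) >= min_spacing for c in chosen):
                chosen.append(cand)
            if len(chosen) == 3:
                return chosen
        raise ValueError("fewer than three well-spaced candidates")

    # Toy pool of candidate sites within one gene
    pool = [Candidate(120, 0.91), Candidate(130, 0.88), Candidate(300, 0.85),
            Candidate(310, 0.70), Candidate(520, 0.66), Candidate(940, 0.41)]
    print(select_triple_targets(pool))  # -> sites at 120, 300, 520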

However, several hurdles remain. First, detecting all suitable targets across an entire mouse genome takes days. Detection runs only once per genome, so this was acceptable for a proof of concept on a single genome; real-world applications, however, involve hundreds of genomes, and at that scale the computational cost becomes prohibitive. Whole-genome detection needs to be far faster to be practical. Second, gene editing remains relatively inefficient in other organisms, especially plants. Third, the risk of off-target modifications remains a safety concern, especially for applications in human health or future crops. All of these hurdles must be overcome for gene editing to achieve its full potential, and we are developing new algorithms to address them.
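
As a rough baseline for the computational cost, candidate SpCas9 targets (a 20-nt protospacer followed by an NGG PAM) can be enumerated on both strands in a single linear pass. The sketch below is illustrative code for this baseline, not our actual detection pipeline; it is exactly this kind of scan that must be made dramatically faster.

    import re

    # SpCas9 target: 20-nt protospacer followed by an NGG PAM.
    # Lookahead patterns yield overlapping matches; assumes an
    # uppercase sequence without ambiguity codes.
    PAM_FWD = re.compile(r"(?=([ACGT]{20}[ACGT]GG))")
    PAM_REV = re.compile(r"(?=(CC[ACGT][ACGT]{20}))")  # NGG on the reverse strand

    RC = str.maketrans("ACGT", "TGCA")

    def find_candidates(seq):
        """Yield (position, strand, protospacer) for every site in seq."""
        for m in PAM_FWD.finditer(seq):
            yield m.start(), "+", m.group(1)[:20]
        for m in PAM_REV.finditer(seq):
            # Reverse-complement the 20 nt downstream of the CCN
            yield m.start(), "-", m.group(1)[3:].translate(RC)[::-1]

    for hit in find_candidates("AAGG" + "ACGT" * 10 + "CCAA" + "TGCA" * 10):
        print(hit)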

Key publications:

Tissue clearing


Slicing a brain damages it, and computationally reconstructing the slices is time-consuming and error-prone. Yet slicing was long the only way to investigate complex brain functions at the level of individual cells: lipids make the brain opaque, and tools such as EEG and fMRI only measure groups of cells or indirectly observe overall activity patterns. Tissue clearing, by contrast, removes these lipids and allows high-resolution whole-brain imaging while preserving important structures. Earlier methods suffered from incomplete clearing or rapid quenching of the fluorescent proteins used to monitor cell activity. Since 2013, several methods (CUBIC, CLARITY, etc.) have solved these problems. They work well on the brain and on most other organs, but clearing is only the first step: the value lies in the quantitative analysis of the samples.

We are developing new algorithms for the registration of high-resolution whole-brain images, and for the analysis of cell-level activity data.
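
As a minimal illustration of the registration problem, the sketch below recovers a pure translation between two image volumes using phase cross-correlation from scikit-image. Real whole-brain registration also involves rotation and nonlinear deformation, so this shows only the simplest first step, not our method.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def register_translation_3d(fixed, moving, upsample=10):
        """Align moving to fixed by a translation estimated in Fourier space."""
        offset, _, _ = phase_cross_correlation(fixed, moving,
                                               upsample_factor=upsample)
        return nd_shift(moving, offset, order=1)  # linear interpolation

    # Toy example: a synthetic volume displaced by a known offset
    rng = np.random.default_rng(0)
    fixed = rng.random((64, 64, 64))
    moving = np.roll(fixed, (3, -2, 5), axis=(0, 1, 2))
    aligned = register_translation_3d(fixed, moving)
    # `aligned` now matches `fixed` except near the rolled borders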

Key publications:

Digital health


We apply machine learning, mobile apps, wearable devices, and related technologies to a range of health projects. Across these projects, we address aspects such as data access, data quality, data analysis, and user engagement.

Our two current areas of focus are sleep and hearing, but we are always open to new collaborations. Feel free to reach out if we can be of any assistance.

Key publications: