Taha Bouhsine
AI Research Scientist & Engineer · Founder
I work on interpretability, kernel methods, and white-box neural architectures — the structural reasons some deep learning components are legible and others aren't.
Currently
Co-founder and research scientist at Azetta.AI, building linear-time attention and large-scale TPU training infrastructure (JAX/Flax). Previously founded MLNomads, where I introduced the Yat kernel and Neural-Matter Networks.
Google Developer Expert in AI/ML since 2024. Research collaborations with Google DeepMind, Columbia University, and the U.S. FAA William J. Hughes Technical Center.
Selected papers
Full list on Google Scholar.
- A Universal Reproducing Kernel Hilbert Space from Polynomial Alignment and IMQ Distance
The Yat kernel — a universal, characteristic RKHS that yields a finite learned-center kernel expansion with a closed-form norm. Foundation for white-box MLPs.
- In Defense of Cosine Similarity: Normalization Eliminates the Gauge Freedom
Why normalization is not just a heuristic — it removes a real gauge symmetry that confounds interpretability.
- SLAY: Geometry-Aware Spherical Linearized Attention with Yat-Kernel
Linear-time attention that matches softmax performance with O(L) scaling, via a Mercer-kernel reformulation on the sphere.
- DenoMAE 2.0: Improving Denoising Masked Autoencoders by Classifying Local Patches for Automatic Modulation Classification
Multimodal denoising MAE for RF modulation classification. Up to 16.55% accuracy gains on RadioML over the prior state of the art.
- DenoMAE: A Multimodal Autoencoder for Denoising Modulation Signals
The original multimodal denoising masked autoencoder for automatic modulation classification.
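The O(L) scaling claimed for SLAY comes from a standard property of kernelized attention: once attention weights are expressed through a feature map, the key/value summary can be aggregated once and reused for every query. The sketch below shows that generic mechanism only — the `phi` here is a placeholder positive feature map (ELU+1), not the Yat kernel or the spherical construction from the paper.

```python
import numpy as np

def phi(x):
    # Illustrative positive feature map (ELU + 1); a stand-in,
    # NOT the Yat kernel used by SLAY.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    # O(L) in sequence length: summarize keys/values once,
    # then each query reads from the fixed-size summary.
    Qf, Kf = phi(Q), phi(K)              # (L, d) feature-mapped queries/keys
    KV = Kf.T @ V                        # (d, d_v) summary, size independent of L
    Z = Kf.sum(axis=0)                   # (d,) normalizer accumulator
    return (Qf @ KV) / (Qf @ Z + eps)[:, None]
```

Because `Qf @ KV` equals `(Qf @ Kf.T) @ V` by associativity, this reproduces the O(L^2) kernel-attention result while never materializing the L-by-L weight matrix.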
Elsewhere
- Email tahabhs14@gmail.com
- GitHub github.com/mlnomadpy
- Google Scholar scholar.google.com/citations?user=IsBjb3EAAAAJ
- LinkedIn www.linkedin.com/in/tahabsn/
- GDE profile g.dev/tahabsn
- ML Talks www.youtube.com/playlist?list=PLQKoJ5C0cEMhm-h2fPsIePpTWz5OpBSkS