DL 2.0: A Topological Tale of Everything Wrong with Deep Learning

Not the Metrics We Want, But the Metrics We Need

Authors: Taha Bouhsine

GitHub Repo · Blog Post · Sponsor Project · Kaggle Notebook · arXiv Paper (DL 2.0)

Acknowledgments

The Google Developer Expert program and the Google AI/ML Developer Programs team supported this work by providing Google Cloud credits. I want to extend my gratitude to the staff at High Grounds Coffee Roasters for their excellent coffee and peaceful atmosphere. I would also like to thank Dr. Andrew Ng for creating the Deep Learning course that introduced me to this field; without his efforts to democratize access to knowledge, this work would not have been possible. Additionally, I want to express my appreciation to all the communities I have been part of, especially the MLNomads, Google Developers, and MLCollective communities.

Cite This Work

If you use this work in your research, please cite:

BibTeX

@article{bouhsine2024deep,
  title={Deep Learning 2.0: Artificial Neurons That Matter - Reject Correlation, Embrace Orthogonality},
  author={Bouhsine, Taha},
  journal={arXiv preprint arXiv:2411.08085},
  year={2024}
}

Plain Text Citation

Bouhsine, T. (2024). Deep Learning 2.0: Artificial Neurons That Matter - Reject Correlation, Embrace Orthogonality. arXiv preprint arXiv:2411.08085.

License

The source code, algorithms, and all contributions presented in this work are licensed under the GNU Affero General Public License (AGPL) v3.0. This license ensures that any use, modification, or distribution of the code and any adaptations or applications of the underlying models and methods must be made publicly available under the same license. This applies whether the work is used for personal, academic, or commercial purposes, including services provided over a network.