Deep learning (DL) models have attained state-of-the-art performance in numerous fields. Nevertheless, in certain real-world applications, existing models face diverse challenges, ranging from poor generalization to new data to issues of scalability and overfitting. In this context, integrating information extracted from different modalities is a promising way to alleviate these challenges. This paper introduces SeeNN (https://github.com/skywolfmo/seeNN-paper), a multimodal deep-learning framework that fuses several modalities to estimate long-range atmospheric visibility: RGB imagery, edge maps, entropy maps, depth maps, and surface normal maps. Results show that, whereas a single-modality RGB model achieves only 87.92% accuracy, the multimodal models achieve over 96% accuracy. This significant improvement highlights the potential of multimodal approaches to enhance the accuracy and reliability of atmospheric visibility estimation, which is crucial for safety in applications such as aviation, maritime navigation, and autonomous vehicles. By addressing challenges such as data variability, environmental factors, and the inherent complexity of atmospheric conditions, SeeNN contributes to more reliable and robust visibility estimation systems, thereby enhancing safety and operational efficiency in critical environments.
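
To make the fusion idea concrete, the following is a minimal sketch of one possible late-fusion design in PyTorch: each of the five listed modalities is passed through its own small CNN encoder, the resulting feature vectors are concatenated, and a classifier head predicts a visibility class. The encoder architecture, fusion strategy, feature dimension, and number of visibility classes are illustrative assumptions, not the actual SeeNN implementation.

# Illustrative late-fusion sketch; not the actual SeeNN code.
# Assumes PyTorch and a hypothetical number of visibility classes.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Small CNN that maps one input map to a fixed-length feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultimodalVisibilityNet(nn.Module):
    """Late fusion: encode each modality separately, concatenate, classify."""

    def __init__(self, num_classes: int = 5, feat_dim: int = 128):
        super().__init__()
        # Channel counts: RGB and surface normals have 3 channels;
        # edge, entropy, and depth maps are single-channel.
        self.encoders = nn.ModuleDict({
            "rgb": ModalityEncoder(3, feat_dim),
            "edge": ModalityEncoder(1, feat_dim),
            "entropy": ModalityEncoder(1, feat_dim),
            "depth": ModalityEncoder(1, feat_dim),
            "normals": ModalityEncoder(3, feat_dim),
        })
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * len(self.encoders), 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, inputs: dict) -> torch.Tensor:
        feats = [enc(inputs[name]) for name, enc in self.encoders.items()]
        return self.classifier(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = MultimodalVisibilityNet(num_classes=5)  # class count is hypothetical
    batch = {
        "rgb": torch.randn(2, 3, 224, 224),
        "edge": torch.randn(2, 1, 224, 224),
        "entropy": torch.randn(2, 1, 224, 224),
        "depth": torch.randn(2, 1, 224, 224),
        "normals": torch.randn(2, 3, 224, 224),
    }
    print(model(batch).shape)  # torch.Size([2, 5])

A late-fusion layout like this keeps each modality's encoder independent, which is one simple way to combine heterogeneous inputs such as RGB, edge, entropy, depth, and surface normal maps; other fusion points (early or intermediate) are equally possible and may be what the paper actually uses.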