Mohammad
Abd-Elmoniem

Contact Information


Professional Summary

As a 16-year-old Computer Engineering and Business Management undergraduate junior, I have honed my ability to bridge complex technical theory and practical application. My experience spans software engineering, system integration, and data analysis, ranging from leading team-based projects with companies such as General Dynamics to independent and freelance ventures. Driven by a deep interest in cross-platform development and emerging computing systems, I am committed to exploring engineering research and bringing a unique perspective to the dynamic field of technology.


Education

University of Maryland - College Park

Prince George’s Community College


Skills

Programming Languages:
Proficient in Dart, Python, Java, HTML, CSS, JavaScript, SQL, PHP, and C. Comprehensive experience in software engineering principles and practices across various programming paradigms.
Data Science & Visualization:
Advanced proficiency in Matplotlib, Seaborn, and Pandas for data visualization and analysis. Skilled in statistical modeling, predictive analytics, and utilizing data science methodologies to derive insights and inform decision-making processes.
Machine Learning & Artificial Intelligence:
Expert in TensorFlow and proficient in PyTorch for developing machine learning models. Extensive experience constructing and optimizing Convolutional Neural Networks (CNNs) and U-Nets, and implementing Natural Language Processing (NLP) and Natural Language Understanding (NLU) for deep learning applications. Demonstrated ability in image classification, computer vision, and algorithm development for AI-driven solutions.
Mobile & Web Development:
Experienced in cross-platform mobile application development using Flutter, including Android and iOS platforms. Proficient in web development technologies and frameworks, ensuring responsive and efficient user experiences.
Technical Software & Tools:
Skilled in using integrated development environments (IDEs) and technical software such as Visual Studio Code (VS Code), Figma, Android Studio, Jupyter Notebook, MATLAB, Verilog, CircuitLab, and WaveForms. Familiar with version control systems such as Git for collaborative coding and code management.
UI/UX Design:
Strong foundation in UI/UX design principles, adept at creating user-centered interfaces that enhance user experience. Proficient in wireframing, prototyping, and user testing to ensure engaging and accessible digital products.
Cloud Computing:
Experienced with cloud services and infrastructure, including Google Cloud Platform (GCP), for application hosting, deployment, and scalable computing solutions. Knowledgeable in cloud architecture and security best practices.
Database Management:
Proficient in database design and management using Firebase for real-time applications and MongoDB for NoSQL database architectures. Skilled in SQL for relational database management and data manipulation.
Embedded Systems & Hardware:
Knowledgeable in embedded systems design and development, including microcontrollers and FPGAs programmed in Verilog. Understanding of digital and analog circuit design, PCB layout, and simulation.
Networking & Security:
Basic understanding of network protocols, cybersecurity fundamentals, and secure coding practices to protect data and maintain application security.

Job Experience

Evolv Technology

CATT Laboratory

App Dev Club x General Dynamics

Duke University - Pratt School of Engineering

Youth Crisis Line


Leadership & Involvement

Gaussian Club of Mathematics, Prince George's Community College


Projects

NEAR-MI Model GUI

The NEAR-MI GUI Viewer is a Python interface built with PyQt for working interactively with neural network models and image data. Users can load Keras model weights from .h5 files and register custom objects tailored to their specific architectures. The application accepts a range of input formats, including images, CSV, pickle, and text files, and can upload GIFs and split them into individual frames for detailed analysis. Colormaps can be applied or inverted on inputs to improve the visual representation and interpretability of the data. The viewer supports bulk image processing, handling many files at once to streamline the workflow, and offers flexible export options: results can be consolidated into a single composite image, saved as separate files, or assembled into a GIF displayed as a slideshow. The project brings together image processing, Keras model integration, and interactive GUI development in Python with PyQt.
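
A minimal sketch of the viewer's core model-loading and colormap steps, in Python with TensorFlow/Keras and Matplotlib; the file name, the custom_dice metric, and the colormap handling are illustrative assumptions rather than the actual NEAR-MI code:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from matplotlib import colormaps

def custom_dice(y_true, y_pred, smooth=1.0):
    # Example custom object a user might register alongside their .h5 weights.
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

# Load a Keras model from an .h5 file, passing user-defined custom objects.
model = keras.models.load_model(
    "example_model.h5", custom_objects={"custom_dice": custom_dice})

def apply_colormap(gray_image, name="viridis", invert=False):
    # Map a 2-D grayscale array to an RGB image, optionally inverting the colormap.
    normalized = (gray_image - gray_image.min()) / (np.ptp(gray_image) + 1e-8)
    cmap = colormaps[name + "_r" if invert else name]
    return (cmap(normalized)[..., :3] * 255).astype(np.uint8)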


PAA5100JE/PMW3901 Arduino Python Reader

In developing the PAA5100JE/PMW3901 Arduino Python Reader, I leveraged the PySerial library to establish serial communication between the Python environment and the Arduino microcontroller, reading data from the PAA5100JE and PMW3901 optical flow sensors. The integration process involved utilizing Bitcraze's Arduino driver, configuring the Arduino IDE for library inclusion, and scripting in the Arduino sketch to facilitate sensor data acquisition and transmission. On the Python side, the code employs serial port detection and data parsing techniques to interpret the X and Y movement counts transmitted by the Arduino, demonstrating a practical application of serial communication protocols and data handling in a cross-platform context. This project underscores skills in sensor integration, data stream processing, and cross-language communication, illustrating the technical nuances of interfacing hardware with high-level programming languages for real-time data analysis. GitHub Repository
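
A minimal sketch of the Python-side reading loop, assuming the Arduino sketch prints comma-separated X,Y motion counts one pair per line; the baud rate and port-detection logic are illustrative assumptions:

import serial
from serial.tools import list_ports

def find_arduino_port():
    # Pick the first serial port whose description mentions 'Arduino', if any.
    for port in list_ports.comports():
        if "Arduino" in (port.description or ""):
            return port.device
    raise RuntimeError("No Arduino serial port detected")

def read_motion(port=None, baud=115200):
    # Continuously parse the X/Y movement counts sent by the optical flow sensor.
    with serial.Serial(port or find_arduino_port(), baud, timeout=1) as conn:
        while True:
            line = conn.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                dx, dy = (int(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed or partial lines
            print(f"dx={dx}, dy={dy}")

if __name__ == "__main__":
    read_motion()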

Text To Speech & Soundboard

I used PyQt to create an interactive system for converting text input into real-time audio. I integrated the Azure TTS and gTTS services, leveraging their speech synthesis capabilities for natural-sounding output. To direct synthesized speech into the user's microphone input, I used the VB-CABLE Virtual Audio Device for seamless audio routing. Users can also store audio files in a customizable soundboard for easy playback through their microphone, and can supply a YouTube video link to play its audio the same way. This project showcased my proficiency in combining text-to-speech technologies, audio routing, and user interface design into a versatile, practical application.
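
A minimal sketch of the synthesis-and-routing step, using gTTS with pydub and sounddevice; the "CABLE Input" device name is the assumed label of the VB-CABLE virtual device, and the full application wraps this logic in a PyQt interface:

import io
import numpy as np
import sounddevice as sd
from gtts import gTTS
from pydub import AudioSegment

def speak_through_device(text, device_name="CABLE Input"):
    # Synthesize speech to an in-memory MP3 buffer.
    buffer = io.BytesIO()
    gTTS(text=text, lang="en").write_to_fp(buffer)
    buffer.seek(0)

    # Decode the MP3 (requires ffmpeg) and convert samples to float32 for playback.
    segment = AudioSegment.from_file(buffer, format="mp3")
    samples = np.array(segment.get_array_of_samples(), dtype=np.float32)
    samples /= float(1 << (8 * segment.sample_width - 1))
    if segment.channels > 1:
        samples = samples.reshape(-1, segment.channels)

    # Route playback to the virtual cable so it shows up as microphone input.
    sd.play(samples, samplerate=segment.frame_rate, device=device_name)
    sd.wait()

speak_through_device("Hello from the soundboard.")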


Pointly

I engineered a family-oriented task management app, deploying Flutter's UI toolkit for cross-platform frontend development. Utilizing Dart's asynchronous features, I integrated Firebase for seamless real-time database interactions and secure user authentication. I designed a modular architecture, implementing Model-View-Controller (MVC) patterns for efficient data handling and UI updates. The application featured complex user role management, leveraging Firebase's Cloud Firestore for dynamic data storage, enabling parents to assign tasks and track rewards. I deployed the app on Google Play using Gradle build automation and Fastlane for streamlined Continuous Integration and Delivery processes. This project showcased my ability to blend advanced frontend design with intricate backend systems, creating a holistic and responsive mobile application. Google Play Listing   YouTube Demo

Interactive Prototype in Figma

WasHungry

I utilized Flutter for cross-platform app development, focusing on intuitive UI/UX design for food donation management. The application integrated Google Cloud Platform's Firestore for scalable data storage, alongside Firebase Authentication for secure user identification. Geolocation features were implemented using Google Maps API, facilitating donor-recipient proximity matching. This project also incorporated RESTful API integration for efficient data handling and user interaction. The app's design and implementation underscored my technical proficiency in creating a socially impactful, feature-rich mobile application. View Prototype

Interactive Prototype in Figma

TensorFlow Lite ML Integration in Mobile Apps

In integrating TensorFlow Lite into mobile applications, I architected object recognition models for diverse uses: a fruit/vegetable identifier, a pet classifier, and a handwriting analysis tool. Leveraging TF Lite's quantization and model optimization, I streamlined CNN architectures for mobile efficiency. Key steps included dataset preprocessing with augmentation and feature scaling, improving the models' adaptability to varied inputs, and balancing inference speed against memory usage when deploying the optimized models. This initiative illustrated my command of TensorFlow Lite for real-world, mobile-centric machine learning challenges. Watch Demo

Demo on YouTube
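
A minimal sketch of the conversion-and-quantization step, assuming a trained Keras classifier; the fruit/vegetable, pet, and handwriting models themselves are not reproduced here:

import tensorflow as tf

def convert_to_tflite(keras_model, output_path="model.tflite"):
    # Convert a Keras CNN to a TensorFlow Lite model for on-device inference.
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    # Post-training dynamic-range quantization shrinks the model and speeds up inference.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path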

Extra Courses