A selection of projects driven by curiosity, innovation, and personal growth.
A custom AI-powered customer service chatbot designed to integrate seamlessly with e-commerce websites and provide 24/7 support for customer inquiries. The chatbot is built around a Retrieval-Augmented Generation (RAG) pipeline, combining document retrieval with a generative model to deliver relevant, context-aware answers.
Using ChromaDB and Hugging Face encoders, I built a fast document retrieval system so the chatbot can quickly surface the most relevant content for each customer query. With a single click, the system embeds both website content and external documents, helping keep responses accurate and up to date.
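The retrieval step can be sketched in miniature. In the toy example below, a bag-of-words embedding stands in for the Hugging Face encoder and a brute-force cosine-similarity search stands in for ChromaDB's indexed lookup; the documents and query are invented for illustration.

```python
# Toy sketch of RAG-style retrieval: embed documents, embed the query,
# and return the closest documents by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
    "Contact support via the chat widget on any page.",
]
top = retrieve("what is the return and refund policy", docs, k=1)
print(top[0])  # the return-policy document ranks first
```

In the real pipeline, the retrieved passages are then inserted into the generative model's prompt so the answer is grounded in the business's own content.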
This solution aims to empower businesses to manage higher volumes of inquiries while improving customer satisfaction and conversion rates through faster, more accurate, and consistent support.
To make AI more accessible for the Deaf and Hard of Hearing (DHH) community, I participated in the Google - American Sign Language (ASL) Fingerspelling Recognition Competition, which aimed to develop an AI system capable of detecting and translating ASL fingerspelling into text. The dataset provided by Google consisted of over three million fingerspelled characters recorded from 100+ Deaf signers under diverse lighting conditions, making it a challenging yet rewarding task.
I developed a Transformer-based sequence-to-sequence (Seq2Seq) model in PyTorch to recognize and transcribe fingerspelled sequences from video frames. The model used self-attention to track hand movements over time, which improved the robustness of its predictions. To optimize real-time performance, I quantized the model and converted it to TensorFlow Lite (TFLite), making it deployable on edge devices such as smartphones.
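A stripped-down version of such a model might look like the following. The feature dimension, vocabulary size, and layer counts are illustrative placeholders rather than the competition model's actual hyperparameters, and positional encoding on the input frames is omitted for brevity.

```python
# Minimal sketch of a Transformer seq2seq recognizer in PyTorch:
# per-frame features go into the encoder, and the decoder emits
# character tokens autoregressively under a causal mask.
import torch
import torch.nn as nn

class FingerspellingSeq2Seq(nn.Module):
    def __init__(self, feat_dim=128, vocab_size=60, d_model=256,
                 nhead=4, num_layers=2, max_len=64):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)      # frame features -> model dim
        self.tok_embed = nn.Embedding(vocab_size, d_model)  # output characters
        self.pos_embed = nn.Embedding(max_len, d_model)     # decoder positions
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, frames, tgt_tokens):
        # frames: (batch, T_src, feat_dim); tgt_tokens: (batch, T_tgt)
        src = self.input_proj(frames)
        pos = torch.arange(tgt_tokens.size(1), device=tgt_tokens.device)
        tgt = self.tok_embed(tgt_tokens) + self.pos_embed(pos)
        # causal mask so each output position only attends to earlier ones
        t = tgt_tokens.size(1)
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # (batch, T_tgt, vocab_size) logits

model = FingerspellingSeq2Seq()
logits = model(torch.randn(2, 30, 128), torch.randint(0, 60, (2, 10)))
print(logits.shape)
```

Training minimizes cross-entropy between these logits and the ground-truth character sequence; at inference, decoding proceeds token by token.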
Through extensive data augmentation and preprocessing techniques, I improved the model’s generalization, reducing classification errors. My solution achieved a high accuracy score, demonstrating AI's potential in bridging communication gaps for the DHH community.
In this project, I explored the fascinating domain of Generative Adversarial Networks (GANs) by designing a model capable of transforming nighttime images into daytime images and vice versa. This problem posed a unique challenge due to the complex lighting variations and dynamic shadows that occur during the transition between night and day.
To create a robust dataset, I scraped and curated a collection of timelapse videos showing urban landscapes transitioning from night to day. Using this dataset, I trained a CycleGAN model with two generators: one transforming night images into day, the other performing the inverse transformation. The generator architecture combined ResNet-style residual blocks with a U-Net structure, balancing high-quality texture preservation with spatial consistency. The discriminator was a CNN that refined the generated outputs by learning to distinguish real images from synthetically transformed ones.
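The cycle-consistency objective that ties the two generators together can be illustrated with a toy example. The scalar brightness-shift "generators" below are hypothetical stand-ins for the real ResNet/U-Net networks, but they capture the constraint: mapping night to day and back should reconstruct the original image.

```python
# Toy cycle-consistency check: G_night->day followed by G_day->night
# should return (approximately) the original pixel values.
def g_night_to_day(pixels, shift=0.4):
    # hypothetical generator: brighten each pixel, clamped to [0, 1]
    return [min(1.0, p + shift) for p in pixels]

def g_day_to_night(pixels, shift=0.4):
    # hypothetical inverse generator: darken each pixel, clamped to [0, 1]
    return [max(0.0, p - shift) for p in pixels]

def cycle_loss(pixels, forward, backward):
    """Mean absolute error between the original and its round-trip reconstruction."""
    recon = backward(forward(pixels))
    return sum(abs(a - b) for a, b in zip(pixels, recon)) / len(pixels)

night = [0.05, 0.1, 0.2, 0.3]
loss = cycle_loss(night, g_night_to_day, g_day_to_night)
print(round(loss, 3))  # near zero: the cycle reconstructs the input
```

In the actual CycleGAN training objective, this reconstruction penalty is computed in both directions and added to the adversarial losses from the two discriminators.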
The final model produced realistic image transformations with minimal artifacts, proving the power of unsupervised deep learning in image-to-image translation.