AI Engineer with hands-on experience in developing, optimizing, and deploying machine learning models in production. Proven expertise in architecting and scaling LLM-powered products and multimodal AI systems. Skilled in cloud platforms (AWS, Azure, GCP), MLOps, and the end-to-end AI product lifecycle – from prototyping to production-scale deployment.
- Generative AI with Large Language Models – DeepLearning.AI & AWS | Covered transformer architecture, scaling laws, PEFT (LoRA), and fine-tuning techniques.
- Building Generative AI Applications Using Amazon Bedrock – AWS Skill Builder | Hands-on development with managed foundation models.
- Machine Learning Engineering for Production (MLOps) – Coursera | Taught by Andrew Ng, focused on production-grade ML pipelines.
- Machine Learning Specialization – Stanford University | Core ML concepts including supervised learning and neural networks.
- LangChain for LLM Application Development – DeepLearning.AI | Developed RAG and LLM agent pipelines with LangChain.
- Building Systems with the ChatGPT API – DeepLearning.AI | Covered prompt design, chaining, and evaluation for GPT-based systems.
- Version Control with Git – LinkedIn Learning.
- Building and scaling LLM-driven multi-agent systems
- Developing speech-to-speech AI systems and voice AI pipelines
- Designing RAG-based enterprise solutions with LangChain/LangGraph
- LLM fine-tuning and alignment (SFT, LoRA, PEFT)
- Deploying AI systems on GCP, Azure, and AWS using CI/CD and container orchestration
