Sep 2025 • 6 min read

Hands-On with Generative AI

This guided sprint distills seven weeks of deep practice with TensorFlow, Transformers, and production-focused orchestration frameworks including LangChain and LangGraph. The result is a repeatable playbook for taking large language models from fundamentals to enterprise-ready deployments.

Below is the week-by-week breakdown I followed, highlighting the core theories, hands-on builds, and the key technologies that now shape my GenAI toolkit.

Curriculum Overview

The curriculum combines foundational refreshers with production-grade projects. Each week couples core model theory with implementation, graduating from autoencoders to agentic RAG systems. Frequent retros kept the focus on how every concept supports real-world delivery.

Live labs, peer code reviews, and toolchain walkthroughs cemented practices around observability, prompt engineering, and governance. The balance between research-grade understanding and shippable prototypes kept momentum high throughout the program.

Core deliverables included reproducible notebooks, LangChain pipelines, and Streamlit interfaces that could be demoed to stakeholders. By the end of the cohort, I owned an internal accelerator kit that covers data ingestion, fine-tuning, evaluation, and deployment for the most common GenAI initiatives.

The week-by-week recaps that follow showcase the projects that stretched my skills and the technologies I now reach for first when architecting intelligent applications.

Week 1 — Foundations of Generative AI

Set up the mathematical and neural network fundamentals that power today's generative systems.

Key Concepts

  • Refreshed core math for AI — probability, statistics, and linear algebra that underpin every model.
  • Revisited feedforward, RNN, and CNN architectures with a deep dive into gradient descent optimizers.
  • Built intuition around how representation learning fuels downstream creativity.
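The optimizer deep dive above boils down to a few lines of update rules. Here is a minimal pure-Python sketch comparing plain gradient descent with momentum on a toy quadratic; the learning rate, momentum coefficient, and objective are illustrative, not taken from the labs:

```python
# Minimal sketch: plain gradient descent vs. momentum on f(w) = (w - 3)^2.

def grad(w):
    # Gradient of f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def gradient_descent(w, lr=0.1, steps=50):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def momentum_descent(w, lr=0.1, beta=0.9, steps=50):
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)  # velocity accumulates past gradients
        w -= lr * v
    return w

print(gradient_descent(0.0))   # approaches the minimum at w = 3
print(momentum_descent(0.0))   # oscillates toward the same minimum
```

The same loop structure carries over directly to the Keras optimizers used in the labs; momentum simply folds a decaying sum of past gradients into each step.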

Hands-On Projects

  • Created a TensorFlow classifier from scratch to practice network wiring and debugging.
  • Trained an Autoencoder on MNIST to experience reconstruction loss and latent-space exploration.
TensorFlow · Keras

Week 2 — Deep Generative Models

Moved from discriminative modeling to generative sampling with GAN and VAE pipelines.

Key Concepts

  • Contrasted discriminative vs. generative modeling approaches and evaluation tactics.
  • Implemented GAN training loops, stabilizing adversarial objectives with metrics visualized in TensorBoard.
  • Explored probabilistic latent modeling and reparameterization tricks inside VAEs.
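The reparameterization trick mentioned above is small enough to show directly. A framework-free sketch, with a 10,000-sample check that only exists to verify the statistics:

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Writing the sample this way keeps mu and log_var inside a
    deterministic expression, so gradients can flow through them.
    """
    sigma = math.exp(0.5 * log_var)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

rng = random.Random(0)
samples = [reparameterize(2.0, math.log(0.25), rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should land near mu = 2.0 and var near sigma^2 = 0.25
```

Inside a real VAE the encoder predicts `mu` and `log_var` per input, and the same substitution makes the sampling step differentiable.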

Hands-On Projects

  • Generated handwritten digits using a custom GAN and tuned loss balance to reduce mode collapse.
  • Built a CelebA face generator with a VAE, experimenting with latent traversals for controlled synthesis.
TensorFlow · TensorBoard · Generative Adversarial Networks · Variational Autoencoders

Week 3 — Transformers and Large Language Models

Shifted to sequence modeling with attention, decoding the architectures behind GPT and BERT.

Key Concepts

  • Reviewed RNN and LSTM limitations before constructing multi-head self-attention blocks.
  • Implemented positional encodings and residual layering to stabilize Transformer depth.
  • Compared masked vs. causal language modeling objectives that power modern LLMs.
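The multi-head machinery above reduces to one core operation. A single-head, pure-Python sketch of scaled dot-product attention over small lists; the toy vectors are illustrative and there is no batching:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)  # one probability per key
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query that strongly matches the first key mostly returns the first value.
out = attention([[10.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```

Multi-head attention runs several copies of this with learned projections and concatenates the results; masking simply sets disallowed scores to a large negative number before the softmax.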

Hands-On Projects

  • Built a simplified Transformer from scratch in code, reinforcing encoder-decoder design patterns.
  • Fine-tuned BERT for sentiment analysis to appreciate transfer learning efficiency.
PyTorch · Transformers · BERT · Attention Mechanisms

Week 4 — Fine-Tuning, LangChain, LangGraph

Operationalized LLMs for task-specific performance with parameter-efficient techniques and orchestration tools.

Key Concepts

  • Practiced LoRA and QLoRA strategies to adapt large checkpoints without overwhelming compute.
  • Navigated Hugging Face model repositories and dataset versioning workflows.
  • Modeled agentic behavior using LangChain constructs: prompts, memory, chains, and tool-aware agents.
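The compute savings behind LoRA come from simple arithmetic: a full d_out × d_in weight update is replaced by two low-rank factors. A back-of-the-envelope sketch, where the 4096 × 4096 layer and rank 8 are illustrative choices rather than any specific checkpoint:

```python
def lora_trainable_params(d_out, d_in, r):
    """Compare a full weight update against a rank-r LoRA update W + B @ A."""
    full = d_out * d_in              # every entry of W is trainable
    low_rank = r * (d_out + d_in)    # B is d_out x r, A is r x d_in
    return full, low_rank

full, low_rank = lora_trainable_params(4096, 4096, r=8)
ratio = low_rank / full
print(full, low_rank, f"{100 * ratio:.2f}%")  # 16777216 65536 0.39%
```

QLoRA pushes the same idea further by keeping the frozen base weights in 4-bit precision while the small adapter matrices train in higher precision.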

Hands-On Projects

  • Fine-tuned a summarization model using LoRA adapters and evaluated it on QA benchmarks.
  • Prototyped a LangChain-based Q&A assistant with LangGraph workflows to manage conversational state.
LoRA · QLoRA · Hugging Face · LangChain · LangGraph

Week 5 — Vector Databases and Retrieval-Augmented Generation

Designed production-ready RAG systems combining knowledge retrieval with responsive generation.

Key Concepts

  • Benchmarked vector databases with a focus on ChromaDB orchestration and embedding hygiene.
  • Connected retrieval pipelines to LangChain retrievers and prompt templates.
  • Prototyped front-end touchpoints using Streamlit for rapid iteration with stakeholders.
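At its core, the retrieval step above is nearest-neighbor search over embeddings. A dependency-free sketch using cosine similarity; the 3-dimensional "embeddings" are hypothetical stand-ins for vectors a real embedding model would produce:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, top_k=1):
    """Return the top_k (score, doc_id) pairs by cosine similarity."""
    scored = sorted(((cosine(query_vec, vec), doc) for doc, vec in store.items()),
                    reverse=True)
    return scored[:top_k]

store = {
    "refund-policy": [0.9, 0.1, 0.0],
    "api-reference": [0.0, 0.2, 0.9],
}
hits = retrieve([0.8, 0.0, 0.1], store)
# the refund-policy chunk scores highest for this query vector
```

A vector database like ChromaDB does the same ranking with approximate indexes at scale; the retrieved chunks are then stuffed into the prompt template before generation.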

Hands-On Projects

  • Delivered an end-to-end ChatGPT-style assistant powered by LangGraph, ChromaDB, web search tools, and memory graphs.
  • Built a multimodal Streamlit app for image generation and captioning workflows.
ChromaDB · Retrieval-Augmented Generation · Streamlit · LangChain

Week 6 — Trending Topics

Surveyed the fast-moving landscape of tooling and research to stay ahead of production roadmaps.

Key Concepts

  • Experimented with Model Context Protocol (MCP) integrations for structured agent interoperability.
  • Explored Ollama for lightweight local deployment and rapid iteration loops.
  • Studied Mixture of Experts architectures and Chain-of-Thought prompting techniques.
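The Mixture of Experts idea reduces to a learned router plus independent expert networks. A top-1 gating sketch in plain Python, where the gate and experts are toy functions rather than trained modules:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_top1(x, gate, experts):
    """Route x to the single expert with the highest gate probability."""
    probs = softmax(gate(x))
    top = max(range(len(probs)), key=probs.__getitem__)
    # Only the chosen expert runs per input -- that is the compute saving.
    return probs[top] * experts[top](x), top

experts = [lambda x: x * 2.0, lambda x: x + 10.0]
gate = lambda x: [-x, x]  # toy logits: prefer expert 1 for positive inputs

_, chosen_pos = moe_top1(3.0, gate, experts)
_, chosen_neg = moe_top1(-3.0, gate, experts)
```

Production MoE layers route per token with top-k gating and add load-balancing losses, but the sparsity principle is the same.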

Hands-On Projects

  • Ran accelerated fine-tuning experiments with Unsloth to validate training-speed trade-offs.
  • Reverse-engineered DeepSeek architectural patterns to inform internal platform direction.
Model Context Protocol · Ollama · Unsloth · Mixture of Experts · DeepSeek

Week 7 — Projects and Forward Momentum

Consolidated learnings through applied builds and outlined a roadmap for continued mastery.

Key Concepts

  • Dived into diffusion models and vision transformers to extend generative fluency beyond text.
  • Connected CLIP-style multimodal representations to prompt engineering best practices.
  • Documented repeatable workflows for distillation and deployment-ready evaluation.
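The diffusion work above rests on one closed-form operation: noising a sample toward a Gaussian. A DDPM-style forward-process sketch on scalars; the alpha_bar values are illustrative:

```python
import math
import random

def forward_diffuse(x0, alpha_bar, rng):
    """Sample x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(0)
# alpha_bar = 1.0 leaves the signal untouched; alpha_bar -> 0 is pure noise.
clean = forward_diffuse(2.0, 1.0, rng)
noisy = [forward_diffuse(2.0, 0.5, rng) for _ in range(10_000)]
mean_noisy = sum(noisy) / len(noisy)
# mean_noisy should sit near sqrt(0.5) * 2.0 ~= 1.414
```

Training then amounts to teaching a network to predict the added noise at each step so the process can be run in reverse at generation time.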

Hands-On Projects

  • Shipped prompt engineering playbooks tuned for enterprise chat and creative tooling.
  • Prototyped a diffusion-based image pipeline with safeguards and evaluation metrics.
Diffusion Models · Vision Transformers · CLIP · Prompt Engineering

Capstone Delivery

The final deliverable was a production-grade RAG assistant tailored to enterprise documentation. Using LangGraph for workflow management, ChromaDB for vector storage, and Streamlit for stakeholder demos, I built an experience that combined web search, guarded tool usage, and memory-aware conversations.

Alongside the assistant, I shipped a multimodal creation suite that orchestrates image and video captioning with diffusion-based generation. This project reinforced best practices for evaluation, guardrails, and rapid iteration across modalities.

What Stuck With Me

Production-readiness hinges on observability and iterative workflows. Keeping TensorBoard dashboards, evaluation suites, and LangSmith traces close at hand accelerates delivery while maintaining confidence.

Parameter-efficient fine-tuning (LoRA, QLoRA) is now my default for adapting models to bespoke domain needs without incurring full retraining costs. Combined with managed hosting or Ollama, it keeps experimentation nimble.

The best GenAI experiences are opinionated. Codifying prompt libraries, retrieval recipes, and evaluation checklists ensures repeatable wins across teams.

Finally, staying curious about emerging research — diffusion, multimodal alignment, distillation — keeps the roadmap forward-looking and resilient to a rapidly shifting landscape.

Article by Theja Kunuthuru