Nijesh Kanjinghat
AI Engineering Lead, APAC @ IBM
Engineering reliable, scalable, and aligned AI systems
I build scalable LLM and multi-agent systems with alignment, evaluation, and robustness at the core. With more than 16 years spanning foundational ML, distributed systems, and AI governance, I focus on making intelligent systems measurable, interpretable, and safe in real-world environments. Outside of work, I serve as Vice Chair of IEEE SA IC25-003™, Multi-Agent AI Evaluation.
WHAT I DO
LLM Systems & Performance
MLOps/LLMOps pipelines, LLM inference optimization, and hardware-aware performance engineering on GPUs and TPUs using Triton and Pallas.
Evaluation & Agent Reliability
AgentOps, multi-agentic system design, and evaluation frameworks that measure what matters — beyond LLM-as-judge.
Governance & Safety Engineering
Responsible AI frameworks, IEEE SA IC25-003™ multi-agent evaluation standards, and production safety controls.
LATEST WRITING
all posts →
PROJECTS
all projects →
RLM-Codelens
Architecture intelligence for large codebases. Combines AST parsing, graph analysis, and Recursive Language Models to detect anti-patterns, cycles, and layering violations. Multi-language support, tested on repos up to 3.4M LOC.
Operationalizing GenAI
Code and materials from my ODSC APAC 2024 keynote. Practical implementations of knowledge distillation, pruning, quantization, and model parallelization for production LLM deployment.
Telecom Domain Fine-Tuning
Synthetic conversation dataset generator for training telecom customer service AI. Logic-based plan suggestions, multi-turn dialogues, and built-in dataset validation with quality metrics.
Experiments coming soon.