We do not use tools because they are trendy. We use them because they are right for the job — and because our team knows them deeply enough to use them responsibly.
FRONTEND
React (UI library) — The baseline for production web apps. Server components, deep ecosystem, team-wide fluency.
Next.js (App framework) — Routing, caching, and server components wired right out of the box. Right default for most apps.
TypeScript (Language) — Every build is typed. Catches whole classes of bugs before runtime and makes refactors safer.
Astro (Marketing & content) — Content-first pages that ship minimal JS. Islands architecture where interactivity is needed.
Vite (Build tool) — Fast dev server and build pipeline. Where speed of iteration matters more than a full framework.
TanStack (Data & routing) — Query, Router, Table — the typed primitives we reach for when the app gets state-heavy. A minimal sketch follows this list.
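To make the TanStack entry concrete, here is a minimal TanStack Query sketch in TypeScript. The Project type and the /api/projects endpoint are placeholders for illustration, and the component assumes the app is already wrapped in a QueryClientProvider higher up the tree.

```tsx
import { useQuery } from "@tanstack/react-query";

// Hypothetical type and endpoint, used only to illustrate the API shape.
type Project = { id: string; name: string };

async function fetchProjects(): Promise<Project[]> {
  const res = await fetch("/api/projects");
  if (!res.ok) throw new Error("Failed to load projects");
  return res.json();
}

export function ProjectList() {
  // The query is cached under its key, retried on failure, and typed end to end.
  const { data, isPending, error } = useQuery({
    queryKey: ["projects"],
    queryFn: fetchProjects,
  });

  if (isPending) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {(data ?? []).map((project) => (
        <li key={project.id}>{project.name}</li>
      ))}
    </ul>
  );
}
```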
BACKEND
Node.js (Runtime) — Mature ecosystem, great primitives for I/O-bound services, and the language we share with the frontend.
Bun (Runtime) — Faster JS runtime with a batteries-included toolkit — install, bundle, test — in one binary.
Hono (Edge framework) — Tiny, fast web framework for edge runtimes and Bun. Great for APIs that need to be close to the user. See the sketch after this list.
NestJS (Node framework) — Opinionated, modular Node framework for larger services where structure and DI pay off.
Express (Node framework) — The minimal, no-ceremony choice when all we need is a small HTTP layer.
GraphQL (API protocol) — Strongly typed API contract when the client needs fine-grained control over the response shape.
Python (AI & data) — Default for AI, data, and ML workloads — rich scientific stack and mature tooling.
Go (High-throughput) — When latency and throughput matter. Strong concurrency primitives, minimal runtime.
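As an illustration of the Hono entry above, here is a tiny Hono service in TypeScript. The routes and response shapes are made up for the example; only the deployment adapter changes between Bun, Cloudflare Workers, and Node.

```ts
import { Hono } from "hono";

const app = new Hono();

// Illustrative routes only; the context helpers return properly typed Response objects.
app.get("/status", (c) => c.json({ ok: true }));

app.get("/hello/:name", (c) => {
  // Path params are parsed by the router.
  const name = c.req.param("name");
  return c.text(`Hello, ${name}!`);
});

// On Bun the default export is served directly; other runtimes use their own adapters.
export default app;
```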
AI STACK
OpenAI (LLM API) — GPT-4 class models for production features — robust function calling, structured output, solid latency.
Anthropic (LLM API) — Claude for reasoning-heavy, agentic, and long-context workloads.
Gemini (LLM API) — Google Gemini for multi-modal tasks and tight integration with Google Cloud tooling.
Groq (Fast inference) — Ultra-low-latency LLM inference when response time is the UX.
Mistral (LLM API) — Open-weight and hosted models when we need cost-performance control.
Hugging Face (Model hub) — Open-source model hub, datasets, and inference endpoints for research-grade AI.
Cohere (Embeddings & rerank) — High-quality embeddings and rerankers for retrieval — a strong default for RAG quality.
Replicate (Hosted inference) — Run open-source models on demand without operating GPUs ourselves.
LangChain (Orchestration) — For chaining LLM calls, tool use, and memory when we need a well-trodden abstraction.
LangGraph (Agents) — Graph-based agent flows with proper state, checkpoints, and human-in-the-loop.
LlamaIndex (RAG framework) — Retrieval pipelines, indexing, and data-aware LLM apps.
CrewAI (Multi-agent) — Coordinating multiple specialised agents on a shared goal.
Vercel AI SDK (Client SDK) — Typed React / Node helpers for streaming, tool calls, and chat UIs in Next.js apps. A streaming sketch follows this list.
Pinecone (Vector DB) — Managed vector store for retrieval-augmented generation and semantic search at scale.
Qdrant (Vector DB) — Self-hostable vector search when data has to stay inside your infrastructure.
Chroma (Vector DB) — Embedded-first vector store for lightweight RAG prototypes and small corpora.
Ollama (Local inference) — Running models locally for privacy-sensitive workloads and dev-time iteration.
LangSmith (Observability) — Tracing, evaluation, and debugging for LLM apps in production.
Langfuse (Observability, OSS) — Self-hostable alternative for LLM tracing and evals when data residency matters.
PyTorch (Custom models) — When a task truly needs a custom-trained or fine-tuned model, not a prompt.
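To ground the Vercel AI SDK entry, here is a hedged sketch of a streaming chat route in a Next.js App Router project. Helper names shift between AI SDK major versions (this follows the v4-style API), and the model id is only an example, not a statement of which model a given project uses.

```ts
// app/api/chat/route.ts: a minimal streaming endpoint (AI SDK v4-style; adjust for your version).
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText kicks off the completion and exposes helpers to stream it back to the client.
  const result = streamText({
    model: openai("gpt-4o-mini"), // example model id
    system: "You are a concise assistant.",
    messages,
  });

  return result.toDataStreamResponse();
}
```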
DATABASE
PostgreSQL (Primary) — Battle-tested, feature-rich, right for most business applications.
Drizzle (ORM) — Typed SQL in TypeScript — edge-runtime friendly, no hidden query magic. See the sketch after this list.
MongoDB (Document store) — For unstructured or rapidly evolving data schemas.
Redis (Cache & queues) — Caching, session management, and real-time pub/sub. Keeps apps fast under load.
Supabase (BaaS) — Postgres with auth, storage, and real-time built in. Accelerates smaller teams.
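As a sketch of what the Drizzle entry means in practice, here is a small Postgres schema and query in TypeScript. The table, columns, and helper function are invented for illustration.

```ts
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";
import { eq } from "drizzle-orm";

// The schema lives in plain TypeScript, so the queries below are fully typed.
export const users = pgTable("users", {
  id: serial("id").primaryKey(),
  email: text("email").notNull(),
  createdAt: timestamp("created_at").defaultNow(),
});

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// Reads like SQL and returns typed rows; there is no hidden query generation.
export async function findUserByEmail(email: string) {
  const rows = await db.select().from(users).where(eq(users.email, email));
  return rows[0] ?? null;
}
```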
INFRASTRUCTURE & DEVOPS
AWS (Cloud) — Default cloud for larger deployments. Deep service catalogue, proven reliability.
Google Cloud (Cloud) — Where AI and data workloads benefit from native tooling (Vertex, BigQuery).
Docker (Containers) — Consistent environments from dev to prod. The packaging baseline.
Kubernetes (Orchestration) — Orchestration at scale — only when the complexity actually earns its keep.
Vercel (Frontend hosting) — The fastest path to production for Next.js and static apps. Great DX.
GitHub Actions (CI/CD) — Automated testing and deployment pipelines on every project, no exceptions.
INTEGRATIONS & APIS
Make (Workflow) — Visual automation workflows where custom code is overkill.
Zapier (Workflow) — Wide app coverage for quick, reliable glue between SaaS tools.
n8n (Self-hosted automation) — When automations need to stay inside your infrastructure.
HubSpot (CRM) — For full-funnel CRM integrations and lifecycle marketing.
Salesforce (CRM) — Enterprise sales stacks and custom Salesforce API work.
Stripe (Payments) — Recurring billing, invoicing, and global payment flows. A minimal sketch follows this list.
Razorpay (Payments, IN) — UPI, cards, and India-first payment flows with clean APIs.
Twilio (Comms) — SMS, voice, and programmable messaging at scale.
WhatsApp Business (Messaging) — Customer conversations where WhatsApp is the primary channel.
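To illustrate the Stripe entry, here is a minimal subscription Checkout sketch using the official Node SDK in TypeScript. The price ID, URLs, and function name are placeholders, not part of any specific project.

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? "");

// Creates a hosted Checkout session for a recurring price; IDs and URLs are illustrative only.
export async function createSubscriptionCheckout(customerEmail: string) {
  const session = await stripe.checkout.sessions.create({
    mode: "subscription",
    customer_email: customerEmail,
    line_items: [{ price: "price_example_123", quantity: 1 }],
    success_url: "https://example.com/billing/success",
    cancel_url: "https://example.com/billing/cancel",
  });
  // Redirect the customer to session.url to complete payment.
  return session.url;
}
```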
OUR PHILOSOPHY
A startup does not need Kubernetes. An enterprise does not need to rebuild on a trendy new framework. We use what is right — and explain every decision.
Have a specific tech requirement? Let's talk.