<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Muhammad Zulqarnain | Full Stack AI & Geospatial]]></title><description><![CDATA[Muhammad Zulqarnain | Full Stack AI & Geospatial]]></description><link>https://blog.zunain.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1593680282896/kNC7E8IR4.png</url><title>Muhammad Zulqarnain | Full Stack AI &amp; Geospatial</title><link>https://blog.zunain.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 09:47:39 GMT</lastBuildDate><atom:link href="https://blog.zunain.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Next.js Production Apps: Server Components and API Routes]]></title><description><![CDATA[Next.js has become my framework of choice for building production-ready full-stack applications. Here's how I leverage its powerful features.
Why Next.js?
Complete Full-Stack Solution

Server and client components
Built-in API routes
Automatic code s...]]></description><link>https://blog.zunain.com/nextjs-production-apps-server-components-and-api-routes</link><guid isPermaLink="true">https://blog.zunain.com/nextjs-production-apps-server-components-and-api-routes</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[React]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Muhammad Zulqarnain]]></dc:creator><pubDate>Mon, 06 Apr 2026 23:29:00 GMT</pubDate><content:encoded><![CDATA[<p>Next.js has become my framework of choice for building production-ready full-stack applications. Here's how I leverage its powerful features.</p>
<h2 id="heading-why-nextjs">Why Next.js?</h2>
<p><strong>Complete Full-Stack Solution</strong></p>
<ul>
<li>Server and client components</li>
<li>Built-in API routes</li>
<li>Automatic code splitting</li>
<li>Optimized image handling</li>
<li>SEO-friendly by default</li>
</ul>
<h2 id="heading-server-components-app-router">Server Components (App Router)</h2>
<pre><code class="lang-tsx">// app/page.tsx
import { getProducts } from '@/lib/data';
import { ProductCard } from '@/components/ProductCard'; // adjust to your component path

export default async function HomePage() {
  const products = await getProducts();

  return (
    &lt;main&gt;
      &lt;h1&gt;Products&lt;/h1&gt;
      {products.map(product =&gt; (
        &lt;ProductCard key={product.id} product={product} /&gt;
      ))}
    &lt;/main&gt;
  );
}
</code></pre>
<h2 id="heading-api-routes">API Routes</h2>
<pre><code class="lang-ts"><span class="hljs-comment">// app/api/products/route.ts</span>
<span class="hljs-keyword">import</span> { NextResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/server'</span>;
<span class="hljs-keyword">import</span> { db } <span class="hljs-keyword">from</span> <span class="hljs-string">'@/lib/db'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">GET</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> products = <span class="hljs-keyword">await</span> db.product.findMany();
    <span class="hljs-keyword">return</span> NextResponse.json(products);
  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-keyword">return</span> NextResponse.json(
      { error: <span class="hljs-string">'Failed to fetch products'</span> },
      { status: <span class="hljs-number">500</span> }
    );
  }
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">POST</span>(<span class="hljs-params">request: Request</span>) </span>{
  <span class="hljs-keyword">const</span> body = <span class="hljs-keyword">await</span> request.json();
  <span class="hljs-keyword">const</span> product = <span class="hljs-keyword">await</span> db.product.create({ data: body });
  <span class="hljs-keyword">return</span> NextResponse.json(product);
}
</code></pre>
<h2 id="heading-data-fetching-patterns">Data Fetching Patterns</h2>
<pre><code class="lang-ts"><span class="hljs-comment">// Server Component - Direct database access</span>
<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getUser</span>(<span class="hljs-params">id: <span class="hljs-built_in">string</span></span>) </span>{
  <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> db.user.findUnique({ where: { id } });
  <span class="hljs-keyword">return</span> user;
}

<span class="hljs-comment">// With caching</span>
<span class="hljs-keyword">import</span> { unstable_cache } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/cache'</span>;

<span class="hljs-keyword">const</span> getCachedProducts = unstable_cache(
  <span class="hljs-keyword">async</span> () =&gt; db.product.findMany(),
  [<span class="hljs-string">'products'</span>],
  { revalidate: <span class="hljs-number">3600</span> }
);
</code></pre>
<h2 id="heading-my-production-setup">My Production Setup</h2>
<p><strong>Performance</strong></p>
<ul>
<li>Image optimization with next/image</li>
<li>Font optimization</li>
<li>Bundle analyzer</li>
</ul>
<p><strong>SEO</strong></p>
<ul>
<li>Metadata API</li>
<li>Sitemap generation</li>
<li>robots.txt</li>
</ul>
<p><strong>Developer Experience</strong></p>
<ul>
<li>TypeScript throughout</li>
<li>ESLint &amp; Prettier</li>
<li>Tailwind CSS</li>
</ul>
<p>Next.js simplifies full-stack development while maintaining excellent performance and SEO. The App Router and Server Components have transformed how I build web applications.</p>
]]></content:encoded></item><item><title><![CDATA[How We Scaled Quran.com to 50M+ Monthly Users: Architecture Lessons from the Inside]]></title><description><![CDATA[Quran.com is one of the most-visited Islamic websites in the world. During my time there as a Full Stack Engineer, the platform crossed 50 million monthly active users — a milestone that forced us to rethink nearly every assumption we'd made about ar...]]></description><link>https://blog.zunain.com/how-we-scaled-qurancom-to-50m-monthly-users-architecture-lessons-from-the-inside</link><guid isPermaLink="true">https://blog.zunain.com/how-we-scaled-qurancom-to-50m-monthly-users-architecture-lessons-from-the-inside</guid><category><![CDATA[architecture]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[performance]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Muhammad Zulqarnain]]></dc:creator><pubDate>Mon, 06 Apr 2026 23:14:16 GMT</pubDate><content:encoded><![CDATA[<p>Quran.com is one of the most-visited Islamic websites in the world. During my time there as a Full Stack Engineer, the platform crossed 50 million monthly active users — a milestone that forced us to rethink nearly every assumption we'd made about architecture, data, and performance.</p>
<p>This post covers the technical decisions that got us there, the tradeoffs we made, and what I'd do differently.</p>
<h2 id="heading-the-scale-reality">The Scale Reality</h2>
<p>50M MAU sounds like an abstract number. In practice it means spikes of hundreds of thousands of concurrent users during Ramadan, Quran audio files being streamed from every continent simultaneously, prayer time calculations for locations across the entire globe, and serving both high-bandwidth users in the West and users on 2G/3G connections in South Asia and Sub-Saharan Africa.</p>
<p>That last point shaped almost every technical decision. You can't optimize purely for fast connections. You have to think about the user opening the site on a 2G connection in rural Pakistan at 3am for Fajr prayer.</p>
<h2 id="heading-frontend-nextjs-as-the-foundation">Frontend: Next.js as the Foundation</h2>
<p>We built on Next.js. Server-side rendering was critical for two reasons: SEO (the Quran text needs to be indexable) and first-load performance on slow connections.</p>
<p>Each Surah page (114 in total) is statically generated at build time — meaning the first HTML response is near-instant, no waiting for JS to hydrate before the user sees text.</p>
<p>We were aggressive about code splitting: audio player logic only loads when the user actually interacts with audio. Next.js Image with AVIF/WebP fallback cut image payloads significantly while staying compatible with older devices.</p>
<h2 id="heading-offline-audio-streaming">Offline Audio Streaming</h2>
<p>One of the most technically interesting problems: how do you let a user listen to Quran recitations with no internet?</p>
<p>Quran audio is broken into individual ayah (verse) recordings — up to 6,236 of them per reciter. You can't cache all of that upfront.</p>
<p>Our solution: when a user starts playing a Surah, we silently prefetch the next 10 ayahs in the background using Service Workers and the Cache API. If connectivity drops, playback continues seamlessly.</p>
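<p>The window bookkeeping itself is easy to sketch. The snippet below is illustrative only (the URL pattern, ayah counts, and function names are hypothetical placeholders, not the actual Quran.com CDN layout); in production this logic runs in a Service Worker that writes the fetched responses into the Cache API:</p>

```python
# Illustrative sketch: compute the next-N ayah URLs to prefetch.
# The URL pattern and ayah counts are hypothetical placeholders.
AYAH_COUNTS = {1: 7, 2: 286, 114: 6}  # ayahs per Surah (subset for illustration)

def prefetch_urls(reciter, surah, current_ayah, window=10):
    """Return audio URLs for the next `window` ayahs, clamped to the Surah end."""
    last = AYAH_COUNTS[surah]
    end = min(current_ayah + window, last)
    return [
        "https://audio.example.com/{}/{:03d}{:03d}.mp3".format(reciter, surah, a)
        for a in range(current_ayah + 1, end + 1)
    ]
```

<p>Clamping to the Surah length matters: near the end of a short Surah the prefetch window shrinks instead of requesting ayahs that don't exist.</p>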
<p>We also gave users an explicit "Download for offline" option for Surahs they listen to regularly.</p>
<h2 id="heading-geospatial-prayer-times-at-scale">Geospatial: Prayer Times at Scale</h2>
<p>Prayer times are calculated based on GPS coordinates — simple until you consider DST rules that differ by country, multiple calculation methodologies (Hanafi, Shafi'i, MWL, ISNA), and users who want accurate times regardless of whether they share their location.</p>
<p>We moved prayer time calculation to the client side. The astronomy math runs in under a millisecond on any modern device, requires no server round-trip, and stores no user location data.</p>
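<p>The heart of that math is the standard hour-angle equation: given latitude, the sun's declination, and a depression angle below the horizon (roughly 18° in many Fajr conventions), it tells you how far from solar noon the sun crosses that angle. A minimal sketch follows; a real implementation also needs the solar declination and equation of time for the date, plus the per-methodology angles mentioned above:</p>

```python
import math

def hour_angle_deg(latitude_deg, declination_deg, depression_deg):
    """Hour angle (degrees from solar noon) at which the sun sits
    `depression_deg` below the horizon. Divide by 15 to get hours."""
    lat = math.radians(latitude_deg)
    decl = math.radians(declination_deg)
    cos_h = (-math.sin(math.radians(depression_deg))
             - math.sin(lat) * math.sin(decl)) / (math.cos(lat) * math.cos(decl))
    return math.degrees(math.acos(cos_h))
```

<p>Sanity check: at the equator on an equinox this gives 90° for a 0° angle (sunrise six hours before solar noon) and 108° for an 18° Fajr angle.</p>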
<p>For map-based features — nearby mosques, Qibla direction — we used Mapbox GL JS. The vector tile approach is ideal: compact, cacheable, and sharp at any zoom level.</p>
<h2 id="heading-database-and-infrastructure">Database and Infrastructure</h2>
<p>PostgreSQL was our primary data store. The Quran text is relational in nature — verses have translations, tafsirs, word-by-word breakdowns, audio timestamps, and cross-references. A relational model maps cleanly to this structure.</p>
<p>For our read-heavy endpoints (almost all of them), we leaned on read replicas to distribute query load, Redis for caching frequently-accessed data, and aggressive CDN caching at the edge for static API responses.</p>
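<p>The Redis layer followed the usual cache-aside pattern: check the cache, fall through to the database on a miss, write the result back with a TTL. A minimal sketch, with an in-process dict standing in for Redis and an illustrative TTL (the class and parameter names here are made up):</p>

```python
import time

class TTLCache:
    """Cache-aside sketch; a dict stands in for Redis, TTL is illustrative."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}

    def get_or_load(self, key, loader):
        entry = self.store.get(key)
        now = time.time()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]          # cache hit: skip the database entirely
        value = loader()             # cache miss: query the primary/replica
        self.store[key] = (value, now)
        return value
```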
<p>Infrastructure ran on AWS: ECS for containerized services, RDS for PostgreSQL, ElastiCache for Redis, CloudFront as CDN.</p>
<h2 id="heading-what-id-do-differently">What I'd Do Differently</h2>
<p><strong>Invest in observability earlier.</strong> We added distributed tracing later than we should have. When you're debugging a latency spike that only affects users in a specific region, you want granular trace data — not logs.</p>
<p><strong>Be more aggressive with edge caching.</strong> Many API responses could have been fully cached at the CDN layer, eliminating server load for the vast majority of requests. We were conservative about this early and paid the price during Ramadan traffic spikes.</p>
<p><strong>Design for low-bandwidth from day one.</strong> We retrofitted a lot of optimizations later. Starting with that constraint leads to better decisions upfront — smaller bundles, progressive loading, offline-first thinking baked in from the beginning.</p>
<h2 id="heading-closing-thoughts">Closing Thoughts</h2>
<p>Working on a platform where the content matters deeply to hundreds of millions of people sharpens your instincts. Performance isn't vanity. A 3-second improvement in load time is the difference between someone completing Fajr prayer with the app or giving up.</p>
<p>If you're working on similar challenges — high-scale web apps, geospatial features, or offline-capable PWAs — find me at <a target="_blank" href="https://zunain.com">zunain.com</a> or on GitHub at <a target="_blank" href="https://github.com/mzulqarnain118">mzulqarnain118</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Production RAG Pipeline: Lessons from Real-World AI Apps]]></title><description><![CDATA[If you've built more than a toy RAG prototype, you already know that the hard part isn't connecting an LLM to a vector database. The hard part is everything that comes after: degraded retrieval qualit]]></description><link>https://blog.zunain.com/building-a-production-rag-pipeline-lessons-from-real-world-ai-apps</link><guid isPermaLink="true">https://blog.zunain.com/building-a-production-rag-pipeline-lessons-from-real-world-ai-apps</guid><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Muhammad Zulqarnain]]></dc:creator><pubDate>Mon, 06 Apr 2026 22:43:40 GMT</pubDate><content:encoded><![CDATA[<p>If you've built more than a toy RAG prototype, you already know that the hard part isn't connecting an LLM to a vector database. The hard part is everything that comes after: degraded retrieval quality on edge cases, latency spikes at scale, context windows filled with irrelevant chunks, and evaluation that tells you nothing useful.</p>
<p>This post covers the production patterns we actually use — chunking strategies, FAISS setup, cross-encoder reranking, and offline evaluation metrics that give you real signal.</p>
<h2>The Core Problem with Naive RAG</h2>
<p>Most tutorials show you this pipeline:</p>
<ol>
<li>Split document into chunks</li>
<li>Embed chunks</li>
<li>Store in vector DB</li>
<li>At query time: embed query → find nearest chunks → stuff into LLM prompt</li>
</ol>
<p>It works in demos. It fails in production. Here's why:</p>
<p><strong>Fixed-size chunking splits sentences mid-thought.</strong> A chunk ending with "The model performs well when" and the next chunk starting "the temperature is above 0.7" means neither chunk retrieves correctly for a query about model behavior.</p>
<p><strong>No score threshold means garbage in, garbage out.</strong> If you retrieve the top-k regardless of score, you'll pass irrelevant chunks to the LLM. The LLM will either hallucinate or say "I don't know" — even when the answer exists in your corpus.</p>
<p><strong>Bi-encoder similarity is approximate.</strong> Fast for retrieval, but the dot product between independently encoded embeddings is a coarse approximation of relevance. A cross-encoder that jointly processes the query and document is far more accurate.</p>
<h2>Chunking: Get This Right First</h2>
<p>Four strategies worth knowing:</p>
<p><strong>Fixed-size chunking</strong> — Simple. Chunk every N tokens with M token overlap. Fast, predictable. Bad for prose, acceptable for structured data.</p>
<p><strong>Sentence-boundary chunking</strong> — Split on sentence endings, accumulate until you hit the size limit, then start a new chunk with overlap. This preserves semantic units. What we use for most text at Quran.com.</p>
<p><strong>Recursive chunking</strong> — Try paragraph splits first. If still too large, try sentence splits. If still too large, try word splits. LangChain's <code>RecursiveCharacterTextSplitter</code> is the standard implementation.</p>
<p><strong>Semantic chunking</strong> — Embed each sentence, detect where cosine similarity drops (meaning shift), split there. Highest quality, ~5x slower. Worth it for high-stakes retrieval.</p>
<p>Key parameter: <strong>overlap matters more than chunk size.</strong> We found 10–15% overlap (e.g., 50 tokens of overlap on 400-token chunks) gives the best recall without wasting context budget. Too little and you lose context at boundaries. Too much and you fill the LLM prompt with duplicate content.</p>
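<p>The sentence-boundary strategy with overlap fits in a few lines. This is a simplified illustration: it assumes sentences are already split and counts words rather than tokens, which production code would not do:</p>

```python
def sentence_chunks(sentences, max_words=80, overlap_words=10):
    """Greedy sentence-boundary chunking with word-level overlap (sketch;
    count tokens, not words, in real pipelines)."""
    chunks, current = [], []
    for sent in sentences:
        words = sent.split()
        if current and len(current) + len(words) > max_words:
            chunks.append(" ".join(current))
            current = current[-overlap_words:]  # carry overlap into next chunk
        current.extend(words)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

<p>Each new chunk starts with the tail of the previous one, so a query matching content near a boundary still retrieves a chunk that contains both sides of it.</p>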
<h2>FAISS Setup for Production</h2>
<pre><code class="language-python">import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
dim = model.get_sentence_embedding_dimension()  # 384

# For small-medium corpora (&lt;500k docs): IndexFlatIP
index = faiss.IndexFlatIP(dim)

# For large corpora (&gt;500k docs): IndexHNSW
# index = faiss.IndexHNSWFlat(dim, 32)
</code></pre>
<p><strong>Always normalize your embeddings</strong> before adding to an IP index. This makes inner product equivalent to cosine similarity.</p>
<pre><code class="language-python">def embed_and_add(texts, index, model, batch_size=64):
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i+batch_size]
        embeddings = model.encode(batch, normalize_embeddings=True)
        index.add(embeddings.astype('float32'))
</code></pre>
<p><strong>Score threshold is your most important hyperparameter.</strong> Set it empirically on a validation set, not by gut.</p>
<pre><code class="language-python">def retrieve(query, index, chunks, model, top_k=10, threshold=0.5):
    query_vec = model.encode([query], normalize_embeddings=True).astype('float32')
    scores, indices = index.search(query_vec, top_k)
    results = []
    for score, idx in zip(scores[0], indices[0]):
        if idx != -1 and score &gt;= threshold:
            results.append((chunks[idx], float(score)))
    return results
</code></pre>
<h2>Cross-Encoder Reranking</h2>
<p>The pattern: retrieve top-20 with a fast bi-encoder, then rerank to top-5 with a cross-encoder.</p>
<p>Cross-encoders jointly process the query and document through the full attention mechanism. They're 10–100x slower than bi-encoders but far more accurate at judging relevance.</p>
<pre><code class="language-python">from sentence_transformers import CrossEncoder

reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

def rerank(query, candidates, top_k=5):
    pairs = [(query, chunk.text) for chunk, _ in candidates]
    scores = reranker.predict(pairs)
    ranked = sorted(zip([c for c, _ in candidates], scores),
                   key=lambda x: x[1], reverse=True)
    return ranked[:top_k]
</code></pre>
<p>At Quran.com, adding a cross-encoder reranker improved our semantic search precision from ~72% to ~89% on our eval set.</p>
<h2>Evaluation: Context Precision and Recall</h2>
<p>Don't evaluate RAG by running queries through the LLM and judging the answers. Evaluate the retrieval layer directly.</p>
<ul>
<li><strong>Context Precision</strong> — Of the chunks I retrieved, what fraction are actually relevant?</li>
<li><strong>Context Recall</strong> — Of all the relevant chunks in my corpus, what fraction did I retrieve?</li>
</ul>
<pre><code class="language-python">def context_precision(retrieved_chunks, relevant_sources):
    if not retrieved_chunks:
        return 0.0
    relevant = sum(1 for c in retrieved_chunks if c.source in relevant_sources)
    return relevant / len(retrieved_chunks)

def context_recall(retrieved_chunks, relevant_sources, all_chunks):
    total_relevant = sum(1 for c in all_chunks if c.source in relevant_sources)
    if total_relevant == 0:
        return 1.0
    retrieved_relevant = sum(1 for c in retrieved_chunks if c.source in relevant_sources)
    return retrieved_relevant / total_relevant
</code></pre>
<p>Build a test set of (query, relevant_source_ids) pairs. Run your pipeline. Track precision and recall over time.</p>
<h2>Production Patterns That Actually Matter</h2>
<p><strong>Cache at the right layer.</strong> Cache (query_embedding → chunk_ids), not (query_string → chunk_ids). Semantically similar queries hit the same cache entry. We use Redis with a 1-hour TTL and cut embedding inference load by ~60%.</p>
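<p>One illustrative way to key on the embedding rather than the string is to quantize the vector before hashing, so paraphrases that embed to nearby points share an entry. This is a sketch with made-up names and a dict standing in for Redis; rounding has boundary effects, so a production version might instead do a small nearest-neighbor lookup over cached keys:</p>

```python
import hashlib
import struct

def embedding_cache_key(vec, precision=2):
    """Hash a coordinate-wise rounded copy of the query embedding (sketch;
    the rounding precision is a tunable assumption)."""
    quantized = [round(x, precision) for x in vec]
    return hashlib.sha1(struct.pack("%df" % len(quantized), *quantized)).hexdigest()

cache = {}  # stands in for Redis with a TTL

def cached_retrieve(query_vec, retrieve_fn):
    key = embedding_cache_key(query_vec)
    if key not in cache:
        cache[key] = retrieve_fn(query_vec)  # miss: run the real retrieval
    return cache[key]
```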
<p><strong>Groundedness monitoring.</strong> Log every LLM response along with the context chunks. Periodically sample and check whether the answer is grounded in the context or hallucinated.</p>
<p><strong>Fallback on low confidence.</strong> If no chunk clears your score threshold, don't pass empty context to the LLM. Return an explicit "I don't have enough information to answer this" response. Users trust a system that knows what it doesn't know.</p>
<p><strong>Semi-supervised eval improvement.</strong> Every time a user clicks on a search result, that's a weak positive label. Accumulate these signals to periodically retune your threshold and swap embedding models.</p>
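<p>Retuning the threshold from those weak labels can be as simple as a grid search for the score cutoff that maximizes F1 against clicks. A sketch, assuming click de-biasing has already happened upstream:</p>

```python
def best_threshold(scored_clicks):
    """Pick the retrieval-score cutoff maximizing F1 against weak click
    labels (sketch). `scored_clicks` is a list of (score, clicked) pairs."""
    best, best_f1 = None, -1.0
    for t in sorted({s for s, _ in scored_clicks}):
        tp = sum(1 for s, c in scored_clicks if s >= t and c)
        fp = sum(1 for s, c in scored_clicks if s >= t and not c)
        fn = sum(1 for s, c in scored_clicks if s < t and c)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best, best_f1 = t, f1
    return best, best_f1
```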
<h2>Conclusion</h2>
<p>The gap between a RAG demo and production RAG is mostly about the decisions around the retrieval layer — chunking strategy, score thresholds, reranking, and systematic evaluation. Get these right and the LLM part largely takes care of itself.</p>
<p>The patterns here are what we've battle-tested at Quran.com serving 50M+ monthly users. They're not the only way, but they work.</p>
<p>If you want to see the full implementation as runnable code, check out the companion Kaggle notebook: <a href="https://kaggle.com/mzulqarnain118">rag_pipeline_from_scratch</a>.</p>
]]></content:encoded></item></channel></rss>