<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>FEMA on Chinmay Deval</title>
    <link>https://chinmaydeval.com/tags/fema/</link>
    <description>Recent content in FEMA on Chinmay Deval</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Tue, 31 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://chinmaydeval.com/tags/fema/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>15x Faster Geospatial Pipelines: Why I Swapped Pandas for DuckDB</title>
      <link>https://chinmaydeval.com/blog/15x-faster-geospatial-pipelines-why-i-swapped-pandas-for-duckdb/</link>
      <pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://chinmaydeval.com/blog/15x-faster-geospatial-pipelines-why-i-swapped-pandas-for-duckdb/</guid>
      <description>⚡ TL;DR
- The Task: Map 73M+ FEMA NFIP policy records and 3M+ claims (since 1978) to U.S. county boundaries
- The Bottleneck: A Pandas/GeoPandas pipeline that took 10.5 s and consumed significant memory
- The Fix: A single DuckDB SQL query using the spatial extension
- The Result: 0.69 seconds (15.2x faster)

This post walks through how I optimized a real-world FEMA geospatial pipeline using DuckDB, and why it significantly outperforms Pandas for large-scale spatial joins.</description>
    </item>
    
  </channel>
</rss>
