<?xml version="1.0" encoding="UTF-8"?>
<rss 
    version="2.0"
    xmlns:dc="http://purl.org/dc/elements/1.1/" 
    xmlns:content="http://purl.org/rss/1.0/modules/content/" 
    xmlns:atom="http://www.w3.org/2005/Atom" 
    xmlns:media="http://search.yahoo.com/mrss/" 
>
    <channel>
        <title><![CDATA[Wedgworth Technology]]></title>
        <description><![CDATA[Rooted in Tradition, Growing with Technology]]></description>
        <link>https://wedgworth.dev</link>
        <image>
            <url>https://wedgworth.dev/favicon.png</url>
            <title>Wedgworth Technology</title>
            <link>https://wedgworth.dev</link>
        </image>
        <generator>Ghost 6.22</generator>
        <lastBuildDate>Fri, 13 Mar 2026 19:58:17 -0400</lastBuildDate>
        <atom:link href="https://wedgworth.dev" rel="self" type="application/rss+xml"/>
        <ttl>60</ttl>

                <item>
                    <title><![CDATA[Eliminating Blind Spots: Using Technology to Bring Visibility to Operations]]></title>
                    <description><![CDATA[Presented originally at the 2025 ARA Conference and Expo in Salt Lake City — expanded here in blog form.

On Tuesday, I joined two fellow panelists to share how technology can improve operational efficiency. My portion focused on a simple but persistent challenge nearly every retailer faces: operational blind spots.

Across]]></description>
                    <link>https://wedgworth.dev/eliminating-blind-spots-using-technology-to-bring-visibility-to-operations/</link>
                    <guid isPermaLink="false">69161ae001c8980001b544b9</guid>


                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Thu, 04 Dec 2025 21:04:44 -0500</pubDate>

                        <media:content url="https://wedgworth.dev/content/images/2025/12/ara-panel.jpeg" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://wedgworth.dev/content/images/2025/12/ara-panel.jpeg" alt="Eliminating Blind Spots: Using Technology to Bring Visibility to Operations"/> <p><em>Presented originally at the </em><a href="https://ara.swoogo.com/agretailers25?ref=wedgworth.dev" rel="noreferrer"><em>2025 ARA Conference and Expo</em></a><em> in Salt Lake City — expanded here in blog form.</em></p><p>On Tuesday, I joined two fellow panelists to share how technology can improve operational efficiency. My portion focused on a simple but persistent challenge nearly every retailer faces: <strong>operational blind spots</strong>.</p><p>Across our plants and logistics, we depend on assets like trailers and bulk material that move constantly.  Not knowing when trailers are empty, where they are located, or what's happening with our physical inventory in between monthly manual estimates has left us with some pretty big blind spots.</p><p>Over the last three years, we’ve invested heavily in eliminating those blind spots, which has dramatically improved operations. 
This leads directly to improved customer service.</p><p>Here are three examples from my talk, which preceded a very engaging Q&amp;A session with the panel.</p><hr><h2 id="case-study-1-%E2%80%94-liquid-trailers"><strong>Case Study #1 — Liquid Trailers</strong></h2><h3 id="knowing-exactly-what%E2%80%99s-full-empty-and-where-everything-is">Knowing Exactly What’s Full, Empty, and Where Everything Is</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image.png" class="kg-image" alt="" loading="lazy" width="963" height="368" srcset="https://wedgworth.dev/content/images/size/w600/2025/12/image.png 600w, https://wedgworth.dev/content/images/2025/12/image.png 963w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">One of our two-tank liquid trailers</span></figcaption></figure><h3 id="problem">Problem</h3><p>We operate a lot of liquid fertilizer trailers across Florida. At peak times we'd run low on trailers, creating a lot of pressure to meet customer expectations on delivery times. Dispatch and sales teams would spend valuable time calling around trying to find empties we could pick up. </p><p>It's of paramount importance for us not only to meet but <strong><em>to exceed</em></strong> our customers' expectations. We had to find a solution, and fast.</p><p>The root problem: <strong>no visibility into fill levels.</strong></p><h3 id="solution"><strong>Solution</strong></h3><p>We found an off-the-shelf solution.  <a href="https://tankscan.com/?ref=wedgworth.dev" rel="noreferrer">TankScan</a> sold IoT sensors with a backend that provided an API for integrating the data into our systems. 
These sensors periodically read the fill levels of each tank and send that data, along with GPS coordinates, to their servers over cellular networks.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image-8-1-1.png" class="kg-image" alt="" loading="lazy" width="436" height="557"><figcaption><span style="white-space: pre-wrap;">One of the tank monitors, just arrived and ready to install</span></figcaption></figure><p>We ran a test on a few trailers, determined that it worked and that we could fetch the data from their API, then rolled it out to the rest of the fleet within months.</p><p>We pull this data into our system so we can tailor alerts and display it in a dispatch dashboard.</p><h3 id="outcome"><strong>Outcome</strong></h3><p>Now we can not only see which trailers are empty but also the precise fill level of each tank on a trailer.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/Screenshot-2025-12-04-at-3.22.14---PM.png" class="kg-image" alt="" loading="lazy" width="200" height="85"><figcaption><span style="white-space: pre-wrap;">Dispatch dashboard showing fill levels of each tank on each trailer.</span></figcaption></figure><p>In addition, we have:</p><ul><li>No more wasted trips</li><li>No more running out of trailers</li><li>Data-driven dispatching</li></ul><p>The best part is how quickly this one change improved team coordination. 
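</p><p>To make the idea concrete, here is a rough sketch of the core dispatch check in Python. The function name, the data shape, and the 5% threshold are hypothetical illustrations, not our actual system:</p>

```python
# Hypothetical sketch: flag trailers that are ready for pickup.
# A trailer only counts as empty when every tank on it reads at or
# below a threshold; a single full tank means product remains.

def empty_trailers(fill_levels: dict[str, list[float]], threshold: float = 5.0) -> list[str]:
    """fill_levels maps trailer id -> per-tank fill percentages (0-100)."""
    return sorted(
        trailer
        for trailer, tanks in fill_levels.items()
        if tanks and all(level <= threshold for level in tanks)
    )

readings = {
    "T-101": [2.0, 4.5],   # both tanks essentially empty: pickup candidate
    "T-102": [0.0, 62.0],  # one tank still holds product
    "T-103": [1.2, 0.8],
}
print(empty_trailers(readings))  # ['T-101', 'T-103']
```

<p>The real dashboard is driven by the live sensor feed, but the question it answers is the same: which trailers can dispatch send a truck to right now?</p><p>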
We are making decisions on pickups based on data rather than intuition.</p><hr><h2 id="case-study-2-%E2%80%94-dry-trailers"><strong>Case Study #2 — Dry Trailers</strong></h2><h3 id="when-no-vendor-exists-build-the-tool-you-need">When No Vendor Exists, Build the Tool You Need</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image-2.png" class="kg-image" alt="" loading="lazy" width="960" height="292" srcset="https://wedgworth.dev/content/images/size/w600/2025/12/image-2.png 600w, https://wedgworth.dev/content/images/2025/12/image-2.png 960w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A portion of our Killebrew fleet</span></figcaption></figure><h3 id="problem-1">Problem</h3><p>Our <a href="https://floridacitrushalloffame.com/inductees/sam-h-killebrew/?ref=wedgworth.dev" rel="noreferrer">Killebrew</a> trailer fleet presented the same visibility problem, but this time there was no off-the-shelf solution. </p><p>Killebrew trailers are more than just trailers.  They have a small engine with controls that lift each of the four bins up and to the side via hydraulic arms.  
The bins tilt to empty material into a spreader that is pulled up next to the trailer.</p><p>Nothing existed to tell us whether a bin had been emptied.</p><p>So we built it.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image-1.png" class="kg-image" alt="" loading="lazy" width="2000" height="1499" srcset="https://wedgworth.dev/content/images/size/w600/2025/12/image-1.png 600w, https://wedgworth.dev/content/images/size/w1000/2025/12/image-1.png 1000w, https://wedgworth.dev/content/images/size/w1600/2025/12/image-1.png 1600w, https://wedgworth.dev/content/images/size/w2400/2025/12/image-1.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">BrewBots boards fresh off the manufacturing line.</span></figcaption></figure><h3 id="solution-1"><strong>Solution</strong></h3><p>We created a custom IoT device that detects tilt angle. On these trailers, a tilt <em>high enough</em> and for <em>long enough</em> reliably means “this bin has been emptied,” so we built around that signal.</p><p>The path we took looked like this:</p><ul><li>In-house prototype</li><li>Partnering with an engineering firm (<a href="https://www.geocene.com/?ref=wedgworth.dev" rel="noreferrer">Geocene</a> has been a great partner to bring this to life)</li><li>Proof of concept</li><li>Field testing</li><li>Production hardware</li></ul><p>Each device combines an accelerometer, microcontroller, and cellular modem — rugged, low-power, and built specifically for these trailers.</p><h3 id="outcome-1"><strong>Outcome</strong></h3><p>Now we can see every movement of each bin on every trailer.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/Screenshot-2025-12-04-at-10.48.44---AM-1.png" class="kg-image" alt="" loading="lazy" width="1092" height="949" 
srcset="https://wedgworth.dev/content/images/size/w600/2025/12/Screenshot-2025-12-04-at-10.48.44---AM-1.png 600w, https://wedgworth.dev/content/images/size/w1000/2025/12/Screenshot-2025-12-04-at-10.48.44---AM-1.png 1000w, https://wedgworth.dev/content/images/2025/12/Screenshot-2025-12-04-at-10.48.44---AM-1.png 1092w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Details of a specific trip for a single trailer showing exact tilts of each bin.</span></figcaption></figure><ul><li>Fleet-wide visibility into trailer empties</li><li>Major relief during peak times when trailer shortages would bottleneck operations</li><li>A meaningful competitive advantage in asset efficiency</li></ul><p>And perhaps most importantly, it proved to our team that <strong>we can build whatever we need</strong>.</p><hr><h2 id="case-study-3-%E2%80%94-bulk-pile-inventory"><strong>Case Study #3 — Bulk Pile Inventory</strong></h2><h3 id="real-time-physical-inventory-at-scale">Real-Time Physical Inventory at Scale</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image-5.png" class="kg-image" alt="" loading="lazy" width="761" height="540" srcset="https://wedgworth.dev/content/images/size/w600/2025/12/image-5.png 600w, https://wedgworth.dev/content/images/2025/12/image-5.png 761w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Josh Altman standing in front of a potash pile.</span></figcaption></figure><h3 id="problem-2">Problem</h3><p>Across our four plants, we maintain over sixty bulk material piles and move more than a quarter-million tons annually. Historically, pile measurement was manual and labor-intensive.  Because of this, we'd only take a physical inventory on a monthly basis.  
The manual measurements could have a ±8% variance, maybe more depending on the size of the pile.</p><h3 id="solution-2"><strong>Solution</strong></h3><p>We deployed LiDAR cameras from <a href="https://www.blickfeld.com/?ref=wedgworth.dev" rel="noreferrer">Blickfeld</a> (another great technology partner I can't say enough good things about) that continuously scan each pile and calculate volume in real time. The software driving these cameras provides a data feed of these real-time volume estimates. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/image-6.png" class="kg-image" alt="" loading="lazy" width="542" height="540"><figcaption><span style="white-space: pre-wrap;">A LiDAR camera pointed at a pile</span></figcaption></figure><p>We built a web application around this vendor solution to handle:</p><ul><li>material management</li><li>bulk density measurement captures</li><li>inventory visualization, and more</li></ul><p>The application consumes the volume estimate feed from Blickfeld's software to give us live tonnage.</p><h3 id="outcome-2"><strong>Outcome</strong></h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://wedgworth.dev/content/images/2025/12/Screenshot-2025-12-04-at-10.33.55---AM-1.png" class="kg-image" alt="" loading="lazy" width="1241" height="912" srcset="https://wedgworth.dev/content/images/size/w600/2025/12/Screenshot-2025-12-04-at-10.33.55---AM-1.png 600w, https://wedgworth.dev/content/images/size/w1000/2025/12/Screenshot-2025-12-04-at-10.33.55---AM-1.png 1000w, https://wedgworth.dev/content/images/2025/12/Screenshot-2025-12-04-at-10.33.55---AM-1.png 1241w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">How tons have changed over the previous seven days.</span></figcaption></figure><ul><li>Eliminates manual measurement labor</li><li>Real-time visibility instead of 30-day snapshots</li><li>Accuracy tightens 
dramatically (±8% → <strong>±0.6%</strong>)</li><li>A stable, trustworthy baseline for managing COGS and shrink</li></ul><p>This project has fundamentally changed how we think about inventory, costing, and operational planning.</p><hr><h2 id="what-these-projects-have-in-common"><strong>What These Projects Have in Common</strong></h2><p>Across these three examples, a few themes emerged:</p><ol><li><strong>We created data where none existed. </strong><br><em>Sensors and automation enabled us to see more.</em></li><li><strong>Immediate operational impact.  </strong><br><em>Each project paid off quickly because inefficiencies were real and recurring.</em></li><li><strong>Culture changed. </strong><br><em>Once folks get used to real-time data, expectations shift and standards rise.</em></li></ol><hr><h2 id="what%E2%80%99s-next"><strong>What’s Next</strong></h2><p>This post captures the high-level version of what I shared at ARA. Over the coming months, I plan to publish deeper technical pieces on how we built these systems — from designing custom IoT hardware to integrating live data streams into a custom ERP.</p><p>If you were part of the ARA session, thanks again for the great conversation. If you weren’t, I hope this post gives you a sense of what’s possible when you make the right technology investments in operations.</p><p><em>Thanks to </em><a href="https://www.linkedin.com/in/brian-blodgett-30348769/?ref=wedgworth.dev" rel="noreferrer"><em>Brian Blodgett</em></a><em> (Next Generation Technologies) for moderating and to my co-panelists, </em><a href="https://www.linkedin.com/in/joshuaussel/?ref=wedgworth.dev" rel="noreferrer"><em>Joshua Ussel</em></a><em> (Willard Ag) and </em><a href="https://www.linkedin.com/in/garrett-asmus-740801232/?ref=wedgworth.dev" rel="noreferrer"><em>Garrett Asmus</em></a><em> (Asmus Farm Supply).  These are three great guys who are worth getting to know.</em></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[The Case for Building It Yourself]]></title>
                    <description><![CDATA[How in-house projects can drive team growth and cultivate real ownership.]]></description>
                    <link>https://wedgworth.dev/the-case-for-building-it-yourself/</link>
                    <guid isPermaLink="false">68f990993317c8000113c106</guid>


                        <dc:creator><![CDATA[Drew Beno]]></dc:creator>

                    <pubDate>Wed, 12 Nov 2025 16:22:12 -0500</pubDate>

                        <media:content url="https://wedgworth.dev/content/images/2025/10/diy.jpg" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://wedgworth.dev/content/images/2025/10/diy.jpg" alt="The Case for Building It Yourself"/> <h3 id="my-start-to-software-engineering">My start to software engineering</h3><p>My journey into software was far from the typical path. After three years of going to school for baseball and a degree in business administration, I realized that I should probably figure out some kind of meaningful career before it was too late. I knew I needed some kind of hard skill, so I decided to concentrate on the only available course load I could squeeze into my schedule – data science. </p><p>This led me into the world of database management, R, and of course, Python. At this time it was all for data manipulation and some machine learning algorithms. I can remember the first time I smashed that green play button on PyCharm and saw <code>Hello World</code> pop up in my terminal; it was euphoric! Flash forward to the end of my senior year: Pandas DataFrames were my best friend and I was comfortable with a language for the first time. <br><br>After graduation, I worked some roles in data analytics, eventually ending up here at Wedgworth's as a data engineer. Wedgworth's is the largest and oldest custom fertilizer blender in the state of Florida. Agriculture is an extremely niche field. Good handling of your immense amounts of data is imperative and often overlooked. Many times in this industry, this sort of thing is outsourced to consultants or contract employees. Wedgworth's was special in that they were putting together a group of in-house, advanced software and tech projects, led by their own VP of Tech, my now boss and mentor, <a href="https://wedgworth.dev/author/patrick/">Patrick Altman</a>.</p><p>So when I got to Wedgworth, the fact that Patrick was working on these projects as a full-time employee opened up a world of possibility for me. 
There were IoT Projects, <a href="https://wedgworth.dev/connecting-cloud-apps-to-industrial-equipment-with-tailscale/" rel="noreferrer">API Design Projects</a>, and the biggest one, the complete rebuild of the ERP that the entire Wedgworth operation runs on, now known as <a href="https://www.linkedin.com/posts/paltman_django-vue-activity-7347117707688902656-aQZu?utm_source=share&utm_medium=member_desktop&rcm=ACoAADIwStgBu2Xv9X1_tjJbntl136wA-aF_b-M" rel="noreferrer">Stark</a>. </p><p>One day, I went to Patrick and said, "Hey, that stuff you are working on is pretty cool. Do you think you could show me how it works?" He graciously said yes! And that started me down the path of learning about CS and eventually shifting my focus to work on software projects, contributing to the multiple projects that we have released over the last two years. </p><p>This long-winded story illustrates the first benefit of doing things in-house: <strong>opportunity</strong>. You give yourself and your team more opportunity for growth when they have more paths of work available for them to passionately pursue. I've seen firsthand the benefit of investing in your people and letting them pursue what they are passionate about even if it's rough at first. There were plenty of better software engineers out there than me when I first started, if Patrick needed extra hands. But because of the willingness to do it with our people, the result is work that is purposeful to me and now an employee with inside-out tribal knowledge of how our technology works, which hopefully benefits the whole team in the long run. </p><h3 id="how-does-this-approach-help-the-team-as-a-whole">How does this approach help the team as a whole? 
</h3><p>To shift focus to the non-engineering team, there is one word that I believe summarizes the benefit to everybody when this kind of mindset is utilized – <strong>ownership</strong>.</p><p>Not coincidentally, one of the first projects that I worked on in my new role was to bring our regulatory services in-house. The major pieces that we were outsourcing were label creation and registration tracking for our products. </p><p>Our services at Wedgworth are highly custom, but once our products are completed, the paperwork stays static, so there was no reason why we couldn't just generate paperwork directly inside of our ERP. The first iteration of this solution consisted of very large Vue files for each and every variation of file needed, which the user could just <code>cmd+p</code> to save or print. I then learned that PDF handling inside of web apps was much more complicated than I expected, and we ended up rolling our own PDF server to serve PDFs to our users in a cleaner way. </p><p>That is worth a whole other blog post, but the important part of this story is that in order for me to make fertilizer labels and know what was needed for fertilizer registration across the country, I now found myself studying the ins and outs of the fertilizer industry with the help of the resident experts on the team. Because of this, I had to be more closely tied to the actual core of the business that I was working on and, more importantly, tied to the people on our team who were entrusting their years of hard-earned knowledge to the app. This puts even the non-engineers at every level of the company in a position where they are playing an active role in the development of the product.</p><p>To summarize the second benefit: you and your team are more involved, invested, and care more about the product that you are building. The mindset of "Oh, I told you it wouldn't work" is no longer an option. 
Everybody is rooting for it to work because it's their name attached to that product, too! You spent hours getting ideas from the 'customer', which in this case is your co-worker, and it's their ideas and work that you are punching into the computer. That's true regardless of who built it, but when you do it together, it's easier to see and helps promote ownership from everybody, not just the builders. </p><h3 id="encouragement-and-determination">Encouragement and Determination</h3><p>This post mainly hits on the <u>benefits</u> of homegrown software and technology, but of course, this is not true for everything. We can't all roll our own git (like the one and only Casey Muratori), and sometimes it's worth it to take the risk of US-East-1 shutting down for the day.</p><p>However, I do think the benefits of working on that project you didn't think you could do are worth contemplating. If you turn around at the first sign of resistance, you may miss out on a lot of value. It's okay to feel completely lost when working on an entirely new project... I am very familiar with the feeling! Just like in the rest of life, those are the times where the most growth can occur. </p><p></p><p><em>“Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.” (</em><a href="https://ref.ly/Col%203.23-24;esv?t=biblia&ref=wedgworth.dev" rel="noopener"><em>Colossians 3:23-24</em></a><em>)</em></p><p></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Using Vite with Vue and Django]]></title>
                    <description><![CDATA[Learn how we integrate Vue and Django for a bulletproof deployment.]]></description>
                    <link>https://wedgworth.dev/using-vite-with-vue-and-django/</link>
                    <guid isPermaLink="false">6910e6c8537dec000145258c</guid>

                        <category><![CDATA[Django]]></category>
                        <category><![CDATA[Python]]></category>
                        <category><![CDATA[Vue]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Mon, 10 Nov 2025 12:58:00 -0500</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1488229297570-58520851e868?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxpbnRlZ3JhdGlvbnxlbnwwfHx8fDE3NjI3MTg5Mjh8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1488229297570-58520851e868?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxpbnRlZ3JhdGlvbnxlbnwwfHx8fDE3NjI3MTg5Mjh8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Using Vite with Vue and Django"/> <p>I've been building web applications with <a href="https://vuejs.org/?ref=wedgworth.dev" rel="noreferrer">Vue</a> and <a href="https://www.djangoproject.com/?ref=wedgworth.dev" rel="noreferrer">Django</a> for a long time.  I don't remember my first one—certainly before <a href="https://vite.dev/?ref=wedgworth.dev" rel="noreferrer">Vite</a> was available.  As soon as I switched to using Vite, I ended up building a template tag to join the frontend and backend together rather than having separate projects.  I've always found it simpler to have Django serve everything.</p><p>While preparing this post to share the latest version of what is essentially a small set of files we copy between projects, I started exploring the idea of open-sourcing the solution. </p><p>The goal was twofold:</p><ol><li>To create a reusable package instead of relying on copy-and-paste code, and</li><li>To contribute something back to the open-source community. </li></ol><p>In the process, I stumbled upon an excellent existing project — <a href="https://github.com/MrBin99/django-vite?ref=wedgworth.dev" rel="noreferrer">django-vite</a>.  </p><p>So now we might take a serious look at switching to it and contributing a Redis backend.</p><p>For now, though, I think it's still worth sharing our simple solution in case it's a better fit for you (I haven't fully examined django-vite yet).</p><h2 id="the-problem">The Problem</h2><p>The problem we are trying to solve is using Vite to bundle/build our Vue frontend while having Django automatically serve the bundle's JS and CSS entry points.  
Running <code>vite build</code> will yield output like:</p><pre><code>main-2uqS21f4.js
main-BCI6Z1XL.css</code></pre><p>Without any extra tooling, we'd have to commit the build output and hard-code these cache-busting file names into the base template every time we made a change that could affect the bundle.</p><p>This was completely unacceptable.</p><h2 id="the-solution">The Solution</h2><p>Vite offers the ability to generate a manifest file that maps each cache-busting file name to its base name in a machine-readable format. This allows builds to happen on CI/CD as part of our Docker image build; Django then reads the manifest produced by Vite, keeping everything neat and simple.  </p><p>Here is the key setting in <code>vite.config.ts</code>:</p><pre><code class="language-javascript">{
  // ...
  build: {
    manifest: true,
    // ...
  }
  // ...
}</code></pre><p>This will produce a file in your output folder (under <code>.vite/</code>) called <code>manifest.json</code>.</p><p>Here is a snippet; note that you typically won’t need to inspect it manually:</p><pre><code class="language-json">"main.ts": {
    "file": "assets/main-2uqS21f4.js",
    "name": "main",
    "src": "main.ts",
    "isEntry": true,
    "imports": [
      "_runtime-D84vrshd.js",
      "_forms-OJiVtksU.js",
      "_analytics-CCPQRNnj.js",
      "_forms-pro-qreHBaUb.js",
      "_icons-3wXMhf1p.js",
      "_pv-DzJUpav-.js",
      "_vue-mapbox-BRpo1ix7.js",
      "_mapbox--vATkUHK.js"
    ],
    "dynamicImports": [
      "views/HomeView.vue",
      "views/dispatch/DispatchNewOrdersView.vue",
      ...</code></pre><p>This is the key to tying things together dynamically.  We constructed a template tag so that we could add our entry point to our base template:</p><pre><code class="language-html">{% load vite %}

&lt;html&gt;
  &lt;head&gt;
    &lt;!-- ... base head template stuff --&gt;
    {% vite_styles 'main.ts' %}
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!-- ... base template stuff --&gt;
  
    {% vite_scripts 'main.ts' %}
  &lt;/body&gt;
&lt;/html&gt;</code></pre><p>The idea behind this type of solution is conceptually pretty simple. The template tag needs to read the <code>manifest.json</code>, find the referenced entry point <code>main.ts</code>, then return the <code>staticfiles</code>-based path to what's in the <code>file</code> key (e.g. <code>assets/main-2uqS21f4.js</code>) before rendering the template.</p><p>Given this, we need to optimize by reducing file I/O hits on every request, and since we’ll use caching, we must also handle cache invalidation. Every deployment is a candidate for invalidation because the bundle could change at deployment, but not in between deployments.</p><p>We'll solve the caching using Redis. Since we have multiple nodes in our web app cluster, local memory isn't an option.  We'll solve the cache invalidation with a management command that runs at the end of each deployment.  This uses a short stack (keeping only the latest <em>n</em> versions) instead of deleting.  </p><p>We use a stack so we can push the new manifest to the top of the queue while leaving older references around.  Requests to updated nodes can then fetch the latest bundle, while older nodes still work and serve up their existing (older) bundle.  This enables random rolling upgrades on our cluster, allowing us to push updates in the middle of a workday without disrupting end users.</p><p>All of this is done with basically a template tag Python module and a management command.</p><h3 id="template-tag">Template Tag</h3><p>We have this template tag module stored as <code>vite.py</code>, so that you can load it with <code>{% load vite %}</code>, which then exposes the <code>{% vite_styles %}</code> and <code>{% vite_scripts %}</code> template tags.</p><pre><code class="language-python">import json
import re
import typing

from django import template
from django.conf import settings
from django.core.cache import cache
from django.templatetags.static import static
from django.utils.safestring import mark_safe


if typing.TYPE_CHECKING:  # pragma: no cover
    from django.utils.safestring import SafeString

    ChunkType = typing.TypedDict("chunk", {"file": str, "css": list[str], "imports": list[str]})
    ManifestType = typing.Mapping[str, ChunkType]
    ScriptsStylesType = typing.Tuple[list[str], list[str]]


DEV_SERVER_ROOT = "http://localhost:3001/static"


register = template.Library()


def is_absolute_url(url: str) -&gt; bool:
    return re.match("^https?://", url) is not None


def set_manifest() -&gt; "ManifestType":
    with open(settings.MANIFEST_LOADER["output_path"]) as fp:
        manifest: "ManifestType" = json.load(fp)

    cache.set(settings.MANIFEST_LOADER["cache_key"], manifest, None)
    return manifest


def get_manifest() -&gt; "ManifestType":
    if manifest := cache.get(settings.MANIFEST_LOADER["cache_key"]):
        if settings.MANIFEST_LOADER["cache"]:
            return manifest

    return set_manifest()


def vite_manifest(entries_names: typing.Sequence[str]) -&gt; "ScriptsStylesType":
    if settings.DEBUG:
        scripts = [f"{DEV_SERVER_ROOT}/@vite/client"] + [
            f"{DEV_SERVER_ROOT}/{name}"
            for name in entries_names
        ]
        styles = []
        return scripts, styles

    manifest = get_manifest()

    _processed = set()

    def _process_entries(names: typing.Sequence[str]) -&gt; "ScriptsStylesType":
        scripts = []
        styles = []

        for name in names:
            if name in _processed:
                continue
            chunk = manifest[name]

            import_scripts, import_styles = _process_entries(chunk.get("imports", []))
            scripts.extend(import_scripts)
            styles.extend(import_styles)

            scripts.append(chunk["file"])
            styles.extend(chunk.get("css", []))

            _processed.add(name)
        return scripts, styles

    return _process_entries(entries_names)


@register.simple_tag(name="vite_styles")
def vite_styles(*entries_names: str) -&gt; "SafeString":
    _, styles = vite_manifest(entries_names)
    styles = map(lambda href: href if is_absolute_url(href) else static(href), styles)
    return mark_safe("\n".join(map(lambda href: f'&lt;link rel="stylesheet" href="{href}" /&gt;', styles)))  # nosec


@register.simple_tag(name="vite_scripts")
def vite_scripts(*entries_names: str) -&gt; "SafeString":
    scripts, _ = vite_manifest(entries_names)
    scripts = map(lambda src: src if is_absolute_url(src) else static(src), scripts)
    return mark_safe("\n".join(map(lambda src: f'&lt;script type="module" src="{src}"&gt;&lt;/script&gt;', scripts)))  # nosec
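
# Template usage (a sketch; assumes the app providing this library is in
# INSTALLED_APPS and the tag module is named "vite"):
#   {% load vite %}
#   {% vite_styles "src/main.ts" %}
#   {% vite_scripts "src/main.ts" %}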
</code></pre><p>Here are a few features this supports:</p><ol><li>If running in local development, it bypasses loading from the manifest, loads the <code>@vite/client</code>, and points to the dev server running in a docker compose instance so we get HMR (Hot Module Replacement).</li><li>It relies on some settings that control whether caching is enabled and what the cache key is (we set it to the <code>RELEASE_VERSION</code>, which is pulled from the environment and tied to the git SHA or tag).</li><li>We leverage the Django cache backend here for getting from and setting to the cache independently of what the actual cache backend is.  This layer of indirection only works for this tag, though, and not for our cache invalidation management command.</li></ol><p>The settings we use:</p><pre><code class="language-python">MANIFEST_LOADER = {
    "cache": not DEBUG,
    "cache_key": f"vite_manifest:{RELEASE_VERSION}",
    "output_path": f"{STATIC_ROOT}/.vite/manifest.json",
}</code></pre><p>The management command gets a bit fancy with invalidation, mainly to support running a multi-node cluster.  </p><p>If you run a single web instance, this probably isn't much benefit.  </p><p>However, we encountered issues spinning up additional nodes: some were updated, others weren’t, and we were seeing 500 errors during deployment because we needed to support both versions in the cache.  </p><p>Our short-term solution was to put the entire site into maintenance mode during deploys, but that's annoying when pushing out simple fixes.  This technique solved that for us with a management command that lives in <code>post_deploy.py</code>:</p><pre><code class="language-python">from django.conf import settings
from django.core.cache import cache
from django.core.management import BaseCommand
from redis.exceptions import RedisError

from ...templatetags.vite import set_manifest


class Command(BaseCommand):

    def success(self, message: str):
        self.stdout.write(self.style.SUCCESS(message))

    def warning(self, message: str):
        self.stdout.write(self.style.WARNING(message))

    def error(self, message: str):
        self.stdout.write(self.style.ERROR(message))

    def set_new_manifest_in_cache(self):
        current_version = settings.RELEASE_VERSION
        if not current_version:
            self.warning(
                "RELEASE_VERSION is empty; skipping cleanup to avoid deleting default keys."
            )
            return

        prefix = "vite_manifest:*"  # Match all versioned keys
        recent_versions_key = "recent-manifest-versions"  # Redis key for tracking versions

        try:
            redis_client = cache._client.get_client()

            # Add current version to the front of the list (in bytes)
            redis_client.lpush(recent_versions_key, current_version.encode("utf-8"))

            # Keep only the six most recent versions (indices 0-5)
            redis_client.ltrim(recent_versions_key, 0, 5)

            # Get recent versions as a set for quick lookup (decoding to strings)
            recent_versions = {
                v.decode("utf-8")
                for v in redis_client.lrange(recent_versions_key, 0, -1)
            }

            self.success(f"Recent versions: {recent_versions}")

            cursor = 0
            deleted_count = 0
            while True:
                cursor, keys = redis_client.scan(cursor=cursor, match=prefix, count=100)  # Batch scan
                for key in keys:
                    key_str = key.decode("utf-8")
                    self.success(f"Checking key: {key_str}")
                    # If the key's version is not in recent versions, delete it
                    if not any(key_str.endswith(f":{version}") for version in recent_versions):
                        redis_client.delete(key)
                        deleted_count += 1
                        self.success(f"Deleted old manifest cache key: {key_str}")
                if cursor == 0:  # SCAN signals completion by returning cursor 0
                    break

            self.success(
                f"Added current version '{current_version}' and deleted {deleted_count} old manifest cache keys."
            )

            set_manifest()
            self.success("Updated Vite manifest in cache.")
        except RedisError as e:
            self.error(f"Redis error: {e}")

    def handle(self, *args, **options):
        self.set_new_manifest_in_cache()
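
# Run once per deploy, e.g. from the release phase of the pipeline
# (command name assumed from the module name post_deploy.py):
#   python manage.py post_deploy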
</code></pre><p>This isn't the prettiest code.  We could probably tidy it up by extracting the Redis operations and/or the main while loop to make things more readable.  But for now it's working and we haven't had to touch it in a while.</p><p>The latest six versions in our cache:</p><figure class="kg-card kg-image-card"><img src="https://wedgworth.dev/content/images/2025/11/Screenshot-2025-11-09-at-2.02.47---PM.png" class="kg-image" alt="" loading="lazy" width="515" height="508"></figure><p>We had to break out of the pure Django cache backend here to get access to some Redis-specific operations for the stack operations.  Again, this is something that might be worth tidying up if we build a cache backend for django-vite, but maybe not necessary if we build a Redis-specific backend.</p><p>Not only do we invalidate the latest cache by pushing the version key down the stack, but we then seed the cache with the current version to save some time on a lazy load.</p><h2 id="summary">Summary</h2><p>Next up is for us to take a hard look at <code>django-vite</code>, as it seems to be a well-structured and maintained project.  Perhaps we can move to using it, retire our custom code, and then contribute what remains lacking either to the project or via a sidecar package.</p><p>Have you dealt with these problems in a different way?  If so, we'd love to hear from you and learn about your approach.</p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[How We Continually Deliver Software]]></title>
                    <description><![CDATA[We&#x27;ve open-sourced a reusable set of Github Actions that enable us to move fast and continually deliver high quality software.]]></description>
                    <link>https://wedgworth.dev/how-we-continually-deliver-software/</link>
                    <guid isPermaLink="false">68e1d19b7e2cdd000114634d</guid>

                        <category><![CDATA[Python]]></category>
                        <category><![CDATA[Docker]]></category>
                        <category><![CDATA[Tooling]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Wed, 29 Oct 2025 13:57:19 -0400</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1429497419816-9ca5cfb4571a?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDV8fGJ1aWxkfGVufDB8fHx8MTc1OTYzMTM0MXww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1429497419816-9ca5cfb4571a?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDV8fGJ1aWxkfGVufDB8fHx8MTc1OTYzMTM0MXww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="How We Continually Deliver Software"/> <p>We currently have five different web applications in production, and they all share a very similar stack - <a href="https://www.djangoproject.com/?ref=wedgworth.dev" rel="noreferrer">Django</a>/<a href="https://vuejs.org/?ref=wedgworth.dev" rel="noreferrer">Vue</a>/<a href="https://www.docker.com/?ref=wedgworth.dev" rel="noreferrer">Docker</a>/<a href="https://www.postgresql.org/?ref=wedgworth.dev" rel="noreferrer">PostgreSQL</a> (some with <a href="https://redis.io/?ref=wedgworth.dev" rel="noreferrer">Redis</a>/<a href="https://github.com/rq/django-rq?ref=wedgworth.dev" rel="noreferrer">django-rq</a> for background tasks).</p><p>We have developed a set of Github Actions for Continuous Integration / Continuous Delivery that takes care of this basic workflow:</p><figure class="kg-card kg-image-card"><img src="https://wedgworth.dev/content/images/2025/10/Screenshot-2025-10-04-at-9.12.59---PM.png" class="kg-image" alt="" loading="lazy" width="2000" height="922" srcset="https://wedgworth.dev/content/images/size/w600/2025/10/Screenshot-2025-10-04-at-9.12.59---PM.png 600w, https://wedgworth.dev/content/images/size/w1000/2025/10/Screenshot-2025-10-04-at-9.12.59---PM.png 1000w, https://wedgworth.dev/content/images/size/w1600/2025/10/Screenshot-2025-10-04-at-9.12.59---PM.png 1600w, https://wedgworth.dev/content/images/2025/10/Screenshot-2025-10-04-at-9.12.59---PM.png 2104w" sizes="(min-width: 720px) 720px"></figure><ol><li>Every commit, whether on <code>main</code> or a feature branch, runs:<ol><li>Python Linting</li><li>Vue/JS Testing</li><li>Build Docker Image and then on that image run:<ol><li>Python 
tests</li><li>Check for missing migrations</li><li>Push image / tags after being rebuilt without the dev mode flag</li></ol></li></ol></li><li>Then, if on <code>main</code>, it follows through with a deployment to a QA app on <a href="https://www.heroku.com/?ref=wedgworth.dev" rel="noreferrer">Heroku</a>.</li></ol><p>We have a second workflow for handling releases.  </p><p>When a release is generated/published in <a href="https://github.com/?ref=wedgworth.dev" rel="noreferrer">Github</a>, the workflow:</p><ol><li>Pulls the latest image from the Github Container Registry</li><li>Pushes the tagged image to Heroku</li><li>Executes release commands, but this time against a Production app on Heroku </li></ol><figure class="kg-card kg-image-card"><img src="https://wedgworth.dev/content/images/2025/10/Screenshot-2025-10-04-at-9.13.43---PM.png" class="kg-image" alt="" loading="lazy" width="586" height="1262"></figure><h2 id="results">Results</h2><p>These two pipelines enable us to work really fast.  They speed up code reviews, since most of the testing is done automatically, allowing us to focus on just the business rules and architecture being put into place.  They speed up end-to-end testing and user feedback by automatically deploying code to a QA test instance that won't interfere with or interrupt production.  And finally, they speed up getting releases out to production, which we do as needed, often a few times a day!</p><h2 id="open-source">Open Source</h2><p>The two yaml files configuring these were hundreds of lines long, with lots of duplication apart from a few values.  We were copying them around when we'd start a new web app, and then tweaking.  
They'd invariably get out of sync and it was becoming a burden to maintain.</p><p>So we extracted actions and workflows into <a href="https://github.com/wedgworth/actions?ref=wedgworth.dev" rel="noreferrer">wedgworth/actions</a> which is now open source so if you like our workflow you can feel free to use (or fork and tweak to suit your needs).</p><p>Now each project looks like this:</p><h3 id="ciyaml">ci.yaml</h3><pre><code class="language-yaml">name: Test / Build / Deploy to QA
on:
  push:
    branches: "**"
    tags-ignore: "**"

jobs:
  test-and-build:
    name: CI
    uses: wedgworth/actions/.github/workflows/test.yml@v7.0.0
    with:
      python-src-dir: myapp
    secrets:
      CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}
      SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}

  deploy-qa:
    name: CD
    needs: [test-and-build]
    if: ${{ github.event.ref == 'refs/heads/main' }}
    uses: wedgworth/actions/.github/workflows/deploy.yml@v7.0.0
    with:
      app-name: my-heroku-app-qa
      processes: web release
    secrets:
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}
</code></pre><h3 id="releaseyaml">release.yaml</h3><pre><code class="language-yaml">name: Publish and Release Image
on:
  release:
    types: [published]

jobs:
  release:
    name: Release
    uses: wedgworth/actions/.github/workflows/release.yml@v7.0.0
    with:
      app-name: my-heroku-app-prod
      processes: web release
    secrets:
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
      CR_UN: ${{ secrets.CR_UN }}
      CR_PAT: ${{ secrets.CR_PAT }}</code></pre><p>We still copy and paste these, but they are extremely stable.  </p><p>We just need to set <code>python-src-dir</code>, <code>app-name</code>, and <code>processes</code>. </p><p>These do use runners from <a href="https://namespace.so/?ref=wedgworth.dev" rel="noreferrer">namespace.so</a>, which are not free (but cheap!) and run much faster than the Github runners, especially when doing Docker builds.  </p><p>There might be a way to make these configurable, so if you like what you see but want to use the Github runners, we'd welcome a pull request to make this more generally useful; otherwise, feel free to fork it and run your own copies.</p><p>Happy building!</p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Reusing GraphQL Queries within Django]]></title>
                    <description><![CDATA[How we query our GraphQL API directly through Python avoiding duplication of query logic and overhead of web requests.]]></description>
                    <link>https://wedgworth.dev/reusing-graphql-queries-within-django/</link>
                    <guid isPermaLink="false">68ebd4d427a7930001332cb4</guid>

                        <category><![CDATA[Django]]></category>
                        <category><![CDATA[GraphQL]]></category>
                        <category><![CDATA[Python]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Wed, 22 Oct 2025 07:40:04 -0400</pubDate>

                        <media:content url="https://wedgworth.dev/content/images/2025/10/Screenshot-2025-10-12-at-11.53.13---AM.png" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://wedgworth.dev/content/images/2025/10/Screenshot-2025-10-12-at-11.53.13---AM.png" alt="Reusing GraphQL Queries within Django"/> <p>We like to use GraphQL as the API layer for building SPAs (single page apps) written with a VueJS frontend and Django backend.  Types tend to map pretty nicely to models, and frontend development can reuse types and queries for efficient querying to support different UIs.</p><p>A great library for making this work so nicely for us is <a href="https://strawberry.rocks/?ref=wedgworth.dev" rel="noreferrer">Strawberry</a>, and it's a big reason we have sponsored the project for some time now.</p><p>One downside though, especially in Django, is the potential for duplication of query logic.  Do you extract it all into custom managers and querysets and keep resolvers super lean?  In order to keep queries efficient though, you'll probably still need to inject some queryset optimizations in your types.</p><p>We do a bit of both.  We reduce duplication of annotations and subqueries by creating reusable units in <code>annotations.py</code> and <code>subqueries.py</code>, for instance.  Then we optimize our GraphQL layer, overriding <code>get_queryset</code> on types and using other tricks that go beyond the scope of this post (we'll have to write one soon on how to get the most out of Strawberry).</p><p>We have a number of processes that execute within Django background tasks that need to query some of the same data, and at first we were taking care to recreate the same queryset logic.  That wasn't going to last very long.  Whenever you duplicate complex logic, code drift happens, and before you know it you are generating reports that don't reconcile in subtle and weird ways.</p><p>Our solution was to query through the same GraphQL layer from Python but without the overhead of the request/response cycle.  
To do this we needed to generate a machine readable schema and then load that up in an object that would allow us to execute queries just like we were doing from the frontend.</p><p>The star of the show is this object that we tuck away in a <code>graphql.py</code> module:</p><pre><code class="language-python">import os
import json

from django.conf import settings
from django.http import HttpRequest

from graphql import parse
from graphql.language.ast import OperationDefinitionNode

from strawberry.types.execution import ExecutionResult


from .api.schema import private


class GraphQLSchema:
    def __init__(self):
        # The path to the JSON file containing the GraphQL queries generated by "yarn generate"
        self._path = os.path.join(settings.PROJECT_ROOT, "static/src/gql/persisted-documents.json")
        self._ops_by_name = {}
        self._create_named_op_map()

    def _create_named_op_map(self):
        assert os.path.exists(self._path), f"GraphQL file not found at {self._path}"

        ops_by_name = {}

        with open(self._path, encoding="utf-8") as file:
            documents = json.load(file).values()
            for doc in documents:
                ast = parse(doc)
                for d in ast.definitions:
                    if isinstance(d, OperationDefinitionNode) and d.name:
                        ops_by_name[d.name.value] = doc

        self._ops_by_name = ops_by_name

    def execute(self, query_name: str, context: HttpRequest | None = None, **variables) -&gt; ExecutionResult:
        assert query_name in self._ops_by_name, f"Query {query_name} not found"
        query = self._ops_by_name[query_name]
        return private.execute_sync(query, variable_values=variables, context_value=context)


schema = GraphQLSchema()
</code></pre><p>We use <code>@graphql-codegen/cli</code> and this config to create assets for our frontend to use as well as a version of the schema for our <code>GraphQLSchema</code> to consume:</p><p>This is our <code>codegen.ts</code></p><pre><code class="language-ts">import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: 'http://localhost:8000/local-graphql/',
  ignoreNoDocuments: true, // for better experience with the watcher
  generates: {
    './static/src/gql/types.ts': {
      plugins: ['typescript'],
      config: {
        useTypeImports: true,
      },
    },
    './static/src/gql/': {
      preset: 'client',
      config: {
        useTypeImports: true,
      },
      presetConfig: {
        fragmentMasking: false,
        persistedDocuments: {
          hashAlgorithm: 'sha256' // optional; defaults to sha1
        }
      },
      documents: [
        'static/src/compositions/data/**/*.ts',
        'static/src/compositions/data/gql/**/*.gql'
      ],
    },
    './static/src/compositions/data/': {
      preset: 'near-operation-file',
      presetConfig: {
        folder: '__generated__',
        extension: '.ts',
        baseTypesPath: '../../gql/types.ts'  // GENERATES '@/gql/types' as './@/gql/types'...
      },
      config: {
        useTypeImports: true,
        preResolveTypes: false,
      },
      plugins: [
        'typescript-operations',
        'typed-document-node'
      ],
      documents: ['static/src/compositions/data/gql/**/*.gql'],
    },
  },
};
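
// Note: the "client" preset above also writes
// static/src/gql/persisted-documents.json, the same file our GraphQLSchema
// loads on the Django side, so both ends share one source of truth.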

export default config;
</code></pre><p>Now in our Django/Python code we can execute GraphQL operations just like our frontend code does:</p><pre><code class="language-python">from .graphql import schema

data = schema.execute("ShipmentsReport", id="12345")
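
# schema.execute returns a Strawberry ExecutionResult; a defensive sketch
# ("shipments" is a hypothetical field selected by the query above):
if data.errors:
    raise RuntimeError(f"GraphQL errors: {data.errors}")
shipments = data.data["shipments"]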
</code></pre><p>This has been working really well.  Not only is DRYing up code like this great for maintenance, it's a real reduction in cognitive burden.</p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Polars vs Pandas – Quantile Method]]></title>
                    <description><![CDATA[You can save some memory by moving to Polars from Pandas but watch out for a subtle difference in the quantile&#x27;s different default interpolation methods.]]></description>
                    <link>https://wedgworth.dev/polars-vs-pandas-quantile-method/</link>
                    <guid isPermaLink="false">68e08d062e8da50001cac007</guid>

                        <category><![CDATA[Python]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Fri, 17 Oct 2025 17:32:10 -0400</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1611510938299-2c8e66959a50?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDR8fHBvbGFyc3xlbnwwfHx8fDE3NTk1NDY5NTl8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1611510938299-2c8e66959a50?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDR8fHBvbGFyc3xlbnwwfHx8fDE3NTk1NDY5NTl8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Polars vs Pandas – Quantile Method"/> <p>I set out this weekend to port the data processing pipeline for the BrewBots IoT data from Pandas to Polars. Specifically, this code processes the accelerometer data from the BrewBots and calculates the tilt angles.</p><p>We’ve recently been seeing some heavy memory spikes and I’ve heard the promise of Polars being much faster and more memory efficient than Pandas.</p><p>I wanted to see if this was true.</p><p>After a lot of just learning about DataFrames in general, and having lots of fits and starts, I got something very close but the max angle for each tilt was off for a decent number of tilts when comparing output side by side.</p><p>I zero’d in on one specific tilt’s data to step through it.</p><p>Up until this point, Polars and Pandas were producing the same results (with the exception that my computed tilt ids on Pandas were 1-based and Polars were 0-based).</p><pre><code>┌────────────┬───────┬──────┬─────────┬───────────┐
│ tstamp     ┆ angle ┆ tilt ┆ tilt_id ┆ angle_bin │
│ ---        ┆ ---   ┆ ---  ┆ ---     ┆ ---       │
│ i64        ┆ f64   ┆ bool ┆ u32     ┆ cat       │
╞════════════╪═══════╪══════╪═════════╪═══════════╡
│ 1723809945 ┆ 44.6  ┆ true ┆ 110     ┆ 40_50     │
│ 1723809946 ┆ 63.3  ┆ true ┆ 110     ┆ 60_70     │
│ 1723809947 ┆ 38.5  ┆ true ┆ 110     ┆ 30_40     │
│ 1723809948 ┆ 150.3 ┆ true ┆ 110     ┆ 90_inf    │
│ 1723809949 ┆ 68.2  ┆ true ┆ 110     ┆ 60_70     │
│ 1723809950 ┆ 31.8  ┆ true ┆ 110     ┆ 30_40     │
│ 1723809951 ┆ 46.15 ┆ true ┆ 110     ┆ 40_50     │
│ 1723809952 ┆ 44.7  ┆ true ┆ 110     ┆ 40_50     │
│ 1723809953 ┆ 53.1  ┆ true ┆ 110     ┆ 50_60     │
│ 1723809954 ┆ 68.3  ┆ true ┆ 110     ┆ 60_70     │
│ 1723809955 ┆ 43.5  ┆ true ┆ 110     ┆ 40_50     │
│ 1723809956 ┆ 19.75 ┆ true ┆ 110     ┆ 10_20     │
│ 1723809957 ┆ 26.1  ┆ true ┆ 110     ┆ 20_30     │
│ 1723809958 ┆ 24.1  ┆ true ┆ 110     ┆ 20_30     │
│ 1723809959 ┆ 43.75 ┆ true ┆ 110     ┆ 40_50     │
│ 1723809960 ┆ 33.9  ┆ true ┆ 110     ┆ 30_40     │
│ 1723809961 ┆ 43.8  ┆ true ┆ 110     ┆ 40_50     │
│ 1723809962 ┆ 103.5 ┆ true ┆ 110     ┆ 90_inf    │
│ 1723809963 ┆ 50.2  ┆ true ┆ 110     ┆ 50_60     │
│ 1723809964 ┆ 37.9  ┆ true ┆ 110     ┆ 30_40     │
│ 1723809965 ┆ 34.95 ┆ true ┆ 110     ┆ 30_40     │
│ 1723809966 ┆ 36.7  ┆ true ┆ 110     ┆ 30_40     │
│ 1723809967 ┆ 43.5  ┆ true ┆ 110     ┆ 40_50     │
│ 1723809968 ┆ 39.85 ┆ true ┆ 110     ┆ 30_40     │
│ 1723809969 ┆ 36.8  ┆ true ┆ 110     ┆ 30_40     │
└────────────┴───────┴──────┴─────────┴───────────┘</code></pre><p>We collapse this data down to a single&nbsp;<code>BotTilt</code>&nbsp;Django model instance by recording the max angle and recording in a JSON field, the binned counts of angles by the&nbsp;<code>angle_bin</code>&nbsp;column categories.</p><p>The bins between the two libraries were the same but the max angle was&nbsp;<code>150.3</code>&nbsp;in Polars and&nbsp;<code>127.8</code>&nbsp;in Pandas.</p><p>The Pandas calculation was:</p><pre><code class="language-python">angle_max = df["angle"].quantile(0.98)</code></pre><p>While the Polars calculation, after a naive port, was:</p><pre><code class="language-python">angle_max = df.select(pl.col("angle").quantile(0.98)).item()</code></pre><p>Diving a bit deeper here, it seems that&nbsp;<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.quantile.html?ref=wedgworth.dev">Pandas quantile method defaults to a “linear” interpolation</a>&nbsp;method while&nbsp;<a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.quantile.html?ref=wedgworth.dev#polars.Expr.quantile">Polars uses “nearest” by default</a>.</p><p>I was able to update the Polars calculation to the following and get all the data matching up exactly:</p><pre><code class="language-python">angle_max = df.select(
  pl.col("angle").quantile(0.98, interpolation="linear")
).item()</code></pre><p>I haven’t done any full benchmarking yet but on some one-off payloads, I’m seeing around 20% better memory usage and it being noticeably faster. Admittedly, about half of this improvement is due to fine tuning the data types I’m explicitly casting to within Polars.</p><hr><p><em>Originally published in March 2025 at </em><a href="https://paltman.com/polars-vs-pandas-quantile-method/?ref=wedgworth.dev">https://paltman.com/polars-vs-pandas-quantile-method/</a></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Keep Your Vue Apps Fresh]]></title>
                    <description><![CDATA[Keeping your Vue SPA up to date when new code is released.]]></description>
                    <link>https://wedgworth.dev/keep-your-vue-apps-fresh/</link>
                    <guid isPermaLink="false">68e2b10c60917e00010e6deb</guid>

                        <category><![CDATA[Vue]]></category>
                        <category><![CDATA[Python]]></category>
                        <category><![CDATA[Django]]></category>
                        <category><![CDATA[GraphQL]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Wed, 15 Oct 2025 12:31:29 -0400</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1591779051696-1c3fa1469a79?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxmcmVzaHxlbnwwfHx8fDE3NTk2ODc1MTd8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1591779051696-1c3fa1469a79?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxmcmVzaHxlbnwwfHx8fDE3NTk2ODc1MTd8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Keep Your Vue Apps Fresh"/> <h2 id="introduction">Introduction</h2><p>A few years ago, I&nbsp;<a href="https://paltman.com/keep-your-vue-apps-fresh?ref=wedgworth.dev">posted</a>&nbsp;about how to keep Vue apps up to date. Conceptually, this solution is still valid; however, a lot has changed in how I create web applications since then, not least of which is a move to GraphQL instead of lots of different REST endpoints.</p><h2 id="the-problem">The Problem</h2><p>Just to recap that previous article and the problem we are solving: in a SPA (single page app), your users have the app open in their browser for a longer period of time than when every page is served new from the backend. In a SPA, as your user navigates around client side, the front-end is merely fetching the data needed for each component or view.</p><p>With this occurring, it is quite easy to deploy updates to the server with no way of telling your users to refresh and get the updates in a new bundle.</p><p>What we want is that any time a deployment occurs, the user is notified that a new version is available and can refresh to get it.</p><h2 id="the-solution">The Solution</h2><h3 id="backend">Backend</h3><p>First we need to define a type, hook the type up to our GraphQL schema, and implement the resolver:</p><pre><code class="language-Python">@strawberry.type
class Version:
    version: str


def get_version(info: "StrawberryDjangoContext") -&gt; Version:
    return Version(version=settings.RELEASE_VERSION)  # This setting pulls the version from the environment


@strawberry.type
class Query:
    version: Version = strawberry.field(resolver=get_version)</code></pre><p>That’s it for the backend. This enables us to query for the latest version that has been deployed, as defined by the&nbsp;<code>RELEASE_VERSION</code>&nbsp;environment variable. I set this through my CI process running on GitHub during deployment.</p><h3 id="frontend">Frontend</h3><p>Now we need the frontend to not only query this version and check it against its own version, but to poll it periodically and, when there is a mismatch, display a notice instructing the user to refresh.</p><pre><code class="language-Typescript">import { computed } from 'vue';
import { useQuery, provideApolloClient } from '@vue/apollo-composable';

import config from '@/config';
import { graphql } from '@/gql';

const QUERY = graphql(/* GraphQL */ `
  query Version {
    version {
      version
    }
  }
`);

export default () =&gt; {
  const { result } = useQuery(QUERY, undefined, {
    fetchPolicy: 'no-cache',
    pollInterval: 15000, // poll every 15 seconds
  });
  const version = computed(() =&gt; result.value?.version.version);
  const versionMismatch = computed(() =&gt;
    version.value === undefined
    ? false
    : version.value !== config.RELEASE_VERSION
  );
  return {
    version,
    versionMismatch,
  };
};</code></pre><p>We use that composable to know when to display the notice banner in our main layout component that the entire app is wrapped in:</p><pre><code class="language-Django">&lt;script setup lang="ts"&gt;
  import useVersion from '@/compositions/useVersion';

  const { version, versionMismatch } = useVersion();

  const reload = () =&gt; {
    window.location.reload();
  };
&lt;/script&gt;

&lt;template&gt;
  &lt;div&gt;
    &lt;div v-if="versionMistach" class="mb-8 mx-auto max-w-7xl py-8 px-4 sm:px-6 lg:px-8 bg-teal-100 border-b border-teal-300 text-teal-600"&gt;
      &lt;div class="text-xl"&gt;Version {{ version }} has been released.&lt;/div&gt;
      &lt;div class="mt-2"&gt;&lt;a href="" @click.prevent="reload" class="cursor-pointer underline text-teal-900"&gt;Click here to refresh&lt;/a&gt; and get the latest version.&lt;/div&gt;
    &lt;/div&gt;
    &lt;router-view /&gt;
  &lt;/div&gt;
&lt;/template&gt;</code></pre><p>You might have noticed a&nbsp;<code>config.RELEASE_VERSION</code>&nbsp;in the composable. This is a global config variable that is built in a Django template&nbsp;<code>&lt;head /&gt;</code>&nbsp;section. This is how we get the version from the backend into the frontend:</p><pre><code class="language-HTML">&lt;html&gt;
  &lt;head&gt;
    ...
    &lt;script&gt;
      window.MyAppConfig = {
        ...
        RELEASE_VERSION: '{{ RELEASE_VERSION }}',
      };
    &lt;/script&gt;
  &lt;/head&gt;
  ...
&lt;/html&gt;</code></pre><p>Then the config:</p><pre><code class="language-Typescript">interface MyAppConfig {
  RELEASE_VERSION: string;
}

const config: MyAppConfig = (window as any).MyAppConfig || {};

export default config;</code></pre><h2 id="that%E2%80%99s-it">That’s It!</h2><p>The Apollo client takes care of the polling, so updates feel “pushed” to the client. Soon after a deploy completes, a message is shown to the user without them having to do anything, telling them about the newer version and giving them a link they can click to refresh and pull the new bundle.</p><hr><p><em>Originally published in August 2023 at</em> <a href="https://paltman.com/keep-your-vue-apps-fresh-v2/?ref=wedgworth.dev">https://paltman.com/keep-your-vue-apps-fresh-v2/</a></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Connecting Cloud Apps to Industrial Equipment with Tailscale]]></title>
                    <description><![CDATA[How to bridge the gap between cloud-based Django apps and on-premise equipment with Tailscale]]></description>
                    <link>https://wedgworth.dev/connecting-cloud-apps-to-industrial-equipment-with-tailscale/</link>
                    <guid isPermaLink="false">68e2e95160917e00010e6e15</guid>

                        <category><![CDATA[Docker]]></category>
                        <category><![CDATA[Python]]></category>
                        <category><![CDATA[Tooling]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Thu, 09 Oct 2025 09:17:49 -0400</pubDate>

                        <media:content url="https://wedgworth.dev/content/images/2025/10/bag-plant-robot.jpg" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://wedgworth.dev/content/images/2025/10/bag-plant-robot.jpg" alt="Connecting Cloud Apps to Industrial Equipment with Tailscale"/> <p>In industrial automation, leveraging all the benefits of cloud-based web apps presents a big challenge. Connecting these cloud applications to equipment running on private local networks makes you start thinking of firewalls, security holes, and handling failover between two different ISPs — so IP addresses could change.</p><p>It can be so daunting that you’d be tempted to toss in the tower and host everything on-premise.</p><p>However, with modern tools like&nbsp;<a href="https://tailscale.com/?ref=wedgworth.dev" rel="nofollow ugc noopener">Tailscale</a>, this challenge becomes more manageable. In my setup, I’ve leveraged Tailscale to securely connect a cloud-based Django web app, running in a Docker container on Heroku, to a containerized Flask API hosted on a local server.</p><p>This Flask API, sitting on the same private network as the industrial equipment, serves as a bridge, allowing the Django app to make API calls that interact directly with various vendor equipment.</p><p>The process is straightforward: Tailscale, running on both the local server and the Docker container, creates a secure, encrypted mesh network between them. This setup eliminates the need for complex VPNs, firewall rules, or exposing sensitive equipment to the public Internet. Instead, we gain secure access to equipment APIs from anywhere while keeping everything else isolated and protected.</p><p>The&nbsp;<a href="https://tailscale.com/kb/1107/heroku?ref=wedgworth.dev" rel="nofollow ugc noopener">docs on getting a client running on Heroku</a>&nbsp;are pretty straightforward.</p><p>I had to modify things a bit, though, for the startup script:</p><pre><code class="language-bash"># Run tailscale daemon
XDG_CACHE_HOME=/var/lib/tailscale /opt/tailscaled \
  --tun=userspace-networking \
  --socks5-server=localhost:1055 \
  --statedir=/var/lib/tailscale/ &amp;
/opt/tailscale up \
  --authkey=${TAILSCALE_AUTHKEY} \
  --hostname=${TAILSCALE_HOSTNAME}
echo Tailscale started</code></pre><p>In my image, I don’t run as root and don’t have a home directory, so I need to specify where to put the cache/state files. I also want to pass in the hostname based on the environment so that my QA and Production environments can operate independently and I can keep the machines straight.</p><p>I also don’t set the&nbsp;<code>ALL_PROXY</code>&nbsp;environment variable because I mostly don’t want to use proxies when making outbound calls. We are integrating with several cloud-based services, and this would get in the way.</p><p>Furthermore, you want any proxied calls to use Tailscale’s DNS (MagicDNS), and to do that, you need to use&nbsp;<code>socks5h://localhost:1055</code>&nbsp;instead of&nbsp;<code>socks5://localhost:1055</code>&nbsp;(notice the “h” after socks5).</p><p>Then, making API calls from the Heroku-hosted container environment back into our corporate network is simple:</p><pre><code class="language-python">requests.get(
    url,
    # the proxies dict is keyed by URL scheme; add an "https" entry for https:// URLs
    proxies=dict(http="socks5h://localhost:1055")
)</code></pre><p>There seem to be many extra features of Tailscale that look like they’ll be really useful, but for now, this unlocks a lot for us. It only took a couple of hours to go from no account to having something deployed and working.</p><hr><p><em>Originally published in October 2024 on my personal Substack newsletter at</em> <a href="https://patrickaltman.substack.com/p/connecting-cloud-apps-to-industrial?ref=wedgworth.dev">https://patrickaltman.substack.com/p/connecting-cloud-apps-to-industrial</a></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Crafting Software: Writing Maintainable Code]]></title>
                    <description><![CDATA[Maintainable code can easily be the difference between long-lived, profitable software, and short-lived money pits.]]></description>
                    <link>https://wedgworth.dev/crafting-software-writing-maintainable-code/</link>
                    <guid isPermaLink="false">68e08ebf2e8da50001cac029</guid>

                        <category><![CDATA[Software Craftsmanship]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Tue, 07 Oct 2025 07:29:22 -0400</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1547609434-b732edfee020?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHdvb2R3b3JraW5nfGVufDB8fHx8MTc1OTU0NzIwNnww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1547609434-b732edfee020?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHdvb2R3b3JraW5nfGVufDB8fHx8MTc1OTU0NzIwNnww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Crafting Software: Writing Maintainable Code"/> <h2 id="introduction">Introduction</h2><p>Maintainable code can easily be the difference between long-lived, profitable software, and short-lived money pits.</p><p>Sometimes, it is necessary to sacrifice maintainability for shipping quickly. When this happens,&nbsp;<em>technical debt</em>&nbsp;is incurred because you are borrowing from your future development capacity. It is important to pay these debts down quickly so the burden does not compound.</p><p>Even better though is to avoid the debt and focus on writing lean, maintainable code as you go.</p><p><em>Slow is smooth, smooth is fast…</em></p><p>In this article you’ll find a non-exhaustive list for techniques to keep in mind as you code that will go a long way to keeping your software maintainable.</p><h2 id="commenting-the-why">Commenting the Why</h2><p>Commenting code is one of easiest things to do and it is also one of the easiest things to do incorrectly.</p><p>There are two main audiences for comments:</p><ol><li>External developers who use your code as a library. This is where providing good docstrings helps a developer to understand the API as they are using editors that support “intellisense” like help.</li><li>Other developers within the same codebase, including future-self, who maintain the code a long time into the future.</li></ol><p>For the purposes of this article, we are ignoring the first use case and focused solely on comments that support maintenance.</p><p>Ideally, we want the code to be as comment-free as possible so that the code speaks for itself. Comments also need to be maintained. 
When they are not, they tend to drift out of sync with the code. This can be worse than no comments at all, causing confusion and wild-goose chases.</p><p>One good rule of thumb is to reserve comments for explaining&nbsp;<strong>why</strong>&nbsp;and/or to provide&nbsp;<strong>context for design decisions</strong>&nbsp;that isn’t obvious from the code.</p><p>For example, maybe the current version of a library you are working with has a known (at the time) limitation with a certain API and there is a suggested workaround. Adding a comment in the code with a link to the comment on the PR or Issue would be helpful to someone coming in some years later wondering why some awkward use of the library was in play. And if there is now a fix, that context makes it clear the code can be cleaned up.</p><p>Another good prompt is when, during code review, one of your peers asks why something was done a particular way. That might be a good opportunity to answer that question in a comment rather than a reply on the PR.</p><h2 id="single-responsibility">Single Responsibility</h2><p>The&nbsp;<a href="https://en.wikipedia.org/wiki/Single-responsibility_principle?ref=wedgworth.dev">Single Responsibility</a>&nbsp;principle is a pretty well-known one stemming from, but not limited to, object-oriented design.</p><p>The idea is pretty simple: a unit of code (a function, class, or module) should have a single point of responsibility. Martin talks about actors and relationships to them, but I think it is simple enough to think about a “single concern”.</p><p>When a function or class has multiple concerns coupled together, it can introduce edge-case bugs that are hard to track down and fix. It can also make the code harder for a newcomer to the code base to read and grok.</p><p>Resist the urge to add a quick code branch inside a function during maintenance to add a new side effect. You could very well be unwittingly adding new concerns to a single-concerned function. 
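</p><p>A toy sketch of the idea (hypothetical names, not from any particular codebase): rather than bolting a notification side effect onto a calculation function, keep each concern in its own function and compose them:</p><pre><code class="language-python">def calculate_total(prices, tax_rate):
    """Single concern: compute an order total."""
    subtotal = sum(prices)
    return subtotal + subtotal * tax_rate


def notify_accounting(total):
    """Single concern: report the total elsewhere."""
    print(f"order total: {total:.2f}")


def process_order(prices, tax_rate):
    """Composition keeps the two concerns decoupled."""
    total = calculate_total(prices, tax_rate)
    notify_accounting(total)
    return total</code></pre><p>Each piece can now be changed, or tested, independently.</p><p>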
Take the time to factor out code to keep things single-concerned and decoupled as much as possible.</p><h2 id="meaningful-names">Meaningful Names</h2><p>Variable names like&nbsp;<code>x</code>&nbsp;and&nbsp;<code>a</code>&nbsp;are fine in quick one-off scripts. And&nbsp;<em>maybe</em>&nbsp;even in short-lived loops where it is obvious that the variable is holding some index because the loop is only 2-3 lines long.</p><p>When in doubt though, use names that are meaningful, but short.</p><p>Think more Hemingway and less Faulkner.</p><p>The goal here is to make skimming a class you’ve written quick and easy while minimizing the chance that the reader misses something.</p><p>Variable, class, and property names should be short nouns.</p><p>Methods should be verbs describing the action they perform, maybe sometimes with a hint at what they return (e.g.&nbsp;<code>fetch_person()</code>).</p><p>If your classes, methods and variables are named well, your code will be easier, perhaps even a delight, to read. Code with good names doesn’t come easy.</p><p>There is a lot of thought and care that goes into it and it is well worth the investment.</p><h2 id="no-magic-numbers">No Magic Numbers</h2><p>Similar in motivation to having meaningful names is getting rid of any&nbsp;<em>magic numbers</em>. And by “numbers” I mean strings and other types too. There is never a good argument for unnamed constants in the code.</p><p>Instead of pasting the URL for your API endpoint directly into the&nbsp;<code>requests.post</code>&nbsp;call, set an&nbsp;<code>API_ENDPOINT</code>&nbsp;constant and reference it. Something like a URL may be clear enough on its own, but many other values you end up using will not be.</p><p>Depending on the size of your project, collecting all of these into a single&nbsp;<code>constants.py</code>&nbsp;module will keep things even tidier.</p><h2 id="testable-code">Testable Code</h2><p>I’m not a test-driven development champion. 
I know. We aren’t supposed to admit this. But in the 20+ years I’ve written software professionally, I can count the number of times that TDD felt worthwhile on one, maybe two hands.</p><p>I mostly write tests after the fact and not for every line of code that I produce, generally just the trickier parts. Sometimes, if I’m working on something really tricky where I need some iterations in the code to help me think it through, then I might write some test harnesses to help execute the code.</p><p>That said, as I code, I try to keep top of mind just how testable what I’m writing is: Am I using services that will have to be mocked in a test? Am I coupled to dependencies that I can’t influence through injection? Can I decompose what I am writing into smaller units that would be easier to test if I get around to writing the tests? Can I isolate the nasty bits that will need to be mocked to a smaller piece?</p><p>Thinking in terms of single responsibility helps with this mental framing of the problem.</p><h2 id="easy-to-read">Easy to Read</h2><p>You’ve made sure things all have a single concern, are well named, are testable, and have any relevant context commented. Still, there might be room to make it easier to read.</p><p>Yes, this one is very subjective; however, I think we all generally know it when we see it. That said, two easy, objective rules to add on to some of the previous techniques are:</p><ol><li>code-block length</li><li>too much branching</li></ol><p>Generally speaking, we should refactor any function or method to fit on a typical display without scrolling, avoiding code blocks that are hard to read because they are too long.</p><p>Likewise, heavy, deeply nested branching can be hard to follow and makes it easy for bugs to sneak in. 
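</p><p>One simple flavor of this refactoring (a sketch with hypothetical handler names): replace an&nbsp;<code>if/elif</code>&nbsp;ladder with a dispatch table of single-purpose handlers:</p><pre><code class="language-python">def handle_csv(path):
    """Hypothetical handler: parse a CSV file."""
    return f"parsed {path} as CSV"


def handle_json(path):
    """Hypothetical handler: parse a JSON file."""
    return f"parsed {path} as JSON"


# Dispatch table: each format maps to one single-purpose handler,
# replacing a nested if/elif ladder.
HANDLERS = {
    "csv": handle_csv,
    "json": handle_json,
}


def parse_file(path, fmt):
    handler = HANDLERS.get(fmt)
    if handler is None:
        raise ValueError(f"unsupported format: {fmt}")
    return handler(path)</code></pre><p>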
Refactoring this out to strategy patterns and/or named functions will yield a lot of readability benefits.</p><h2 id="dry">DRY</h2><p><a href="https://en.wikipedia.org/wiki/Don't_repeat_yourself?ref=wedgworth.dev">Don’t Repeat Yourself</a>&nbsp;is a classic engineering principle made well known by&nbsp;<a href="https://en.wikipedia.org/wiki/The_Pragmatic_Programmer?ref=wedgworth.dev">The Pragmatic Programmer</a>&nbsp;(highly recommend this book!).</p><p>This is as simple as it sounds.</p><p>If you find yourself copying and pasting code—stop. Make a function or class.</p><p>If you find yourself having very similar pieces of code—stop. Consider how, with the right abstractions, you could reuse code that might behave a bit differently depending on the inputs.</p><p>Getting really good at keeping your code DRY will make your code more readable, less buggy, and easier to maintain.</p><h2 id="future-thinking">Future Thinking</h2><p>Lastly, as you’re working on any bit of code, always consider what you would want to leave behind if you knew you were going to come back to this code some number of years later to fix something under a tight deadline.</p><p>What could you do now, given all the context in your head about the weaknesses, edge cases, and so on, that would make your job easier in that future scenario?</p><p>It might just be leaving some comments. It might be refactoring or buttoning up a known weak point.</p><p>Only you really know what you could best do to help your future self.</p><hr><p><em>Originally published in November 2022 at </em><a href="https://paltman.com/crafting-software-writing-maintainable-code/?ref=wedgworth.dev">https://paltman.com/crafting-software-writing-maintainable-code/</a></p>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Rooted in Tradition, Growing with Technology]]></title>
                    <description><![CDATA[For 93 years, Wedgworth has pioneered agricultural innovation. We&#x27;re now building on that legacy with custom software and hardware solutions. Join us at wedgworth.dev to explore what we&#x27;re creating and learning.]]></description>
                    <link>https://wedgworth.dev/rooted-in-tradition-growing-with-technology/</link>
                    <guid isPermaLink="false">68e05d682e8da50001cabf77</guid>

                        <category><![CDATA[Announcements]]></category>

                        <dc:creator><![CDATA[Patrick Altman]]></dc:creator>

                    <pubDate>Sat, 04 Oct 2025 07:45:21 -0400</pubDate>

                        <media:content url="https://wedgworth.dev/content/images/2025/10/Hero-Image-2.png" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://wedgworth.dev/content/images/2025/10/Hero-Image-2.png" alt="Rooted in Tradition, Growing with Technology"/> <p>We were born out of innovation in 1932, when Herman Wedgworth left his plant pathology lab to help growers get the nutrition their crops needed. After his passing, <a href="https://en.wikipedia.org/wiki/Ruth_Springer_Wedgworth?ref=wedgworth.dev" rel="noreferrer">Ruth Wedgworth</a> pushed that mission forward.  Ten decades later, we're still building on that legacy–innovating and doing what's best for the grower.</p><p>Today, we're kicking off wedgworth.dev: a space to dive into the technical guts of what we're building–custom software and hardware solutions that:</p><ul><li>give our crop advisors tools for formulating precision blends</li><li>provide world-class inventory tracking to minimize shrink and improve purchasing</li><li>deliver exacting controls over the blending processes for the highest quality in the industry </li><li>and much more</li></ul><p>Here, you can expect posts on wrangling complex problems with <a href="https://www.djangoproject.com/?ref=wedgworth.dev" rel="noreferrer">Django</a> and <a href="https://www.python.org/?ref=wedgworth.dev" rel="noreferrer">Python</a>, crafting great UI/UX with <a href="https://vuejs.org/?ref=wedgworth.dev" rel="noreferrer">Vue</a>, leveraging <a href="https://graphql.org/?ref=wedgworth.dev" rel="noreferrer">GraphQL</a> for efficient data flows, and giving back through open source.</p><p>Whether it's a conceptual deep dive on system integrations that helps growers get the right nutrients faster, or code snippets from our custom ERP build, the goals are simple: <strong>share what we're building</strong>, <strong>spark conversations</strong>, and <strong>learn from each other</strong>. </p><p>Stick around. We've got a lot brewing.</p>]]></content:encoded>
                </item>
    </channel>
</rss>