ai Stop Checking Dashboards — Let n8n Email You Google Analytics Reports Instead! Do you still open Google Analytics every Monday to see if your website’s traffic went up? Yeah… same here — until I automated the whole thing. In this post, we’ll create a workflow that fetches your site metrics from Google Analytics, formats them into clean HTML, and emails them…
ai LLMs at Ludicrous Speed: Dockerizing vLLM for Real Apps. If you’ve ever watched your GPU twiddle its thumbs between prompts, this one’s for you. In this post we’ll cover what vLLM is, why it’s fast, how to run it with Docker Compose, and how to test it with real calls. I’ll also show concrete…
ai Shrinking Giants: Understanding LLM Quantization Models (Q2, Q4, Q6, and Friends). Why Quantization Matters: Large Language Models (LLMs) are huge. Even a “small” 7B-parameter model can chew up 14+ GB in FP16 (16-bit floating point). If you’ve tried running one locally without a beefy GPU, you’ve probably noticed your machine crying in pain—or worse, swapping memory like…
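The “14+ GB” figure in that teaser is easy to sanity-check: at b bits per weight, a model’s weights alone take roughly params × b / 8 bytes. A minimal sketch (the helper name and the 1 GB = 10⁹ bytes convention are my own assumptions; this counts weights only and ignores activation and KV-cache overhead):

```python
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight-only memory footprint in GB (10^9 bytes)."""
    # total bits = params * bits_per_weight; divide by 8 for bytes, 1e9 for GB
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model at various precisions (weights only):
print(model_size_gb(7, 16))  # FP16 -> 14.0 GB, matching the "14+ GB" above
print(model_size_gb(7, 4))   # ~Q4  -> 3.5 GB
print(model_size_gb(7, 2))   # ~Q2  -> 1.75 GB
```

Real GGUF quant files run slightly larger than this back-of-the-envelope number because formats like Q4_K keep per-block scales and some layers at higher precision.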
c# BattleBots + MCP = Fun: Building Custom Tools in C# That Your IDE and Workflows Understand. AI copilots and IDE extensions are evolving fast, and Microsoft’s Model Context Protocol (MCP) is emerging as the glue that makes custom tools discoverable and usable by editors like VS Code and Visual Studio Insiders. For this post, I’ll be using .NET 10 along with Visual…
Technical Build Your Own Local AI Automation Hub with n8n + Ollama (No Cloud Required!) If you’ve been playing with local LLMs like Llama 3.1 (8B) and are a fan of automation tools like n8n, you’re in for a treat. Today, we’ll connect n8n to your local Ollama instance (with Open WebUI) using Docker Compose. The result? Automated AI workflows that…
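For a sense of what that Compose setup looks like, here is a minimal sketch of the two core services; the service names, volume names, and image tags are my own assumptions (n8n’s web UI defaults to port 5678, Ollama’s API to 11434), Open WebUI and GPU passthrough are omitted for brevity:

```yaml
services:
  n8n:
    image: n8nio/n8n          # n8n workflow engine, UI on port 5678
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n   # persist credentials and workflows
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama      # local LLM server, API on port 11434
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama  # persist downloaded models

volumes:
  n8n_data:
  ollama_data:
```

Inside the Compose network, n8n reaches Ollama at http://ollama:11434 (the service name doubles as the hostname), so that is the base URL to put in n8n’s Ollama credentials rather than localhost.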