
TL;DR
You can build a fully automated news blog using Make.com, Google News RSS, AI writers, and WordPress—no coding required.
The workflow uses Anthropic Claude for content generation and ChatGPT for cheaper tasks like titles and slugs, mixing both for cost and quality.
A Google Sheets deduplication check prevents your automation from republishing the same article twice (scrappy but effective).
This is an MVP proof of concept, not a masterpiece, and that’s exactly why it’s useful for getting started right now.
I built an automated news blog that publishes articles while I sleep, and honestly the whole thing is held together with duct tape and good intentions 👋 It runs on Make.com, pulls from Google News RSS feeds, gets rewritten by AI, and lands on my WordPress site looking… surprisingly decent. If you’ve ever seen one of those slick automated content sites and thought “I could never build that,” I’m here to tell you that you absolutely can because I did it, and I’m not a pro at Make. It’s just me trying to figure this out as we go along.
Before you bounce thinking this is some polished corporate tutorial, let me be clear: this is a look-over-my-shoulder walkthrough of a system that works but is far from perfect 🤙 I’ll show you every module, every janky workaround, and I even fix a bug live on camera. That’s the whole point.
What the Finished Product Actually Looks Like
Before I get into the guts of this thing, let me show you what it actually produces. On my skateboarding blog, I have a news section that’s constantly updating with fresh articles related to skateboarding. New stuff coming in all the time. And here’s the thing: I personally do edit them, but you don’t necessarily have to, because they come out looking pretty solid out of the box, with an image and the news article itself.

They’re short on purpose. These are news articles, not blog posts. Totally different formats. Each one comes with a DALL-E generated image, the article content, and all the relevant info. Some I’ve gone in and added a video to manually, but out of the box the automation handles text and image on its own.
The whole system runs every 180 minutes, checks for new articles, and publishes them without me touching a thing. Could it run more often? Yeah. The minimum interval on Make.com is 15 minutes. But I keep it at 180 to avoid duplicates, because this thing is barely optimized, but it just works. It’s an MVP, a Minimum Viable Product.
The Full Make.com Workflow, Module by Module
Here’s how this automated content workflow is actually stitched together. I’ll walk you through every piece in order.
Step 1: Setting Up Your Google News RSS Feed
First thing, you need a source of news. You go to Google News and search for your topic. In my case, skateboarding. You grab that URL from the Google News search results page, then head over to RSS.app and create an account.
Inside RSS.app, you tell it you want to make a new feed, paste in your Google News URL, and it generates an RSS feed URL for you. Takes about a minute for some reason. I don’t know why it takes so long, but it does. Once it spits out that URL, you copy it. That’s your golden ticket.
RSS.app acts as a bridge between Google News and Make.com. Google News doesn’t natively offer clean RSS feeds anymore, so services like RSS.app generate XML feeds from any web source you point them at. (More details: RSS.app documentation.)
Step 2: The RSS Module in Make.com
Back in Make.com, your first module is the RSS feed watcher. You paste that URL from RSS.app right in there. I have mine set to return one item at a time, so every time the scenario runs it only grabs one new article. This keeps things manageable and reduces the chance of the whole thing tripping over itself.
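If you’re curious what the RSS watcher is doing behind the scenes, it boils down to parsing the feed XML and grabbing the newest item. Here’s a rough, hypothetical Python sketch of that logic (the sample feed below is made up; your real feed URL comes from RSS.app):

```python
# Hypothetical sketch of what Make.com's RSS watcher module does: parse the
# feed XML and return the newest (first) item. Stdlib only.
import xml.etree.ElementTree as ET

def newest_item(feed_xml: str) -> dict:
    """Return the title and link of the first item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    item = root.find("./channel/item")  # items are newest-first in the feed
    return {"title": item.findtext("title"), "link": item.findtext("link")}

# Made-up sample feed for illustration.
sample = """<rss version="2.0"><channel>
  <title>Skateboarding News</title>
  <item><title>New skatepark opens</title><link>https://example.com/a</link></item>
  <item><title>Older story</title><link>https://example.com/b</link></item>
</channel></rss>"""

print(newest_item(sample))
# {'title': 'New skatepark opens', 'link': 'https://example.com/a'}
```

The “one item at a time” setting is the same idea: process only the top of the feed on each run.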
Step 3: Google Sheets Deduplication (The Duct Tape Filter)
Here’s where it gets a little scrappy. After the RSS module grabs an article, it checks a Google Sheets spreadsheet to see if that article’s URL already exists in Column A. If it does? Stop. Don’t continue. If it doesn’t exist, move forward.
That’s it. That’s the deduplication system. Is there a better way? Absolutely, you could do dynamic date-based filtering or use Make.com’s built-in “determine where to start” setting on the RSS module. But this works, and I built it in like five minutes, so I’m rolling with it.
The Google Sheets method works fine at low volumes, but if you crank this to run every 15 minutes across multiple feeds, your spreadsheet will get bloated fast. Consider switching to a date-based filter once you’re past the proof-of-concept stage.
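For the curious, the dedup logic is trivial to sketch in code. This is a hypothetical Python version of both approaches: the URL check I actually use, and the date-based filter you might upgrade to later:

```python
# Hypothetical sketch of the dedup check. Column A of the Google Sheet is a
# list of URLs already processed; if the new URL is in it, the scenario stops.
from datetime import datetime, timedelta

def is_new_article(url: str, column_a: list) -> bool:
    """True if this URL hasn't been published before."""
    return url not in set(column_a)

def passed_date_filter(pub_date, interval_minutes=180):
    """The cleaner alternative: only keep items published since the last run."""
    return datetime.now() - pub_date < timedelta(minutes=interval_minutes)

seen = ["https://example.com/a", "https://example.com/b"]
print(is_new_article("https://example.com/c", seen))  # True
print(is_new_article("https://example.com/a", seen))  # False
```

The date-based version never grows a spreadsheet, which is why it scales better than the URL list.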
Step 4: Scrape the Full Article Content
After deduplication confirms the article is new, it hits a web content scraper module. Now this one’s a bit different, it’s not a native Make.com module. It’s a custom one made by a YouTuber named Hassan, and you have to download and install it into your Make.com account. You plug in the API key and it scrapes the full content from that article URL.
Once scraped, the content gets added as a new row in the Google Sheet along with the URL. So now your spreadsheet is both your deduplication database and your content archive.
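I can’t show you the internals of Hassan’s module, but conceptually a scraper like this fetches the page and strips the HTML down to readable text. Here’s a rough stdlib Python sketch of just the text-extraction half (the fetching part is omitted):

```python
# Hypothetical sketch of the text-extraction step: strip HTML to plain text,
# skipping non-content tags like scripts and navigation.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a tag we want to ignore

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.parts)

print(html_to_text("<article><h1>Title</h1><script>x()</script><p>Body text.</p></article>"))
# Title Body text.
```

A real scraper does much more (boilerplate removal, encoding handling), which is exactly why an off-the-shelf module is worth using.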
Step 5: AI Content Generation (Anthropic, Not ChatGPT)
This is where it gets interesting. The scraped content gets passed to an Anthropic Claude module, and this is a deliberate choice. I use Anthropic, not ChatGPT because it creates better content for me. That’s been my experience.
My prompt isn’t the biggest in the world, and it’s not the shortest either. It includes my tone of voice and specifies that I want a news article, not a blog post. Those are completely different things, and if your prompt doesn’t make that distinction, you’re gonna get 1,500-word opinion pieces when you wanted a 300-word news brief. Spell it out clearly.
The way variables work in Make.com is the key to this whole system.
After the first Anthropic module generates the article, I actually have a second Anthropic module right after it. There’s no really good way to have a back-and-forth conversation with AI inside Make.com right now, so you just have to pile a bunch of these on top of each other. The first one creates the content, the second one edits it: adds things, removes things, makes sure it’s tighter.
And honestly? Ideally you’d add three or four more of these passes. Each module finishes and outputs a new variable, so the next Anthropic module can take that variable and say, hey, now with all this polished content, do this next thing. That’s how you get a really good piece of content. Iterative passes help.
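To make the chaining idea concrete, here’s a hypothetical Python sketch of what stacked Anthropic modules amount to: each pass is a single prompt that receives the previous pass’s output as its input. The `fake_model` stub stands in for a real API call:

```python
# Hypothetical sketch of chained single-shot modules: draft -> edit -> ...
# call_model stands in for the actual Anthropic API call.
def chain_passes(source_text: str, instructions: list, call_model) -> str:
    """Run the text through one model call per instruction,
    feeding each output into the next pass."""
    text = source_text
    for step in instructions:
        prompt = f"{step}\n\n---\n{text}"
        text = call_model(prompt)
    return text

passes = [
    "Write a 300-word news article (not a blog post) in my tone of voice:",
    "Edit the article below: tighten it, fix errors, remove filler:",
]

# Stub model for illustration; swap in a real client call.
fake_model = lambda prompt: f"[output of: {prompt.splitlines()[0]}]"
print(chain_passes("scraped article text", passes, fake_model))
# [output of: Edit the article below: tighten it, fix errors, remove filler:]
```

Adding a third or fourth pass is just appending another instruction to the list, which mirrors dropping another module into the scenario.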
Why I Mix Claude + ChatGPT
Why Anthropic Claude
- Better long-form content quality for news articles
- More natural tone
- Stronger at following detailed voice instructions
Why ChatGPT (for other tasks)
- Cheaper API calls
- Good enough for titles, slugs, excerpts
- Don’t burn premium tokens on a five-word slug
Step 6: Titles, Slugs, and Metadata (Where ChatGPT Earns Its Keep)
After Anthropic does the heavy creative lifting, I switch over to ChatGPT for the smaller stuff. Post title. URL slug. Excerpt. Each one is a separate OpenAI module, and each creates its own variable that I reference later when publishing to WordPress. It’s modular on purpose.
This is pure cost management. Anthropic does the heavy lifting on content quality, ChatGPT handles the quick-hit metadata tasks. ChatGPT is a little bit less expensive, so why pay more when you don’t have to? Spend tokens where it matters.
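The cost split is really just a routing rule. A tiny, hypothetical sketch (the model names are placeholders, not recommendations):

```python
# Hypothetical routing rule for the Claude/ChatGPT cost split:
# expensive long-form work goes to Claude, cheap metadata goes to a
# smaller OpenAI model. Model names are illustrative placeholders.
METADATA_TASKS = {"title", "slug", "excerpt"}

def pick_model(task: str) -> str:
    return "gpt-4o-mini" if task in METADATA_TASKS else "claude-sonnet"

print(pick_model("article"))  # claude-sonnet
print(pick_model("slug"))     # gpt-4o-mini
```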
Step 7: Image Generation and WordPress Publishing
DALL-E generates a featured image based on the article content. That image gets uploaded to WordPress as media first, and then the WordPress post gets created referencing that uploaded media ID plus all the variables from the previous modules: title, slug, excerpt, and the full HTML content.
After publishing, I use a Get Post module to grab the new post’s ID and URL, then update my Google Sheet with the editor URL and live post URL. The final row also gets marked as “complete.”
And then the cherry on top: an email module fires off letting me know a new article just went live. Subject line includes the post title so I can glance at it and know what published without even opening the email. Simple, but effective.
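The publishing order matters: media upload first, then the post referencing the returned media ID. Here’s a hypothetical sketch of the post payload you’d send to the WordPress REST API’s `/wp/v2/posts` endpoint, assuming the media upload already returned an ID (all values are placeholders):

```python
# Hypothetical sketch of the WordPress REST API post payload. The media ID
# comes back from a prior upload to /wp-json/wp/v2/media.
def build_post_payload(title, slug, excerpt, html, media_id):
    return {
        "title": title,
        "slug": slug,
        "excerpt": excerpt,
        "content": html,
        "featured_media": media_id,  # ties the uploaded image to the post
        "status": "publish",
    }

payload = build_post_payload(
    "New Skatepark Opens", "new-skatepark-opens",
    "A new park just opened downtown.", "<p>Full article HTML here.</p>", 1234,
)
print(payload["featured_media"])  # 1234
```

Make.com’s WordPress modules assemble this for you; the sketch just shows why the media upload has to come first.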
Why I Still Manually Review (And You Should Too)
Look, this thing runs automatically, and most of the time it produces solid output. But I wanna be real about something: I sometimes go back and compare what the AI produced against the original source it was given. And sometimes you’ll find that, whoa, it actually has some hallucinations, which is really not good. You need a review loop.
AI hallucinations in news articles are a big deal. You can’t have your automated skateboarding news blog claiming Tony Hawk landed a trick he never did. That’s how you lose credibility overnight. So I keep all the original scraped content in my spreadsheet specifically so I can compare it against what the AI produced and catch any invented “facts.” Keep the source material.
AI-generated news content can and will hallucinate details like names, dates, and event outcomes. Always keep your source material accessible for comparison. An automated workflow that publishes unchecked misinformation will damage your site’s reputation faster than it built it.
The “It’s Not Perfect” Disclaimer (That’s Actually the Point)
I fixed a bug on camera during the walkthrough. The “original published date” field in my spreadsheet was pulling from the wrong variable: it was set to a generic “result” instead of the specific module’s output. Took me two seconds to spot and fix once I actually looked at it. So you see me fixing something right on the spot. That’s the real process.
That’s the whole vibe here. It’s just a proof of concept, but it does work. I don’t have filters for location or date on the RSS feed. The deduplication is basic. The markdown-to-HTML parser might not even be working properly yet. And you know what? None of that matters because the system runs and publishes real articles to a real website.
The guides I’ve been reading describe a more polished version of this with five stages: Fetch, Filter, Analyze, Enrich, and Deliver, with semantic filtering and metadata enrichment. Cool. I’ll get there eventually. But the MVP version that I built in an afternoon is already saving me hours every week. Solopreneurs and small content creators often spend a huge chunk of their time on repetitive admin and content tasks like this. Getting any of that back is a win.
Final Thoughts
The whole point of showing you this messy, functional, duct-taped-together system is that you don’t need to be an expert to build something that works. I built this Make.com workflow as a proof of concept, and it publishes real news articles to a real WordPress site every few hours without me doing anything. Is it perfectly optimized? No. Does it have filters for location or advanced date parsing? Nope. Do I sometimes find a hallucinated detail I have to fix? Yep. But it runs, and it gives me my time back. That’s the win.
So if you’ve been sitting on the idea of an automated news blog because you think you need to master Make.com first, stop waiting. Build the ugly version. Fix the bugs as they show up, like I did on camera. Stack a few more AI modules for better quality when you’re ready. I don’t have too many answers though… I’m not a pro at Make. It’s just me trying to figure this out as we go along. And honestly, that’s more than enough to get started.

Frequently Asked Questions
Can I use this for a niche other than skateboarding?
Yeah, absolutely. The niche is determined entirely by what you search for on Google News. Replace “skateboarding” with “sustainable fashion” or “local real estate” or whatever your blog covers, generate a new RSS feed URL through RSS.app, and the rest of the workflow stays identical. The feed defines the niche.
How much does this cost to run?
Depends on volume, but it’s surprisingly cheap. Make.com’s free tier gives you 1,000 operations per month. The Anthropic and OpenAI API calls are pennies per article; we’re talking maybe $0.05–0.15 per published piece depending on article length and which models you use. RSS.app has a free tier too. For a blog publishing a few articles a day, you’re looking at roughly $20–30/month total across all services, though that can vary based on your model choices and volume.
Will Google penalize my site for AI-generated content?
Google’s current stance is that they care about quality, not method. If your AI-generated news articles are accurate, well-written, and provide value to readers, you’re fine. The risk comes from publishing low-quality, unedited AI slop at scale. That’s why the human review step matters, and why the prompts need to be dialed in for your specific format and tone. Quality is the whole game.
Do I really need the Google Sheets deduplication step?
Make.com’s RSS module has a “determine where to start” option that tracks what it’s already processed. That handles basic deduplication on its own. The Google Sheets method adds a second layer of protection and also doubles as a content archive where you can review everything that’s been scraped and published. Belt and suspenders.
Can I run multiple RSS feeds through one scenario?
You can, but I’d recommend running separate scenarios for different feeds to keep things clean and easier to debug. Stacking multiple feeds into one scenario increases complexity and makes it harder to figure out what broke when something inevitably goes sideways. Keep it debuggable.
Sources and References
- RSS.app — How to Use RSS Feeds with Make.com
- RSS.app — How to Summarize RSS Feed Items with AI via Make.com
- Make.com Community — How to Dynamically Filter RSS Feed Based on Date
- Alibaba.com — How to Build a No-Code AI Newsletter Curator Using Make.com and RSS Feeds
- The AI Hat — The Solopreneur’s Guide to AI Tools for Content Creation
- Google Search Central — Creating Helpful, Reliable, People-First Content
- Make.com — Pricing Plans
