
Transform RSS feeds into Podcasts using AI

·5 mins
ia rss podcast audio python code
Romain Boulanger
Infra/Cloud Architect with DevSecOps mindset

Keeping up with technology: an essential activity for developers

Keeping up with technology is, I would say, crucial in our jobs. Concepts and tools change constantly, and staying up to date is key to avoiding shipping outdated components and to patching the latest security flaws.

However, dedicating time to this task is never easy…

I still prefer podcasts as my main format, mainly because I absorb much more information by listening to audio content than by reading, but also because I can listen to them anywhere: on public transport, at the beach, or at the gym.

So I had an idea…

In this article, I would like to present a small project that I wanted to create specifically for summer: transforming RSS feeds into podcasts using local AI thanks to the power of Apple Silicon chips.

Please be aware that macOS will be the reference platform throughout the remainder of this post. Some adjustments may be necessary to run the script on other operating systems.

Designing your own script: AI as a co-pilot

As you know, I tend to develop scripts for setting up infrastructure rather than coding applications.

Nevertheless, Python is a language I am familiar with and seems well suited to this project.

In addition, I used three Artificial Intelligence engines to assist me in this creation, and they provided about 80% of the required code:

  • Claude (Sonnet 4): Very strong in application code generation;
  • Gemini (2.5 Pro): Suggests highly relevant improvements to the generated code for very specific needs;
  • Perplexity (Pro): Searches and finds the best Python libraries to meet my needs.

Of course, I could have used other assistants or just one of the three, but this allows me to iterate and come up with something that is perfectly suited to my requirements.

My main goal: to stay local and not depend on proprietary solutions. To do this, I used Ollama and mlx-audio.

Ollama: Run LLMs locally in a private way

Ollama is an open-source tool that greatly simplifies the deployment and use of large language models (LLMs) on a personal machine (macOS in my case).

This tool has an API that will be used in the Python script, but you can also use a graphical interface such as open-webui or Ollamac if you want something native.

Why Ollama?

  • Privacy: Everything is executed locally, nothing leaves your computer;
  • Cost: Ollama offers a very wide catalogue of models that are free to use;
  • Control: You choose the model that best suits your needs;
  • Offline access: No internet connection is required to interact with the model.

In the code, Ollama’s role is crucial. A blog post is written to be read, not to be listened to. The structure, code blocks, lists, and tone are not always suited to an audio format.

An LLM via Ollama will therefore be used to act as an “audio content producer”. The script will send the raw text of the blog post to the LLM with a specific prompt, defined within the code.

In addition, the LLM will also be tasked with translating the blog posts into the default language used by the voice model, in this case English.
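
In practice, this hand-off is an HTTP call to Ollama's local REST API. Here is a minimal sketch of that step; the prompt wording and the llama3.1 model name are illustrative choices, not necessarily the ones used in the actual script:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_prompt(article_text: str, language: str = "English") -> str:
    """Wrap the raw article in 'audio content producer' instructions."""
    return (
        f"Rewrite the following blog post as a spoken podcast script in {language}. "
        "Drop code blocks, lists and headings; keep a natural, conversational tone.\n\n"
        + article_text
    )

def generate_script(article_text: str, model: str = "llama3.1") -> str:
    """Send the article to the local Ollama instance and return the spoken script."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(article_text),
        "stream": False,  # ask for the whole answer in one JSON object
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes through `localhost:11434`, nothing leaves the machine, which is exactly the privacy property described above.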

mlx-audio: Speech synthesis on Apple Silicon

Once the spoken script has been generated, a voice is needed. This is where mlx-audio comes in.

mlx-audio is a Python library specialising in audio processing, designed specifically to take advantage of the Apple Silicon architecture (from M1 to M4 chips) via Apple’s MLX framework.

Why this library rather than another?

  • Performance: By directly using the Unified Memory and hardware accelerators of Apple chips, audio generation is extremely fast;
  • Efficiency: The inference of text-to-speech (TTS) models is optimised, consuming fewer resources than a generic solution;
  • Confidentiality: Like Ollama, synthesis is done entirely on the machine.

Here’s how it works: the Python script takes the text generated by the LLM and passes it to a TTS model loaded via mlx-audio. The library then converts this text into an audio waveform, which is saved as a .wav file.

There are a wide variety of TTS models available, but I opted to use the default ones.
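
That synthesis step could be sketched as follows. The `generate_audio` helper reflects mlx-audio's documented Python entry point, but check the project's README for the exact signature in your version; the output naming here is an illustrative assumption:

```python
from pathlib import Path

def wav_path(out_dir: str, prefix: str) -> Path:
    """Build (and prepare) the path where the .wav file should land."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return out / f"{prefix}.wav"

def synthesize(script: str, out_dir: str = "./outputs", prefix: str = "podcast") -> Path:
    """Turn the LLM-generated script into a .wav file via mlx-audio."""
    # Imported lazily: mlx-audio only runs on Apple Silicon machines.
    from mlx_audio.tts.generate import generate_audio
    target = wav_path(out_dir, prefix)
    generate_audio(
        text=script,
        file_prefix=str(target.with_suffix("")),  # the library appends the extension
        audio_format="wav",
    )
    return target
```

On an M-series Mac this runs entirely on-device; no audio or text is sent anywhere.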

This integration of Ollama for content processing and mlx-audio for voice generation creates a fully automated, locally-hosted podcast production workflow.
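
Upstream of both tools sits the RSS feed itself. Pulling titles and links out of a feed takes only the standard library; this is a simplified stand-in for the script's own feed handling, not its actual code:

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_text: str, max_articles: int = 2) -> list[dict]:
    """Extract title/link pairs from the <item> entries of an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    articles = []
    for item in root.iter("item"):
        articles.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
        if len(articles) == max_articles:
            break
    return articles

def fetch_feed(url: str, max_articles: int = 2) -> list[dict]:
    """Download the feed and return the newest articles' metadata."""
    with urllib.request.urlopen(url) as resp:
        return parse_feed(resp.read().decode(), max_articles)
```

Each extracted link would then be fetched, cleaned, and fed through the Ollama-to-mlx-audio pipeline described above.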

Want to give it a try?

Without further ado, here is the code repository to generate your own podcasts:

A brief reminder:

  • The script works on macOS with an Apple Silicon chip;
  • Ollama must be installed and the model of your choice downloaded;
  • You need to create a Python virtual environment (version 3.10+) and download the required dependencies using the requirements.txt file.

Once the prerequisites are met, here is the magic command:

python rss_to_podcast.py --rss-url https://blog.filador.ch/en/index.xml --site-name "Filador" --max-articles 2

And, of course, the result:

=== Filador Comprehensive Extract Generator with Audio ===

✅ Ollama is accessible
🔍 Fetching the latest 2 articles from RSS feed...
🔍 Fetching RSS feed from https://blog.filador.ch/en/index.xml
✅ Found 17 items in RSS feed
2 articles found

📖 Article 1/2: IaC Security: OpenTofu vs Terraform
🔗 URL: https://blog.filador.ch/en/posts/iac-security-opentofu-vs-terraform/
📅 Published: Tue, 27 May 2025 07:35:56 +0000
✅ Content extracted (7365 characters)
🤖 Generating comprehensive extract...
📋 Comprehensive Extract:
   Welcome to a look at the evolving landscape of Infrastructure as Code, specifically comparing OpenTofu and Terraform. This discussion stems from a recent Silicon Chalet Meetup, where we 
[...]
✅ Audio generated: ./outputs/filador_extracts_2025-06-07.wav
✅ Audio generation completed! (./outputs/filador_extracts_2025-06-07.wav)

📁 Files generated:
   - Text extracts: ./outputs/filador_extracts_2025-06-07.txt
   - Audio: ./outputs/filador_extracts_2025-06-07.wav

🎉 Processing completed!

What could be better than listening to your own articles as a podcast? :)

The arguments --rss-url and --site-name are the two required parameters; the rest can be found in the documentation.
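
For reference, an argument parser matching those flags could be sketched like this; only --rss-url, --site-name and --max-articles appear in this post, so any other options the script supports are left out:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI matching the invocation shown above."""
    parser = argparse.ArgumentParser(
        description="Turn the latest posts of an RSS feed into a podcast."
    )
    parser.add_argument("--rss-url", required=True,
                        help="URL of the RSS feed to process")
    parser.add_argument("--site-name", required=True,
                        help="Site name used in titles and output filenames")
    parser.add_argument("--max-articles", type=int, default=2,
                        help="How many recent articles to process")
    return parser
```

Omitting either required flag makes argparse exit with a usage message, which matches the behaviour described here.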

For a summer filled with podcasts…

You now have everything you need to create your own podcasts using your favourite RSS feeds!

Of course, a few adjustments are necessary depending on what you want to achieve, such as adding new languages, creating an intro, varying the voices for each article, etc. Nevertheless, for a first version, we have something functional and ready to use.

Feel free to use the code or fork it! Have a great holiday! ☀️
