How I Managed X/Twitter's Hobby Tier Limit with Vercel Edge Store Cache


Introduction

Recently, I ran into a significant limitation while building my blog, which is based on a Next.js and Tailwind CSS template. X/Twitter's Hobby tier restricts the API to retrieving only 100 posts per month, a substantial constraint for a frequently updated blog that embeds tweets.

This is the story of how I overcame that limit by implementing a caching layer with Vercel's Edge Store and automating the refresh through GitHub Actions.

Context and Background

My blog embeds tweets to make posts more engaging and relevant. However, the free "Hobby" tier of the X/Twitter API severely limits how often tweets can be fetched: just 100 posts per month. Given the frequency and volume of content updates, this was not enough.

Retrieve up to 100 Posts and 500 writes per month with the free tier

Defining the Problem

The core issues were:

  • Retrieval capped at 100 posts per month, insufficient for regular content updates.
  • High risk of quickly exhausting the quota, impacting content freshness and user experience.
  • Need for a cost-effective, automated solution to maintain blog content dynamically.

Initial Investigation

Initially, I explored caching strategies and external storage options that could reduce direct API calls. I considered several approaches:

  • Self-hosted caching mechanisms.
  • Third-party caching providers.
  • Edge storage services provided by Vercel.

Root Cause Analysis

After assessing the solutions, I found:

  • Self-hosted options added complexity and management overhead.
  • Third-party solutions could introduce latency and increase costs.
  • Vercel's Edge Store offered a simple, low-latency solution tightly integrated with my Next.js app.

The Solution

I implemented a caching mechanism using Vercel Edge Store combined with an automated workflow:

  • Set up a GitHub Actions cron job to fetch tweets periodically:
name: Fetch Tweets and Cache

on:
  schedule:
    - cron: '0 0 * * *' # Runs daily at midnight UTC

jobs:
  fetch-and-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Fetch tweets and upload to Edge Store
        # Secret names below are illustrative; match them to your repository secrets.
        env:
          TWITTER_BEARER_TOKEN: ${{ secrets.TWITTER_BEARER_TOKEN }}
          EDGE_STORE_UPLOAD_URL: ${{ secrets.EDGE_STORE_UPLOAD_URL }}
          EDGE_STORE_TOKEN: ${{ secrets.EDGE_STORE_TOKEN }}
        run: |
          npm install
          npm run fetch-tweets-and-upload
  • The fetched tweets are stored in Vercel's Edge Store as a single JSON document (example below), so the blog reads from this cache instead of calling the API directly; a sketch of the upload script follows this list.
{
  "fetched_at": "2025-05-11T21:56:10.334039Z",
  "tweets": [
    {
      "id": 1921127627202363893,
      "text": "Rebooting… #SpaghettiCodeJungle https://t.co/1umhjNjOmk",
      "created_at": "2025-05-10T08:58:13+00:00"
    }
  ]
}
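
The script behind npm run fetch-tweets-and-upload isn't shown above, so here is a minimal sketch of what it could look like. It assumes Node 18+ (for built-in fetch), a hard-coded TWEET_IDS list, and the TWITTER_BEARER_TOKEN, EDGE_STORE_UPLOAD_URL and EDGE_STORE_TOKEN environment variables from the workflow; the upload helper is a placeholder, since the exact write mechanism depends on how the Edge Store is set up.

// scripts/fetch-tweets-and-upload.ts — hypothetical name behind `npm run fetch-tweets-and-upload`
interface CachedTweet {
  id: string;
  text: string;
  created_at: string;
}

// IDs of the tweets embedded in blog posts (illustrative value)
const TWEET_IDS = ["1921127627202363893"];

async function fetchTweets(ids: string[]): Promise<CachedTweet[]> {
  // One batched lookup against the X API v2 instead of one call per page view
  const url = `https://api.twitter.com/2/tweets?ids=${ids.join(",")}&tweet.fields=created_at`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.TWITTER_BEARER_TOKEN}` },
  });
  if (!res.ok) throw new Error(`X API request failed: ${res.status}`);
  const body = (await res.json()) as { data?: CachedTweet[] };
  return body.data ?? [];
}

async function uploadToEdgeStore(payload: unknown): Promise<void> {
  // Placeholder: swap in the actual write mechanism of your Edge Store setup
  const res = await fetch(process.env.EDGE_STORE_UPLOAD_URL!, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${process.env.EDGE_STORE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Edge Store upload failed: ${res.status}`);
}

async function main() {
  const tweets = await fetchTweets(TWEET_IDS);
  // Same shape as the cached JSON shown above
  await uploadToEdgeStore({ fetched_at: new Date().toISOString(), tweets });
  console.log(`Cached ${tweets.length} tweets`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});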

Visual representation of the Twitter cache flow (twitter-cache-banner diagram).
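
In code, the read side of that flow never touches the X API at request time; the blog simply loads the cached JSON. A minimal sketch, assuming the cache document is reachable at an EDGE_STORE_READ_URL (a placeholder; the real read mechanism depends on the Edge Store setup) and follows the JSON shape shown earlier:

// lib/tweets.ts — hypothetical helper used when rendering an embedded tweet
interface CachedTweet {
  id: string | number;
  text: string;
  created_at: string;
}

interface TweetCache {
  fetched_at: string;
  tweets: CachedTweet[];
}

export async function getCachedTweet(id: string): Promise<CachedTweet | undefined> {
  // Reads the document written by the daily workflow; no X API quota is consumed here.
  // `next.revalidate` is Next.js's fetch caching extension (re-fetch at most hourly).
  const res = await fetch(process.env.EDGE_STORE_READ_URL!, {
    next: { revalidate: 3600 },
  });
  if (!res.ok) return undefined;
  const cache = (await res.json()) as TweetCache;
  return cache.tweets.find((tweet) => String(tweet.id) === id);
}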

Results and Validation

Post-implementation:

  • Successfully reduced direct X/Twitter API calls by up to 90%.
  • Dramatically improved application reliability and content freshness.
  • No longer at risk of exhausting the limited Hobby tier quota.

Lessons Learned

Key takeaways include:

  • Leveraging Vercel Edge Store for caching provides simplicity, efficiency, and cost-effectiveness.
  • Automating API interactions through GitHub Actions removes manual refresh work and keeps the cache current without developer intervention.
  • Early identification and addressing of API limitations prevent service disruptions.

Conclusion

Navigating X/Twitter's strict API limits was a valuable learning experience. Using Vercel Edge Store cache in conjunction with automated GitHub Actions workflows effectively resolved the issue, allowing the blog to maintain fresh, engaging content without exceeding API limits.

I could have gone with Vercel's own cron jobs, but I opted for GitHub Actions in case I later want to commit the fetched tweets to the GitHub repository as well.
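
For comparison, the Vercel-native alternative would be a crons entry in vercel.json that hits an API route on the same daily schedule (the /api/fetch-tweets path below is hypothetical):

{
  "crons": [
    {
      "path": "/api/fetch-tweets",
      "schedule": "0 0 * * *"
    }
  ]
}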

Resources