02/19/2026

Designing scalable real-time commenting systems with Redis Pub/Sub

In live-streaming systems, one of the main technical challenges is maintaining real-time communication when thousands of users interact simultaneously. This article reviews how Pig Party’s engineering team redesigned its live-comment architecture using Redis Pub/Sub and batch publishing. The implementation enabled a tenfold increase in concurrent user capacity, reduced bottlenecks, and significantly improved system stability during mass events.

Introduction

Digital products that integrate real-time interaction such as chats, live-stream comments, or virtual event participation face major challenges when user counts grow rapidly. In these scenarios, each new viewer not only consumes content but also generates events and messages that must be distributed to all participants.

Pig Party, an avatar-based social service where users create their own “Pig” and join virtual parties with voice chat and live comments, experienced significant growth in collaborative events and creator-led streams. As these events began to gather thousands of simultaneous viewers, the volume of real-time comments started placing pressure on the existing architecture.

Background: Limitations of the Initial Architecture

Pig Party runs on a Kubernetes-based infrastructure and originally consisted primarily of two components:

  • A stateless API server
  • A stateful chat server

In the original architecture, comment delivery followed this flow:

  • Comments were sent from viewer clients.
  • Messages arrived at the party area (central chat).
  • From there, messages were redistributed to all pods responsible for broadcasting the event.

While this model worked under moderate traffic, it showed weaknesses during large-scale events. Main limitations included:

  • Each new viewer pod increased the number of connections needed to distribute comments.
  • The party area became a central message distribution bottleneck.
  • High volumes of simultaneous comments caused system overload.

Introducing Redis Pub/Sub as an Intermediary

To address message distribution challenges, the team adopted Redis Pub/Sub as an intermediary between message producers and the services that deliver them to users. Redis Pub/Sub uses a publish–subscribe model: publishers send messages to specific channels and subscribers automatically receive messages published to those channels.

In the new architecture:

  • Viewer servers subscribe to a Redis channel.
  • The party area publishes comments to that channel.
  • Redis automatically distributes messages to all subscribers.

Key benefits:

  • Significant reduction in direct connections between services.
  • Decoupling of message producers and consumers.
  • Improved horizontal scalability.
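The fan-out behavior described above can be illustrated with a minimal in-memory analogue of the publish–subscribe model. This is not Redis itself (a real deployment would use Redis's SUBSCRIBE and PUBLISH commands through a client library such as redis-py), and the channel name `party:comments` is purely illustrative:

```python
from collections import defaultdict
from typing import Callable

class MiniPubSub:
    """In-memory analogue of Redis Pub/Sub: publishers and subscribers
    share only a channel name, never a direct connection."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable[[str], None]) -> None:
        # Each viewer server registers a handler for the channel.
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, message: str) -> int:
        # The party area publishes once; the broker fans the message out
        # to every subscriber. The receiver count is returned, mirroring
        # the reply of Redis's PUBLISH command.
        handlers = self._subscribers.get(channel, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

# Two "viewer servers" subscribe to the same channel.
broker = MiniPubSub()
received_a: list[str] = []
received_b: list[str] = []
broker.subscribe("party:comments", received_a.append)
broker.subscribe("party:comments", received_b.append)

# The "party area" publishes a comment once; both subscribers receive it
# without any direct connection between the services.
count = broker.publish("party:comments", "hello from a viewer")
```

The key property is the same one the team relied on: the publisher issues a single operation regardless of how many viewer servers are listening.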

Redesigning the Comment Flow

Redis Pub/Sub also simplified the comment flow inside the system.

The updated process works like this:

  • Users send comments from their clients.
  • The party area receives the messages.
  • Comments are published to a Redis channel.
  • Subscribed viewer servers receive the messages.
  • Comments are delivered to connected spectators.

Advantages of the redesigned flow:

  • Removal of the central distribution bottleneck.
  • Greater flexibility to scale infrastructure horizontally.
  • Better handling of high volumes of real-time messages.
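The party-area side of this flow reduces to a single publish call per comment. A hedged sketch of that step, assuming comments are serialized as JSON and the channel name `party:comments` (a fake client stands in for a real `redis.Redis` instance so the sketch runs anywhere):

```python
import json

def publish_comment(redis_client, user: str, text: str) -> int:
    """Serialize one comment and publish it to the shared channel.

    `redis_client` can be any object exposing a Redis-style
    publish(channel, message) method; in production it would be a
    redis.Redis connection. Returns the number of receivers."""
    payload = json.dumps({"user": user, "text": text})
    return redis_client.publish("party:comments", payload)

class _FakeRedis:
    # Minimal stand-in so the sketch runs without a Redis server.
    def __init__(self) -> None:
        self.published: list[tuple[str, str]] = []

    def publish(self, channel: str, message: str) -> int:
        self.published.append((channel, message))
        return 1

fake = _FakeRedis()
publish_comment(fake, "piglet42", "great stream!")
```

Because the publisher only knows the channel name, viewer servers can be added or removed without touching the party-area code, which is what enables the horizontal scaling described above.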

Optimization via Batch Publishing

Even with Redis improving distribution, extreme comment volumes during massive events still generated a very high number of publish operations. To optimize performance, the team implemented a batch publishing strategy.

The approach:

  • Temporarily buffer incoming comments in an in-memory queue.
  • Aggregate comments over a short time window.
  • Publish them together as a single Redis operation.

Example:

  • Comments are accumulated for 1,000 milliseconds.
  • Multiple comments are sent in a single message.
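A sketch of such a batcher, assuming comments are buffered in memory and flushed as one JSON array; the 1,000 ms window matches the example above, while the class and field names are illustrative:

```python
import json
import time

class CommentBatcher:
    """Buffer incoming comments and publish them as a single message
    once the accumulation window has elapsed."""

    def __init__(self, publish, window_ms: int = 1000) -> None:
        self._publish = publish  # e.g. lambda msg: redis.publish(channel, msg)
        self._window = window_ms / 1000.0
        self._buffer: list[dict] = []
        self._window_start = time.monotonic()

    def add(self, user: str, text: str) -> None:
        # Comments are queued in memory instead of being published one by one.
        self._buffer.append({"user": user, "text": text})
        if time.monotonic() - self._window_start >= self._window:
            self.flush()

    def flush(self) -> int:
        # One publish operation carries every buffered comment.
        count = len(self._buffer)
        if count:
            self._publish(json.dumps(self._buffer))
            self._buffer = []
        self._window_start = time.monotonic()
        return count

# Stand-in publish target; in production this would call Redis PUBLISH.
sent: list[str] = []
batcher = CommentBatcher(sent.append, window_ms=1000)
batcher.add("a", "first!")
batcher.add("b", "hello")
batcher.add("c", "nice party")
flushed = batcher.flush()  # one publish operation carrying three comments
```

A production version would flush on a timer rather than only when a new comment arrives; the on-add check here simply keeps the sketch self-contained.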

Benefits:

  • Fewer total publish operations.
  • Reduced load on Redis.
  • More efficient message transmission.

Recommendations

  • Decouple message-producing systems from distribution systems to avoid bottlenecks during large events.
  • Use publish–subscribe architectures when multiple services need to consume real-time events.
  • Implement batch publishing mechanisms to reduce operation frequency in high-volume systems.
  • Design infrastructures ready for horizontal scaling from early development stages.
  • Favor simple solutions that integrate with existing infrastructure before introducing more complex systems.

Conclusion

Adopting Redis Pub/Sub enabled Pig Party to transform its real-time commenting architecture, removing bottlenecks and significantly improving scalability. By combining a publish–subscribe model with batch publishing, the team reduced server load, optimized message distribution, and increased concurrent user capacity by an order of magnitude during massive events, demonstrating how well-designed architectural decisions can deliver substantial gains in performance and stability.

Glossary

  • Redis Pub/Sub: A publish–subscribe messaging system for real-time communication.
  • Horizontal scalability: The ability to increase capacity by adding more instances or servers.
  • Pod: The basic deployment unit in Kubernetes containing one or more containers.
  • Stateful server: A server that retains data or session state in memory during operation.
  • Message broker: An intermediary system that manages message distribution between services.

Designing scalable real-time commenting systems with Redis Pub/Sub | Meetlabs