Hey folks!
Quick tooling post today. Before moving on to the next services, I want to share the tools I use to visualize and inspect data in MongoDB, Kafka, and RabbitMQ during development.
Without these tools, debugging an asynchronous data pipeline would be much harder — you're essentially "blind" to what's happening inside the containers.
MongoDB — MongoDB Compass
MongoDB Compass is MongoDB's official GUI. It's free and available for Windows, macOS, and Linux.
What you can do:
- Browse collections and documents
- Run queries with visual filters
- See indexes created by Flyway/code
- Inspect document schemas
- Monitor performance metrics
How to connect to the local container:
URI: mongodb://root:password@localhost:27017/?authSource=admin
After connecting, you'll see the market_data_db database with the price_history collection being populated by broker-market-data-api on each ingestion round.
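If you'd rather sanity-check the same data from code, here's a minimal sketch using the official MongoDB Java sync driver. Note that the timestamp field name is an assumption; adjust it to whatever broker-market-data-api actually writes:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class PriceHistoryCheck {
    public static void main(String[] args) {
        // Same URI Compass uses; credentials come from docker-compose.yml
        String uri = "mongodb://root:password@localhost:27017/?authSource=admin";
        try (MongoClient client = MongoClients.create(uri)) {
            MongoCollection<Document> prices = client
                    .getDatabase("market_data_db")
                    .getCollection("price_history");

            System.out.println("documents: " + prices.countDocuments());

            // "timestamp" is an assumed field name; adjust to your schema
            Document latest = prices.find()
                    .sort(Sorts.descending("timestamp"))
                    .first();
            System.out.println("latest: " + latest);
        }
    }
}
```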
Download: mongodb.com/products/tools/compass
Kafka — Offset Explorer
Offset Explorer (formerly Kafka Tool) is a GUI for inspecting Kafka clusters. It lets you see topics, partitions, messages and offsets without needing the command line.
What you can do:
- List all topics in the cluster
- See messages from each partition in real time
- Inspect the content (key + value) of each message
- Monitor consumer group offsets
- Check whether consumers have lag
How to connect to the local container:
Bootstrap Servers: localhost:9092
After connecting, you'll see the trading-assets-market-data-v1 topic with messages published by broker-market-data-api. Each message will have the ticker as key (e.g. PETR4) and the quote JSON as value.
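If you prefer to peek at the topic from code, a throwaway consumer does the same job as the GUI. This is just a sketch, assuming both key and value are plain strings (which matches what Offset Explorer displays):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuoteTopicPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Throwaway group id, so this peek doesn't disturb real consumers
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "quote-peek");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("trading-assets-market-data-v1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // key = ticker (e.g. PETR4), value = quote JSON
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```

Using a one-off group.id here means the peek consumer tracks its own offsets and never touches the lag of your real consumer groups.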
Download: kafkatool.com
RabbitMQ — Management UI (already included)
RabbitMQ comes with a built-in web interface. In our docker-compose.yml, port 15672 is exposed:
URL: http://localhost:15672
User: admin
Pass: admin_pass
What you can do:
- View queues, exchanges and bindings
- Inspect messages in queues
- Manually publish messages to test consumers
- Monitor message rates and queue depth
- See the DLQ (Dead Letter Queue) when messages fail
This is the most useful tool for debugging the flow between trading-broker-order and b3-matching-engine.
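To test that flow end to end, you can also publish from code instead of the Management UI. The sketch below is illustrative only: the queue name and order payload are assumptions, not the real contract between trading-broker-order and b3-matching-engine:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class TestOrderPublisher {
    public static void main(String[] args) throws Exception {
        // Same credentials as the Management UI
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setUsername("admin");
        factory.setPassword("admin_pass");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Queue name and payload are hypothetical; use the real names
            // from trading-broker-order / b3-matching-engine
            String queue = "b3.orders";
            String payload = "{\"ticker\":\"PETR4\",\"side\":\"BUY\",\"quantity\":100}";
            // Publishing to the default exchange routes directly to the queue
            channel.basicPublish("", queue, null, payload.getBytes(StandardCharsets.UTF_8));
            System.out.println("published test order to " + queue);
        }
    }
}
```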
Workflow Tip
During development, I use all three tools in parallel:
- Offset Explorer to confirm broker-market-data-api is publishing messages to Kafka
- MongoDB Compass to confirm the history is being persisted
- RabbitMQ Management to confirm orders are arriving in the B3 queues
This turns an opaque asynchronous pipeline into something completely observable.
What's Next?
With the tools set up, in the next post we build b3-market-sync — the Java service that consumes quotes from Kafka and synchronizes them to Redis, feeding the Matching Engine with real-time prices.
About the Series
⬅️ Previous Post: Market Data Integrator
➡️ Next Post: Syncing the Real Market: Consuming Brapi and Feeding Redis with Spring Boot
Series Index: Series Roadmap
Links: