A high-concurrency web crawler with intelligent rate-limiting.
This Go-based crawler explores web pages concurrently while respecting a global rate limit of one request every 500 ms. It demonstrates how to coordinate goroutines with a sync.WaitGroup and throttle all workers through a single time.Tick channel, ensuring responsible crawling behavior under concurrency.
- Recursive concurrent crawling with depth control
- Global rate limiter to throttle requests
- Efficient use of goroutines and channels
- Mocked fetcher for safe testing and demonstration
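The core idea above can be sketched as follows. This is a minimal, hedged example, not the project's actual code: the URLs, the `fetcher` map, and the helper names are hypothetical stand-ins for the mocked fetcher, and the 500 ms interval matches the rate limit described above.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fetcher is a mocked page fetcher: it maps a URL to the links found on it.
// These URLs are hypothetical placeholders.
var fetcher = map[string][]string{
	"https://example.com/":  {"https://example.com/a", "https://example.com/b"},
	"https://example.com/a": {"https://example.com/"},
	"https://example.com/b": {},
}

// limiter fires every 500 ms; every goroutine receives from it before
// fetching, which throttles the crawler globally across all workers.
var limiter = time.Tick(500 * time.Millisecond)

// crawl visits url, then recursively crawls its links in new goroutines,
// stopping at the given depth and skipping URLs already seen.
func crawl(url string, depth int, seen *sync.Map, wg *sync.WaitGroup) {
	defer wg.Done()
	if depth <= 0 {
		return
	}
	if _, loaded := seen.LoadOrStore(url, true); loaded {
		return // already visited
	}
	<-limiter // block until the global rate limiter allows the next fetch
	links, ok := fetcher[url]
	if !ok {
		fmt.Println("not found:", url)
		return
	}
	fmt.Println("fetched:", url)
	for _, link := range links {
		wg.Add(1)
		go crawl(link, depth-1, seen, wg)
	}
}

func main() {
	var wg sync.WaitGroup
	var seen sync.Map
	wg.Add(1)
	go crawl("https://example.com/", 3, &seen, &wg)
	wg.Wait() // exit only after every spawned crawler has finished
}
```

Because all goroutines share the one `limiter` channel, concurrency stays high while the fetch rate never exceeds two pages per second.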
cd crawl_rate_limiter
go run .
A concurrency-focused simulation of a Twitter-like data stream written in Go. This project demonstrates the classic Producer-Consumer model using goroutines, channels, and error handling.
- A mocked stream of tweets is fetched by the Producer.
- Tweets are passed through a channel to the Consumer.
- The Consumer filters and prints tweets that mention "Go" or "Gopher".
- The program exits cleanly after consuming all data.
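The four steps above can be sketched roughly as below. This is a hedged illustration, not the project's actual code: the `Tweet` struct, the sample data, and the `producer`/`consumer` names are assumptions for demonstration.

```go
package main

import (
	"fmt"
	"strings"
)

// Tweet is a minimal stand-in for one message in the mocked stream.
type Tweet struct {
	Username string
	Text     string
}

// producer pushes a mocked stream of tweets into a channel and closes
// it when the stream is exhausted, which lets the consumer exit cleanly.
func producer(tweets []Tweet) <-chan Tweet {
	out := make(chan Tweet)
	go func() {
		defer close(out)
		for _, t := range tweets {
			out <- t
		}
	}()
	return out
}

// consumer drains the channel and keeps tweets mentioning "Go" or "Gopher".
func consumer(in <-chan Tweet) []Tweet {
	var matched []Tweet
	for t := range in {
		if strings.Contains(t.Text, "Go") || strings.Contains(t.Text, "Gopher") {
			matched = append(matched, t)
		}
	}
	return matched
}

func main() {
	tweets := []Tweet{ // hypothetical sample data
		{"alice", "Just shipped a service in Go!"},
		{"bob", "Lunch was great today."},
		{"carol", "The Gopher mascot is adorable."},
	}
	for _, t := range consumer(producer(tweets)) {
		fmt.Printf("%s: %s\n", t.Username, t.Text)
	}
}
```

Closing the channel in the producer is the key design choice: the consumer's `range` loop terminates automatically once all data is consumed, so no extra "done" signal is needed.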
Learning source: https://getstream.io/blog/goroutines-go-concurrency-guide/
cd twitter_producer_consumer
go run .
This repository implements the concurrency problems list by @loong.
👉️Here