diff --git a/content/articles/2025-08-24-ctlog-3.md b/content/articles/2025-08-24-ctlog-3.md
index c7514b6..8cba7e5 100644
--- a/content/articles/2025-08-24-ctlog-3.md
+++ b/content/articles/2025-08-24-ctlog-3.md
@@ -14,7 +14,7 @@ subsequently it issued hundreds of fraudulent SSL certificates, some of which we
 man-in-the-middle attacks on Iranian Gmail users. Not cool. Google launched a project called
 **Certificate Transparency**, because it was becoming more common
-that the root of trust given to _Certification Authorities_ could no longer be unilateraly trusted.
+that the root of trust given to _Certification Authorities_ could no longer be unilaterally trusted.
 These attacks showed that the lack of transparency in the way CAs operated was a significant risk
 to the Web Public Key Infrastructure. It led to the creation of this ambitious
 [[project](https://certificate.transparency.dev/)] to improve security online by bringing
@@ -49,8 +49,8 @@ yet, take a look at [[zrepl](https://zrepl.github.io/)], a one-stop, integrated
 solution for ZFS replication. This tool is incredibly powerful, and can do snapshot management,
 sourcing / sinking to remote hosts, of course using incremental snapshots as they are native to ZFS.
 
-Once the machine is up, we pass three four enterprise-class storage, in our case 3.84TB Kioxia NVMe
-drives, model _KXD51RUE3T84_ which are PCIe 3.1 x4 lanes, and NVMe 1.2.1 specification with a good
+Once the machine is up, we pass four enterprise-class storage drives, in our case 3.84TB Kioxia
+NVMe, model _KXD51RUE3T84_ which are PCIe 3.1 x4 lanes, and NVMe 1.2.1 specification with a good
 durability and reasonable (albeit not stellar) read throughput of ~2700MB/s, write throughput of
 ~800MB/s with 240 kIOPS random read and 21 kIOPS random write.
 My attention is also drawn to a specific specification point: these drives allow for 1.0 DWPD,
 which stands for _Drive Writes Per
@@ -500,7 +500,7 @@ allow the Static CT logs (regardless of being Sunlight or TesseraCT) to serve ve
 
 ## What's Next
 
-I need to spend a little bit of time thinking about rate limites, specifically write-ratelimits. I
+I need to spend a little bit of time thinking about rate limits, specifically write-ratelimits. I
 think I'll use a request limiter in upstream NGINX, to allow for each IP or /24 or /48 subnet to
 only send a fixed number of requests/sec. I'll probably keep that part private though, as it's a
 good rule of thumb to never offer information to attackers.
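
For the write-ratelimit idea in the last hunk, NGINX's `limit_req` module is the usual tool. A minimal sketch; the zone name, rate, burst, and upstream name are hypothetical (the article deliberately keeps its real numbers private), and keying by /24 or /48 subnet rather than single IP would additionally require a `map` or `geo` block to derive the subnet from the client address:

```nginx
# Hypothetical sketch: throttle CT submission endpoints per client IP.
# Replacing $binary_remote_addr with a subnet key (via map/geo) would
# give the per-/24 or per-/48 limiting described in the article.
limit_req_zone $binary_remote_addr zone=ct_writes:10m rate=5r/s;

server {
    listen 443 ssl;

    # The RFC 6962 submission endpoints are the write path worth limiting.
    location ~ ^/ct/v1/(add-chain|add-pre-chain)$ {
        limit_req zone=ct_writes burst=10 nodelay;
        proxy_pass http://ctlog_backend;  # hypothetical upstream name
    }
}
```

Read paths (`get-sth`, tile fetches) would stay unthrottled or get a much looser zone, since the Static CT design already makes them cheap to serve.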
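
The 1.0 DWPD figure mentioned in the second hunk translates into a total endurance budget for each drive. A quick sketch of the arithmetic; note that the 5-year warranty window is my assumption (typical for this drive class, but not stated in the article):

```python
# Endurance budget implied by a DWPD (Drive Writes Per Day) rating.
# DWPD means the full drive capacity can be written once per day,
# every day of the warranty period, without exceeding rated wear.
def endurance_pb(capacity_tb: float, dwpd: float, warranty_years: int) -> float:
    """Total data writable over the warranty, in petabytes (decimal units)."""
    return capacity_tb * dwpd * 365 * warranty_years / 1000

# 3.84 TB drive at 1.0 DWPD, assuming a 5-year warranty window:
print(endurance_pb(3.84, 1.0, 5))  # ≈ 7.0 PB written
```

At the article's ~800MB/s sustained write throughput, that budget would take years to exhaust, so 1.0 DWPD looks comfortable for a CT log workload.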