Merge branch 'main' of ssh://git.ipng.ch:222/ipng/ipng.ch
2026-01-04 16:38:37 +01:00
10 changed files with 98 additions and 65 deletions


@@ -17,13 +17,14 @@ to be connected to the industry both physically, in terms of software defined ne
and software companies, and socially, to the Swiss and European networking community.
IPng Networks GmbH provides networking consultancy, hosting, colocation, internet connectivity
-options primarily tailored for the Zurich metropolitan area.
+options primarily tailored for the Zurich metropolitan area. We are experts in self-hosting, and
+on principle only use fully open sourced components to build and run our business.
Rather than dazzle you with pictures of clouds, grandiose projections of our "global IP backbone",
and other claims that small businesses make to appear larger than they are, we're happy to show what
we know, what we own, and how we can help you accomplish your goals if you want to work with us.
-### Keywords: SDN, WDM, IP, Network Design and Consultancy, Hosting, and Colocation.
+### Keywords: VPP/FD.io, Network Design and Consultancy, (Self-)Hosting, and Colocation.
We are proud of our network and the services we operate, because they allow us to provide
predictable and reliable performance. We maintain and grow the network judiciously and with the


@@ -47,8 +47,8 @@ started his career as a network engineer in the Netherlands, where he worked
for Intouch, Freeler, and BIT. He helped raise awareness for IPv6, for example
by launching it at AMS-IX back in 2001. He also operated
[[SixXS](https://www.sixxs.net/)], a global IPv6 tunnel broker, from 2001 through
-to its sunset in 2017. Since 2006, Pim works as a Distinguished SRE at Google
+to its sunset in 2017. Since 2006, Pim works as a Distinguished Software Engineer at Google
in Zurich, Switzerland. In his free time, he goes [[Geocaching](https://geocaching.com)],
-contributes to [[open source](https://github.com/pimvanpelt)] projects, and flies
-model helicopters.
+contributes to [[open source](https://git.ipng.ch/ipng/)] projects, and occasionally
+flies model helicopters.


@@ -8,7 +8,7 @@ Historical context - todo, but notes for now
1. started with stack.nl (when it was still stack.urc.tue.nl), 6bone and watching NASA multicast video in 1997.
2. founded ipng.nl project, first IPv6 in NL that was usable outside of NREN.
-3. attacted attention of the first few IPv6 partitipants in Amsterdam, organized the AIAD - AMS-IX IPv6 Awareness Day
+3. attracted attention of the first few IPv6 participants in Amsterdam, organized the AIAD - AMS-IX IPv6 Awareness Day
4. launched IPv6 at AMS-IX, first IXP prefix allocated 2001:768:1::/48
> My Brilliant Idea Of The Day -- encode AS number in leetspeak: `::AS01:2859:1`, because who would've thought we would ever run out of 16 bit AS numbers :)
5. IPng rearchitected to SixXS, and became a very large scale deployment of IPv6 tunnelbroker; our main central provisioning system moved around a few times between ISPs (Intouch, Concepts ICT, BIT, IP Man)


@@ -185,7 +185,7 @@ function is_coloclue_beacon()
}
```
-Then, I ran the configuration again with one IPv4 beacon set on dcg-1, and still all the bird configs on both IPv4 and IPv6 for all routers parsed correctly, and the generated function on the dcg-1 IPv4 filters file was popupated:
+Then, I ran the configuration again with one IPv4 beacon set on dcg-1, and still all the bird configs on both IPv4 and IPv6 for all routers parsed correctly, and the generated function on the dcg-1 IPv4 filters file was populated:
```
function is_coloclue_beacon()
{


@@ -1,6 +1,8 @@
---
date: "2025-07-26T22:07:23Z"
title: 'Certificate Transparency - Part 1 - TesseraCT'
+aliases:
+- /s/articles/2025/07/26/certificate-transparency-part-1/
---
{{< image width="10em" float="right" src="/assets/ctlog/ctlog-logo-ipng.png" alt="ctlog logo" >}}
@@ -14,7 +16,7 @@ subsequently it issued hundreds of fraudulent SSL certificates, some of which we
man-in-the-middle attacks on Iranian Gmail users. Not cool.
Google launched a project called **Certificate Transparency**, because it was becoming more common
-that the root of trust given to _Certification Authorities_ could no longer be unilateraly trusted.
+that the root of trust given to _Certification Authorities_ could no longer be unilaterally trusted.
These attacks showed that the lack of transparency in the way CAs operated was a significant risk to
the Web Public Key Infrastructure. It led to the creation of this ambitious
[[project](https://certificate.transparency.dev/)] to improve security online by bringing


@@ -14,7 +14,7 @@ subsequently it issued hundreds of fraudulent SSL certificates, some of which we
man-in-the-middle attacks on Iranian Gmail users. Not cool.
Google launched a project called **Certificate Transparency**, because it was becoming more common
-that the root of trust given to _Certification Authorities_ could no longer be unilateraly trusted.
+that the root of trust given to _Certification Authorities_ could no longer be unilaterally trusted.
These attacks showed that the lack of transparency in the way CAs operated was a significant risk to
the Web Public Key Infrastructure. It led to the creation of this ambitious
[[project](https://certificate.transparency.dev/)] to improve security online by bringing
@@ -53,11 +53,11 @@ implementations, TesseraCT or Sunlight, he thinks would be a good fit. One thing
with me: "The community needs _any_ static log operator, so if Google thinks TesseraCT is ready, by
all means use that. The diversity will do us good!".
-To find out if one or the other is 'ready' is partly on the software, but importantly also an the
+To find out if one or the other is 'ready' is partly on the software, but importantly also on the
operator. So I carefully take Sunlight out of its cardboard box, and put it onto the same Dell R630
that I used in my previous tests: two Xeon E5-2640 v4 CPUs for a total of 20 cores and 40 threads,
-and 512GB of DDR4 memory. They also sport a SAS controller. In one machine I place 6pcs 1.2TB SAS3
-disks (HPE part number EG1200JEHMC), and in the second machine I place 6pcs of 1.92TB enterprise
+and 512GB of DDR4 memory. They also sport a SAS controller. In one machine I place 6 pcs 1.2TB SAS3
+drives (HPE part number EG1200JEHMC), and in the second machine I place 6pcs of 1.92TB enterprise
storage (Samsung part number P1633N19).
### Sunlight: setup
@@ -70,7 +70,7 @@ tools is easy enough, there are three main tools:
1. ***skylight***: Which serves the read-path. `/checkpoint` and things like `/tile` and `/issuer`
are served here in a spec-compliant way.
-The YAML configuration file is staight forward, and can define and handle multiple logs in one
+The YAML configuration file is straightforward, and can define and handle multiple logs in one
instance, which sets it apart from TesseraCT which can only handle one log per instance. There's a
`submissionprefix` which `sunlight` will use to accept writes, and a `monitoringprefix` which
`skylight` will use for reads.
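For illustration, a minimal sketch of what such a multi-log file could look like. Only `submissionprefix` and `monitoringprefix` come from the text above; every other key and all hostnames here are hypothetical placeholders, not Sunlight's actual schema.
```
# Illustrative sketch only: apart from submissionprefix and monitoringprefix,
# the key names and hostnames are hypothetical placeholders.
logs:
  - shortname: examplelog2025h2                                  # hypothetical shard name
    submissionprefix: https://examplelog2025h2.log.example.org   # write path, served by sunlight
    monitoringprefix: https://examplelog2025h2.mon.example.org   # read path, served by skylight
  - shortname: examplelog2026h1                                  # a second shard in the same instance
    submissionprefix: https://examplelog2026h1.log.example.org
    monitoringprefix: https://examplelog2026h1.mon.example.org
```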
@@ -621,12 +621,13 @@ to the task of serving the current write-load (which is about 250/s).
* ***S3***: When using the S3 backend, TesseraCT became quite unhappy above 800/s while Sunlight
went all the way up to 4'200/s and sent significantly less requests to MinIO (about 4x less),
-while showing good telemetry on the use of S3 backends.
+while showing good telemetry on the use of S3 backends. In this mode, TesseraCT uses MySQL (in
+my case, MariaDB) which was not on the ZFS pool, but on the boot-disk.
* ***POSIX***: When using normal filesystem, Sunlight seems to peak at 4'800/s while TesseraCT
went all the way to 12'000/s. When doing so, Disk IO was quite similar between the two
-solutions, taking into account that TesseraCT runs MariaDB (which my setup did not use ZFS
-for), while Sunlight uses sqlite3 on the ZFS pool.
+solutions, taking into account that TesseraCT runs BadgerDB, while Sunlight uses sqlite3,
+both are using their respective ZFS pool.
***Notable***: Sunlight POSIX and S3 performance is roughly identical (both handle about
5'000/sec), while TesseraCT POSIX performance (12'000/s) is significantly better than its S3


@@ -14,7 +14,7 @@ subsequently it issued hundreds of fraudulent SSL certificates, some of which we
man-in-the-middle attacks on Iranian Gmail users. Not cool.
Google launched a project called **Certificate Transparency**, because it was becoming more common
-that the root of trust given to _Certification Authorities_ could no longer be unilateraly trusted.
+that the root of trust given to _Certification Authorities_ could no longer be unilaterally trusted.
These attacks showed that the lack of transparency in the way CAs operated was a significant risk to
the Web Public Key Infrastructure. It led to the creation of this ambitious
[[project](https://certificate.transparency.dev/)] to improve security online by bringing
@@ -34,7 +34,7 @@ and [[TesseraCT]({{< ref 2025-08-10-ctlog-2 >}})], two open source implementatio
protocol. In this final article, I'll share the details on how I created the environment and
production instances for four logs that IPng will be providing: Rennet and Lipase are two
ingredients to make cheese and will serve as our staging/testing logs. Gouda and Halloumi are two
-delicious cheeses that pay hommage to our heritage, Jeroen and I being Dutch and Antonis being
+delicious cheeses that pay homage to our heritage, Jeroen and I being Dutch and Antonis being
Greek.
## Hardware
@@ -49,8 +49,8 @@ yet, take a look at [[zrepl](https://zrepl.github.io/)], a one-stop, integrated
replication. This tool is incredibly powerful, and can do snapshot management, sourcing / sinking
to remote hosts, of course using incremental snapshots as they are native to ZFS.
-Once the machine is up, we pass three four enterprise-class storage, in our case 3.84TB Kioxia NVMe
-drives, model _KXD51RUE3T84_ which are PCIe 3.1 x4 lanes, and NVMe 1.2.1 specification with a good
+Once the machine is up, we pass four enterprise-class storage drives, in our case 3.84TB Kioxia
+NVMe, model _KXD51RUE3T84_ which are PCIe 3.1 x4 lanes, and NVMe 1.2.1 specification with a good
durability and reasonable (albeit not stellar) read throughput of ~2700MB/s, write throughput of
~800MB/s with 240 kIOPS random read and 21 kIOPS random write. My attention is also drawn to a
specific specification point: these drives allow for 1.0 DWPD, which stands for _Drive Writes Per
@@ -131,8 +131,9 @@ logs:
```
In the first configuration file, I'll tell _Sunlight_ (the write path component) to listen on port
-`16420` and I'll tell _Skylight_ (the read path component) to listen on port `16421`. I've disabled
-the automatic certificate renewals, and will handle SSL upstream:
+`:16420` and I'll tell _Skylight_ (the read path component) to listen on port `:16421`. I've disabled
+the automatic certificate renewals, and will handle SSL upstream. A few notes on this:
1. Most importantly, I will be using a common frontend pool with a wildcard certificate for
`*.ct.ipng.ch`. I wrote about [[DNS-01]({{< ref 2023-03-24-lego-dns01 >}})] before, it's a very
convenient way for IPng to do certificate pool management. I will be sharing certificate for all log
@@ -149,7 +150,7 @@ for Rennet, and a few days later, for Gouda, are operational this way.
Skylight provides all the things I need to serve the data back, which is a huge help. The [[Static
Log Spec](https://github.com/C2SP/C2SP/blob/main/static-ct-api.md)] is very clear on things like
-compression, content-type, cache-control and other headers. Skylight makes this a breeze, as it read
+compression, content-type, cache-control and other headers. Skylight makes this a breeze, as it reads
a configuration file very similar to the Sunlight write-path one, and takes care of it all for me.
## TesseraCT
@@ -157,16 +158,17 @@ a configuration file very similar to the Sunlight write-path one, and takes care
{{< image width="10em" float="right" src="/assets/ctlog/tesseract-logo.png" alt="TesseraCT logo" >}}
Good news came to our community on August 14th, when Google's TrustFabric team announced their Alpha
-milestone of [[TesseraCT](https://blog.transparency.dev/introducing-tesseract)]. And the release
+milestone of [[TesseraCT](https://blog.transparency.dev/introducing-tesseract)]. This release
also moved the POSIX variant from experimental alongside the already further along GCP and AWS
personalities. After playing around with it with Al and the team, I think I've learned enough to get
-us going in a public instance.
+us going in a public `tesseract-posix` instance.
One thing I liked about Sunlight is its compact YAML file that described the pertinent bits of the
system, and that I can serve any number of logs with the same process. On the other hand, TesseraCT
can serve only one log per process. Both have pro's and con's, notably if any poisonous submission
would be offered, Sunlight might take down all logs, while TesseraCT would only take down the log
-receiving the offensive submission. On the other hand, maintaining separate processes is cumbersome.
+receiving the offensive submission. On the other hand, maintaining separate processes is cumbersome,
+and all log instances need to be meticulously configured.
### TesseraCT genconf
@@ -179,6 +181,8 @@ Sunlight YAML configuration, and came up with a variant like this one:
```
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ cat << EOF | tee tesseract-staging.yaml
+listen:
+- "[::]:8080"
roots: /ssd-vol0/enc/tesseract/roots.pem
logs:
- shortname: lipase2025h2
@@ -205,11 +209,11 @@ private key, from which the _Log ID_ and _Public Key_ can be derived. So off I g
```
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf -c tesseract-staging.yaml gen-key
-Generated /ssd-vol0/enc/tesseract/keys/lipase2025h2.pem
-Generated /ssd-vol0/enc/tesseract/keys/lipase2026h1.pem
-Generated /ssd-vol0/enc/tesseract/keys/lipase2026h2.pem
-Generated /ssd-vol0/enc/tesseract/keys/lipase2027h1.pem
-Generated /ssd-vol0/enc/tesseract/keys/lipase2027h2.pem
+Creating /ssd-vol0/enc/tesseract/keys/lipase2025h2.pem
+Creating /ssd-vol0/enc/tesseract/keys/lipase2026h1.pem
+Creating /ssd-vol0/enc/tesseract/keys/lipase2026h2.pem
+Creating /ssd-vol0/enc/tesseract/keys/lipase2027h1.pem
+Creating /ssd-vol0/enc/tesseract/keys/lipase2027h2.pem
```
Of course, if a file already exists at that location, it'll just print a warning like:
@@ -226,16 +230,16 @@ of the logs:
```
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf -c tesseract-staging.yaml gen-html
-Generated /ssd-vol0/logs/lipase2025h2/data/index.html
-Generated /ssd-vol0/logs/lipase2025h2/data/log.v3.json
-Generated /ssd-vol0/logs/lipase2026h1/data/index.html
-Generated /ssd-vol0/logs/lipase2026h1/data/log.v3.json
-Generated /ssd-vol0/logs/lipase2026h2/data/index.html
-Generated /ssd-vol0/logs/lipase2026h2/data/log.v3.json
-Generated /ssd-vol0/logs/lipase2027h1/data/index.html
-Generated /ssd-vol0/logs/lipase2027h1/data/log.v3.json
-Generated /ssd-vol0/logs/lipase2027h2/data/index.html
-Generated /ssd-vol0/logs/lipase2027h2/data/log.v3.json
+Creating /ssd-vol0/logs/lipase2025h2/data/index.html
+Creating /ssd-vol0/logs/lipase2025h2/data/log.v3.json
+Creating /ssd-vol0/logs/lipase2026h1/data/index.html
+Creating /ssd-vol0/logs/lipase2026h1/data/log.v3.json
+Creating /ssd-vol0/logs/lipase2026h2/data/index.html
+Creating /ssd-vol0/logs/lipase2026h2/data/log.v3.json
+Creating /ssd-vol0/logs/lipase2027h1/data/index.html
+Creating /ssd-vol0/logs/lipase2027h1/data/log.v3.json
+Creating /ssd-vol0/logs/lipase2027h2/data/index.html
+Creating /ssd-vol0/logs/lipase2027h2/data/log.v3.json
```
{{< image width="60%" src="/assets/ctlog/lipase.png" alt="TesseraCT Lipase Log" >}}
@@ -253,12 +257,14 @@ from any other running log instance, so I'll implement a `gen-roots` command:
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf gen-roots \
--source https://tuscolo2027h1.sunlight.geomys.org --output production-roots.pem
Fetching roots from: https://tuscolo2027h1.sunlight.geomys.org/ct/v1/get-roots
-2025/08/25 08:24:58 Warning: Failed to parse certificate, skipping: x509: negative serial number
+2025/08/25 08:24:58 Warning: Failed to parse certificate, carefully skipping: x509: negative serial number
+Creating production-roots.pem
Successfully wrote 248 certificates to tusc.pem (out of 249 total)
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf gen-roots \
--source https://navigli2027h1.sunlight.geomys.org --output testing-roots.pem
Fetching roots from: https://navigli2027h1.sunlight.geomys.org/ct/v1/get-roots
+Creating testing-roots.pem
Successfully wrote 82 certificates to tusc.pem (out of 82 total)
```
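The endpoint being scraped here is the regular RFC 6962 `get-roots` API, which returns the accepted root certificates as base64 DER blobs in JSON. As a hedged sketch (the `gen-roots` command above automates this, including the PEM conversion and the skipping of unparseable roots), the same fetch by hand could look like:
```
# Sketch only: fetch a log's accepted roots and wrap them into a PEM bundle.
curl -s https://tuscolo2027h1.sunlight.geomys.org/ct/v1/get-roots \
  | jq -r '.certificates[]' \
  | while read -r der; do
      printf -- '-----BEGIN CERTIFICATE-----\n'
      printf '%s\n' "$der" | fold -w 64
      printf -- '-----END CERTIFICATE-----\n'
    done > roots-by-hand.pem
```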
@@ -297,16 +303,16 @@ I can now implement a `gen-env` command for my tool:
```
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf -c tesseract-staging.yaml gen-env
-Generated /ssd-vol0/logs/lipase2025h2/data/roots.pem
-Generated /ssd-vol0/logs/lipase2025h2/data/.env
-Generated /ssd-vol0/logs/lipase2026h1/data/roots.pem
-Generated /ssd-vol0/logs/lipase2026h1/data/.env
-Generated /ssd-vol0/logs/lipase2026h2/data/roots.pem
-Generated /ssd-vol0/logs/lipase2026h2/data/.env
-Generated /ssd-vol0/logs/lipase2027h1/data/roots.pem
-Generated /ssd-vol0/logs/lipase2027h1/data/.env
-Generated /ssd-vol0/logs/lipase2027h2/data/roots.pem
-Generated /ssd-vol0/logs/lipase2027h2/data/.env
+Creating /ssd-vol0/logs/lipase2025h2/data/roots.pem
+Creating /ssd-vol0/logs/lipase2025h2/data/.env
+Creating /ssd-vol0/logs/lipase2026h1/data/roots.pem
+Creating /ssd-vol0/logs/lipase2026h1/data/.env
+Creating /ssd-vol0/logs/lipase2026h2/data/roots.pem
+Creating /ssd-vol0/logs/lipase2026h2/data/.env
+Creating /ssd-vol0/logs/lipase2027h1/data/roots.pem
+Creating /ssd-vol0/logs/lipase2027h1/data/.env
+Creating /ssd-vol0/logs/lipase2027h2/data/roots.pem
+Creating /ssd-vol0/logs/lipase2027h2/data/.env
```
Looking at one of those .env files, I can show the exact commandline I'll be feeding to the
@@ -316,7 +322,8 @@ Looking at one of those .env files, I can show the exact commandline I'll be fee
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ cat /ssd-vol0/logs/lipase2025h2/data/.env
TESSERACT_ARGS="--private_key=/ssd-vol0/enc/tesseract/keys/lipase2025h2.pem
--origin=lipase2025h2.log.ct.ipng.ch --storage_dir=/ssd-vol0/logs/lipase2025h2/data
---roots_pem_file=/ssd-vol0/logs/lipase2025h2/data/roots.pem --http_endpoint=[::]:16900"
+--roots_pem_file=/ssd-vol0/logs/lipase2025h2/data/roots.pem --http_endpoint=[::]:16900
+--not_after_start=2025-07-01T00:00:00Z --not_after_limit=2026-01-01T00:00:00Z"
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```
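The article doesn't show how these `.env` files are consumed by the service manager, so as a hedged sketch only: one way to drive a single `tesseract-posix` shard by hand from the generated file would be:
```
# Sketch only, not IPng's actual service setup: export the generated environment
# and start one shard with it.
set -a; . /ssd-vol0/logs/lipase2025h2/data/.env; set +a
tesseract-posix $TESSERACT_ARGS   # deliberately unquoted: TESSERACT_ARGS is a list of flags
```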
@@ -344,14 +351,14 @@ And thus, `gen-nginx` command is born, and listens on port `:8080` for requests:
```
ctlog@ctlog1:/ssd-vol0/enc/tesseract$ tesseract-genconf -c tesseract-staging.yaml gen-nginx
-Generated nginx config: /ssd-vol0/logs/lipase2025h2/data/lipase2025h2.mon.ct.ipng.ch.conf
-Generated nginx config: /ssd-vol0/logs/lipase2026h1/data/lipase2026h1.mon.ct.ipng.ch.conf
-Generated nginx config: /ssd-vol0/logs/lipase2026h2/data/lipase2026h2.mon.ct.ipng.ch.conf
-Generated nginx config: /ssd-vol0/logs/lipase2027h1/data/lipase2027h1.mon.ct.ipng.ch.conf
-Generated nginx config: /ssd-vol0/logs/lipase2027h2/data/lipase2027h2.mon.ct.ipng.ch.conf
+Creating nginx config: /ssd-vol0/logs/lipase2025h2/data/lipase2025h2.mon.ct.ipng.ch.conf
+Creating nginx config: /ssd-vol0/logs/lipase2026h1/data/lipase2026h1.mon.ct.ipng.ch.conf
+Creating nginx config: /ssd-vol0/logs/lipase2026h2/data/lipase2026h2.mon.ct.ipng.ch.conf
+Creating nginx config: /ssd-vol0/logs/lipase2027h1/data/lipase2027h1.mon.ct.ipng.ch.conf
+Creating nginx config: /ssd-vol0/logs/lipase2027h2/data/lipase2027h2.mon.ct.ipng.ch.conf
```
-All that's left for me to do is symlink these from `/etc/nginx-sites-enabled/` and the read-path is
+All that's left for me to do is symlink these from `/etc/nginx/sites-enabled/` and the read-path is
off to the races. With these commands in the `tesseract-genconf` tool, I am hoping that future
travelers have an easy time setting up their static log. Please let me know if you'd like to use, or
contribute, to the tool. You can find me in the Transparency Dev Slack, in #ct and also #cheese.
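For completeness, a minimal sketch of that last wiring step, assuming the stock `sites-enabled` layout mentioned above:
```
# Sketch only: enable the generated per-log read-path configs and reload nginx.
for f in /ssd-vol0/logs/lipase*/data/*.mon.ct.ipng.ch.conf; do
  ln -sf "$f" /etc/nginx/sites-enabled/
done
nginx -t && systemctl reload nginx
```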
@@ -494,7 +501,7 @@ allow the Static CT logs (regardless of being Sunlight or TesseraCT) to serve ve
## What's Next
-I need to spend a little bit of time thinking about rate limites, specifically write-ratelimits. I
+I need to spend a little bit of time thinking about rate limits, specifically write-ratelimits. I
think I'll use a request limiter in upstream NGINX, to allow for each IP or /24 or /48 subnet to
only send a fixed number of requests/sec. I'll probably keep that part private though, as it's a
good rule of thumb to never offer information to attackers.
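A hedged sketch of what such a limiter could look like; the zone size, rate, burst and upstream port are made-up placeholders (the real values are deliberately not published), and aggregating clients per /24 or /48 would additionally need a `map` or `geo` block to build the key:
```
# Sketch only: per-client rate limit on the CT write path in the fronting NGINX.
limit_req_zone $binary_remote_addr zone=ctwrite:10m rate=50r/s;

server {
    listen 443 ssl;
    server_name lipase2025h2.log.ct.ipng.ch;
    # wildcard *.ct.ipng.ch certificate directives omitted in this sketch

    location /ct/v1/add- {
        # Prefix match covers add-chain and add-pre-chain, the log's write path.
        limit_req zone=ctwrite burst=100 nodelay;
        proxy_pass http://localhost:16900;   # assumed TesseraCT write listener
    }

    location / {
        proxy_pass http://localhost:16900;
    }
}
```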


@@ -54,9 +54,25 @@ successor to Trillian's CTFE.
Our TesseraCT logs:
* A staging log called [[Lipase](https://lipase2025h2.log.ct.ipng.ch/)], incepted 2025-08-22,
-starting from temporal shared `lipase2025h2`.
+starting from temporal shard `lipase2025h2`.
* A production log called [[Halloumi](https://halloumi2025h2.log.ct.ipng.ch/)], incepted 2025-08-24,
-starting from temporal shared `halloumi2025h2`.
+starting from temporal shard `halloumi2025h2`.
+* Shard `halloumi2026h2` incorporated incorrect data into its Merkle Tree at entry 4357956 and
+4552365, due to a [[TesseraCT bug](https://github.com/transparency-dev/tesseract/issues/553)]
+and was retired on 2025-09-08, to be replaced by temporal shard `halloumi2026h2a`.
+## Archived logs
+Logs are archived in the [[c2sp.org/static-ct-api@v1.0.0](https://c2sp.org/static-ct-api@v1.0.0)] format,
+although if they were originally served through RFC 6962 APIs, leaves might miss the LeafIndex extension.
+IPng archives its static log shards at least two weeks after the _notafterlimit_, and removes the DNS
+entries at least two weeks after archiving.
+Our archived logs are:
+* halloumi2026h2.log.ct.ipng.ch - [[checkpoint](https://ct.ipng.ch/archive/halloumi2026h2/checkpoint)] - [[log.v3.json](https://ct.ipng.ch/archive/halloumi2026h2/log.v3.json)] - [[data](https://ct.ipng.ch/archive/halloumi2026h2/)]
+We also submit them to [[github.com/geomys/ct-archive](https://github.com/geomys/ct-archive)].
## Operational Details


@@ -56,6 +56,14 @@ can help broker a deal that is tailored to your needs.
You can read more about how we built our own colocation from scratch in this [[informative post](
{{< ref "2022-02-24-colo" >}})].
+### Self-Hosting
+For IPng it's important to take back a little bit of responsibility for our online presence, away
+from centrally hosted services and to privately operated ones. We are experts at self-hosting, with
+services such as [[Mastodon](https://ublog.tech)], [[Pixelfed](https://pix.ublog.tech/)],
+[[Loops](https://flx.ublog.tech/)], [[PeerTube](https://video.ipng.ch/)], [[Mail]({{< ref
+2024-05-17-smtp >}})] and myriad others.
## Project Design / Execution
{{< image width="15em" float="right" src="/assets/pdu19.png" alt="19 inch PDU" >}}


@@ -12,7 +12,7 @@ params:
showBlogLatest: false
mainSections: ["articles"]
showTaxonomyLinks: false
-nBlogLatest: 14 # number of blog post om the home page
+nBlogLatest: 20 # number of blog post om the home page
Paginate: 30
blogLatestHeading: "Latest Dabblings"
footer: "Copyright 2021- IPng Networks GmbH, all rights reserved"
@@ -20,10 +20,8 @@ params:
social:
email: "info+www@ipng.ch"
mastodon: "@IPngNetworks"
-twitter: "IPngNetworks"
linkedin: "pimvanpelt"
github: "pimvanpelt"
-instagram: "IPngNetworks"
rss: true
taxonomies: