I run an SKS keyserver for OpenPGP (GnuPG, GPGME, etc.) here.
|SKS pool status||(Here)|
|Initial Dump||Dec 10, 2017|
|Dumps Offered||(See below)|
|# of Keys on Initial Turnup||4768850|
|Location||Matawan, NJ, USA|
|Admin Key ID||0x748231EBCBD808A14F5E85D28C004C2F93481F6B|
|Peering Info||See this|
|Tier||2 (Eligible for inclusion as Tier 1)|
Several speed testing facilities are available, depending on which protocol you're looking to use.
iperf(2)/iperf3 are tools for testing the raw throughput of a connection/route. They have an advantage over the other speed tests offered here: very little protocol overhead, which gives you a more accurate reading. They also support UDP transmission/testing.
NOTE: your distribution may call the iperf binary "iperf2", or it may call the iperf3 binary "iperf". Be sure to run iperf --version to determine which.
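If you want to check both binaries at once, a quick sketch like the following just prints each tool's version banner when the binary exists (iperf 2.x writes its version to stderr, hence the redirect):

```shell
# Print the version banner of whichever iperf binaries are installed.
# iperf 2.x writes its version to stderr, hence the 2>&1 redirect.
for bin in iperf iperf3; do
    if command -v "$bin" >/dev/null 2>&1; then
        "$bin" --version 2>&1 | head -n 1
    fi
done
```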
I offer an iperf(2) instance (both TCP and UDP, IPv4 and IPv6) running on default ports (5001 TCP/UDP). To perform a TCP test:
iperf -c mirror.square-r00t.net
And for a UDP test:
iperf -c mirror.square-r00t.net -u
I offer an iperf3 instance (both TCP and UDP, IPv4 and IPv6) running on default ports (5201 TCP/UDP). To perform a TCP test:
iperf3 -c mirror.square-r00t.net
And for a UDP test:
iperf3 -c mirror.square-r00t.net -u
To test via HTTP or HTTPS, you can use one of the following:
# 1GB HTTP fetch via curl:
curl -Lo /dev/null http://mirror.square-r00t.net/speedtest/1GB.dat

# 10MB HTTPS fetch via wget:
wget --output-document=/dev/null https://mirror.square-r00t.net/speedtest/10MB.dat
Rsync can actually provide some helpful statistics. The files can be found at rsync://mirror.square-r00t.net/speed/. Here are some examples:
# Test 100MB of data:
rsync --info=progress2 rsync://mirror.square-r00t.net/speed/100MB.dat .
rm -f 100MB.dat

# Test 1GB of data with compression:
rsync --info=progress2 -z rsync://mirror.square-r00t.net/speed/1GB.dat .
rm -f 1GB.dat
NOTE: when downloading, you'll probably want to use the dated directory (I use 2017-09-01 in the examples below) rather than the current directory. If I happen to be updating the keydump while you're downloading, you'll have to start all over and that's a waste of time for you and a waste of bandwidth for both of us.
I offer FULL keydumps. Currently these are generated every day at 1000 UTC. However, since they are generated on an off-site private SKS peer (the keyserver has to be brought down while the dump takes place), they then need to be rsync'd to this server, and that can take some time (currently about 1-3 hours).
They are currently (Sep 1, 2017) about 8.7GB in total (and 9.5GB uncompressed) but are in line with the upper bound number of keys reported by the SKS Keyservers Statuses. (Again - at the time of writing this, Sep 1, 2017, that's 4772034 keys.) You can, of course, look at however many keys I have available by viewing the statistics/status page (but this may be slightly off from the dump, as new keys may have synced before the dump finished).
You can fetch with something like this:
mkdir -p /var/lib/sks/dump
wget -P /var/lib/sks/dump --continue \
    -r \
    --page-requisites \
    --execute robots=off \
    --timestamping \
    --level=1 \
    --cut-dirs=3 \
    --no-host-directories \
    http://sks.mirror.square-r00t.net/dumps/2017-09-01/
They are also available via direct rsync at rsync://sks.mirror.square-r00t.net/sks/. To use rsync, you would call it via something like this:
mkdir -p /var/lib/sks/dump
rsync -a --info=progress2 rsync://sks.mirror.square-r00t.net/sks/2017-09-01/. /var/lib/sks/dump/.
NOTE: Compression is currently disabled while I sort out some issues with memory consumption on the compressor box. (2017-09-14)
I use lrzip to compress the dump files. It tends to yield a little better speed/compression ratio than XZ. I may switch to XZ in the future, though; I need to do some end-result filesize comparisons.
To decompress the dumps, you'll need to do the following (after installing lrzip - it should be in your distribution's repositories):
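A minimal sketch of that decompression step, assuming the dumps landed in /var/lib/sks/dump and that lrunzip (lrzip's decompression front-end) is on your PATH:

```shell
# Decompress every .lrz dump file in place; lrunzip turns
# foo.pgp.lrz into foo.pgp alongside the original.
DUMP_DIR=/var/lib/sks/dump
for f in "$DUMP_DIR"/*.lrz; do
    [ -e "$f" ] || continue   # glob didn't match: nothing to decompress
    lrunzip "$f"
done
```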
You can then confirm the checksums of the files:
cd /var/lib/sks/dump
md5sum -c metadata-keydump.*.txt
Once the dumps have been downloaded and decompressed, you need to import them. We'll be doing a full import, so you don't need to keep the dump files around afterwards (as you would with a fast import). This assumes the user your sks db service runs as is called sks - if it isn't, change the sudo command accordingly.
And by the way, each of these can take hours to run, so don't fret if it seems to take a while.
sudo -u sks -i
sks build /var/lib/sks/dump/*.pgp -n 10 -cache 100   # See Note #1
sks cleandb
sks pbuild -cache 20 -ptree_cache 70                 # See Note #2
NOTE 1: If this fails (typically it's because you have a small amount of memory), try changing -n to 5 (or lower) and -cache to 50.
NOTE 2: If this fails (less likely than the build), try changing -cache to 10 and -ptree_cache to 50.
I wrote my own script to do this. It's written in Python 3 and uses only stdlib modules (unless you want to use lrzip compression - if you do, you'll need the module (source)). You'll also need to build and install a git checkout of lrzip (the current release version as of Sep 1, 2017 doesn't play nicely with the Python bindings). If you're on Arch, I have packaged both the git version and the Python module.
Once you're all set there, you can download the script; it's in my OpTools repository (direct raw link).
Once you have that downloaded, you'll probably want to configure it. On first run (even a --help), it will create a ~/.sksdump.ini - be sure to edit that with your preferred values. Once that's done, just set it to run every day (or every week, whatever) from cron. I welcome feedback/bugs. Make sure you read the "IMPORTANT:" note at the beginning of the INI file, too; it provides some tips and hints that will let it play more nicely on your system. It also only works on systemd systems, but since distros are moving there anyway, that shouldn't be an issue for most people.
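As a concrete starting point for the cron setup, an entry along these lines would run it daily - note that the install path and time here are just examples, not anything the script requires:

```
# Run the SKS dump script every day at 04:00 (path is hypothetical).
0 4 * * * /usr/local/bin/sksdump.py
```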
Contact (GPG key info here)