// Needed this to get SSL working properly... https://blog.ldev.app/running-wordpress-behind-ssl-and-nginx-reverse-proxy/
# Prerequisites:
Install this script in the router and use the scheduler to run it as frequently as you wish.
It will automatically keep your network online by increasing the route distance of the primary, allowing your secondary to take over routing.
When the primary connection returns, its route distance is reduced again, so it takes back over from the secondary.
Scenario: We have two backend webservers powering our website via nginx. We have some risky new feature we want to roll out.
Can we know which subdomain is being used, and choose a specific backend just for that subdomain?
Yes, see http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
An example use case can be seen here too: https://serverfault.com/questions/1078484/nginx-stream-block-with-wildcard-filtering-of-subdomains
But let’s lab it out.
Let’s get a basic python server to act as our origin service:
from flask import Flask, request
This will listen on ports 8001 to 8004 and reply with subdomain and port number as it sees it.
It’ll be useful to know if our rules worked.
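A minimal sketch of such an origin server, reconstructed from the description above (this is not the exact original; running one instance per port via a command-line argument is my assumption):

```bash
# Write a tiny Flask app that echoes the Host header and its own port,
# then start one instance per port (8001-8004).
cat > origin.py <<'EOF'
import sys
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # request.host carries the subdomain the client asked for
    return f"host={request.host} port={request.environ.get('SERVER_PORT')}\n"

if __name__ == "__main__":
    app.run(port=int(sys.argv[1]))
EOF

for port in 8001 8002 8003 8004; do
    python3 origin.py "$port" &
done
wait
```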
After using acme.sh to get a cert quickly for our domain, we can customize the nginx conf to wrap our backend service with SSL, and then use our stream proxy with this subdomain selection mechanism that we are testing:
user www-data;
Finally, let’s create the backends.map file. It could be named anything, as long as it does not end in .conf; otherwise nginx would try to parse it as directives and error out. The file is merely a way to externalize the map for easy editing without affecting the rest of the config.
y.kebcom.com y;
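To make the mechanism concrete, here is a sketch of the stream block that consumes backends.map. This is not the original config: the file paths, upstream names, and ports are illustrative, and each upstream is assumed to be an SSL-terminating server block in front of the flask apps above.

```bash
# Include this file at the top level of nginx.conf (not inside http{}).
cat > /etc/nginx/stream.conf <<'EOF'
stream {
    # ssl_preread exposes the SNI hostname without terminating TLS here
    map $ssl_preread_server_name $selected_backend {
        include /etc/nginx/backends.map;   # e.g. "y.kebcom.com y;"
        default x;
    }

    upstream x { server 127.0.0.1:8443; }
    upstream y { server 127.0.0.1:8444; }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $selected_backend;
    }
}
EOF
```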
And this works! Thanks to RFC 6066 Section 3 (Server Name Indication), this was easy to pull off.
Last year I created secureput.com to stop typing in the same long passwords over and over, although I never wrote about it on this website.
My Windows version is written in Go and has an installer, but I still could not streamline the creation of a service and eventually ran out of steam to keep hacking on it. It was easier just to put a shortcut in shell:startup and call it a day.
This is okay but not ideal because:
Before servicifying secureput, be sure to run it at least once directly so that you can perform the pairing process. This is not possible in service mode!
For Windows users there is a very easy way to turn any EXE into a service: NSSM, the Non-Sucking Service Manager.
Get it and put it somewhere on your computer and update your PATH variable so it is available.
nssm install SecurePut
nssm start SecurePut
Now check your mobile app, and you should see your host.
Unfortunately, for some unknown reason SecurePut cannot perform its main duty of typing in text for me when run as a service, so I have to back out of this idea, but at least the above serves as an example of using nssm.
I suspect this has something to do with the change to Windows service security described here: https://www.coretechnologies.com/blog/windows-services/interact-with-desktop/
## Templated statements to execute when creating a new table.
We can alter this to fit a timescaledb setup like so:
## Templated statements to execute when creating a new table.
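As a sketch, paraphrasing the Telegraf outputs.postgresql documentation (the config path and the one-week chunk interval are just examples, not the exact original), the altered section could look like this:

```bash
cat >> /etc/telegraf/telegraf.conf <<'EOF'
[[outputs.postgresql]]
  ## Templated statements to execute when creating a new table.
  create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1 week', if_not_exists => true)''',
  ]
EOF
```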
Here is what I am using on windows to send my nvidia gpu metrics:
C:\Program Files\Telegraf\telegraf.d\telegraf.conf
[global_tags]
As of this writing, you want to view the official Nvidia cuda installation guide:
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Use that and avoid installing the default Ubuntu versions of the drivers; they are very outdated, and the CUDA toolkit Ubuntu ships is incompatible with the driver it ships. The solution is to use the official NVIDIA Ubuntu repos as described in the link above.
As of this writing, this script captures the essence of what the guide dictates:
#!/bin/bash
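The original script is not shown in full above; a sketch of what it boils down to follows (the keyring URL/version and the ubuntu2204 path are assumptions, so check the guide for the values matching your release):

```bash
#!/bin/bash
set -euo pipefail

# Add NVIDIA's own apt repo via the keyring package
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Pull in the CUDA toolkit and a compatible driver from NVIDIA's repo,
# avoiding Ubuntu's outdated packages
sudo apt-get install -y cuda

sudo reboot
```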
After rebooting, check that the nvidia-smi and nvcc commands are available.
In “Creating a retention policy in TimescaleDB after the fact and realizing it's not even a hypertable” we discussed the fact that the Telegraf TimescaleDB Postgres output plugin does not properly create hypertables automatically. I wrote the following query to reveal them easily:
SELECT
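The query is truncated above; a sketch of one way to write it (assuming TimescaleDB 2.x information views; the telegraf database name is an assumption) is:

```bash
psql -d telegraf <<'SQL'
-- List plain tables in public that are not registered as hypertables
SELECT t.tablename
FROM pg_tables t
LEFT JOIN timescaledb_information.hypertables h
       ON h.hypertable_schema = t.schemaname
      AND h.hypertable_name   = t.tablename
WHERE t.schemaname = 'public'
  AND h.hypertable_name IS NULL
ORDER BY t.tablename;
SQL
```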
At this point you can use the truncation method to quickly make them into hypertables, e.g.:
BEGIN;
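For one such table, the truncation approach looks roughly like this (the cpu table name and telegraf database are examples); note that TRUNCATE discards existing rows, so only do this for data you can afford to lose:

```bash
psql -d telegraf <<'SQL'
BEGIN;
TRUNCATE TABLE cpu;
SELECT create_hypertable('cpu', 'time', if_not_exists => TRUE);
COMMIT;
SQL
```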
Materials used:
Step 1: Dropping the 12v “Arm” port to 5v
The first step in the process is to drop the 12v input to 5v to power the Adafruit Trinket.
Step 2: Spoofing the Dell Power Supply
The Trinket is then used to spoof the Dell power supply’s 1-Wire-based identification protocol. The Trinket is programmed to emulate a DS2502 - 1kbit EEPROM, which is used by Dell laptops to identify the power supply.
Step 3: Turning on the Power to the miniPC
Once the spoof is ready, the Trinket turns on the power to the miniPC. This is done by connecting the Trinket to the main power button of the miniPC. The power is then supplied to the miniPC, and it can be used as normal.
Step 4: Printed Assembly
Assembling everything onto the 3d print
Source Code:
Code Repo: https://github.com/comma-hacks/dell-power-supply
Model Repo: https://github.com/comma-hacks/accessories/tree/master/meshes/backpack/dell-power-supply
This is fine but I realized with a bit more effort we could eliminate the router, reducing complexity and maybe improving latency further. Notice the much cleaner look too:
This guide sparked a more sophisticated bit of tooling, which can be found at https://github.com/kfatehi/comma-body-hacks, but the following techniques apply to Linux machines in general, so it is worth publishing them before pigeonholing them further into highly customized scripts. I hope you find this information in a random Google search and that it helps you!
Log in to Comma Connect to get your dongle ID: https://connect.comma.ai
export DONGLE_ID=<DONGLE ID>
ssh -o ProxyCommand="ssh -W %h:%p -p %p %h@ssh.comma.ai" comma@$DONGLE_ID
sudo ip addr add 192.168.0.1/24 dev eth0
cat <<EOF > /tmp/dnsmasq.conf
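The contents of that dnsmasq.conf are not shown above; a sketch of what it could contain follows (the DHCP range is an assumption, and the lease file path matches the one read by the SSH command below):

```bash
cat <<'EOF' > /tmp/dnsmasq.conf
interface=eth0
bind-interfaces
dhcp-range=192.168.0.10,192.168.0.100,12h
dhcp-leasefile=/tmp/dnsmasq.leases
EOF
```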
sudo dnsmasq -d -C /tmp/dnsmasq.conf
ssh -t -o ProxyCommand="ssh -W %h:%p -p %p %h@ssh.comma.ai" comma@$DONGLE_ID ssh -t "body@\$(cat /tmp/dnsmasq.leases | awk '{print \$3}')"
nmcli dev wifi list
Further work on this topic can be found here https://github.com/kfatehi/comma-body-hacks
The results:
PID: 2081628, Age: 00:10:49
Let’s use the cron method described in this StackOverflow answer from dland to solve this. dland writes:
To be honest, logging long-running queries via cron does seem like the most KISS solution to me:
psql -tc "select now() as t, pid, usename, query from pg_stat_activity where state != 'idle'" > /tmp/pg.running.txt
Then you can spot the queries that stay there for several hours and run EXPLAIN on them.
This means we will end up creating a separate log file so we should think about how to organize this information so that it’s useful and not overwhelming. To this point, I think long running queries should instead be dumped into the default log file occasionally, so that operators can grep the same log file, except that now it will be more enriched.
We need to decide how frequently to invoke the check. I think 1 minute is fine; that’s already too long for a database query. If something runs for a long time, it will get printed over and over, once every minute.
We do not want to see queries that finish in under a minute but happen to be active at the time of the check, so we’ll only log queries whose runtime exceeds 1 minute.
Finally, I would like to avoid using an external program and do this all within Postgres; this way we can use the existing logging functions and keep things self-contained.
annotate method which injects a SQL comment into a query. We are looking for an automated solution, however, since our app has a lot of queries and we do not wish to edit all (any) of them.
Nowadays we can chat with ChatGPT to get a jumpstart on unfamiliar problem-spaces. It did not disappoint in producing a solution. Note that this solution uses the pg_cron extension, which runs inside Postgres itself, so I am adding the setup steps and then we’ll get into installing and using ChatGPT’s solution. Let’s begin:
Create Dockerfile
FROM postgres:15.1-bullseye
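The rest of the Dockerfile is not shown above; a sketch of one that adds pg_cron (the package name assumes the PGDG apt repo that the official postgres image already configures):

```bash
cat > Dockerfile <<'EOF'
FROM postgres:15.1-bullseye
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-15-cron \
 && rm -rf /var/lib/apt/lists/*
EOF
```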
Build it. Note I tried to use postgres:11 but apt was broken there, so we’ll be using postgres 15. Depending on when you’re reading this, you may have the same issue with postgres 15, and maybe bumping the versions will work for you too, future person.
docker build -t pgwcron .
Create postgresql.conf
listen_addresses = '*'
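Only the first line of the file is visible above; for pg_cron to load, the config presumably also needs at least something like this (a sketch, not the exact original file):

```bash
cat > postgresql.conf <<'EOF'
listen_addresses = '*'
shared_preload_libraries = 'pg_cron'   # pg_cron must be preloaded at server start
cron.database_name = 'postgres'        # database the cron metadata lives in
EOF
```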
Start the server
docker run --rm --name trace-postgres -v $PWD/postgresql.conf:/etc/postgresql/postgresql.conf -e POSTGRES_PASSWORD=pass pgwcron -c 'config_file=/etc/postgresql/postgresql.conf'
Start a client
docker exec -it $(docker inspect --format="{{.Id}}" trace-postgres) psql -U postgres
Install the pg_cron extension by executing the following SQL statement:
CREATE EXTENSION pg_cron;
And now we can use ChatGPT’s solution. Note that I had to fix some small errors it made, so the solution you see here is tested working:
This function will use the RAISE NOTICE statement to output a message for each active query in the pg_stat_activity view that has been running for more than 1 minute, with the current timestamp, process ID, username, and query text. These messages will be written to the PostgreSQL log file, which is specified in the log_destination configuration parameter in the postgresql.conf file.
CREATE OR REPLACE FUNCTION log_active_queries()
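The full function body is not reproduced above; a sketch reconstructed from the description and the log output further down (not necessarily ChatGPT’s exact code) could look like this:

```bash
psql -U postgres <<'SQL'
CREATE OR REPLACE FUNCTION log_active_queries()
RETURNS void AS $$
DECLARE
  rec RECORD;
BEGIN
  -- Emit a NOTICE for every query that has been active for more than a minute
  FOR rec IN
    SELECT pid, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '1 minute'
  LOOP
    RAISE NOTICE 'Active queries as of % at pid %: %', now(), rec.pid, rec.query;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
SQL
```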
To schedule this modified function to run every minute, you can use the pg_cron extension as follows:
SELECT cron.schedule('log_active_queries', '* * * * *', $$SELECT log_active_queries();$$);
To stop a scheduled job that was created using the pg_cron extension, you can use the cron.unschedule function as follows:
SELECT cron.unschedule('log_active_queries');
Now let’s create some long running jobs with annotations and see if it gets logged. I will run this a few times simultaneously using multiple terminal shells.
docker exec -it $(docker inspect --format="{{.Id}}" trace-postgres) psql -U postgres -c 'SELECT pg_sleep(120) /* Hello */;'
Looks like it works! In our logs we can see the long running query along with its tracer comment:
NOTICE: Active queries as of 2022-12-28 01:18:00.057665+00 at pid 114: SELECT pg_sleep(120) /* Hello */;
We could now improve the function to show other potentially useful columns from pg_stat_activity, or make other changes. But let’s move on from here having proved the approach.
Here is a clean monkey patch which annotates SQL queries executed by the PG::Connection class (the underlying connection used by ActiveRecord):
Create sqlannotator.rb in config/initializers
# https://blog.daveallie.com/clean-monkey-patching
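Only the first line of the initializer is shown above; the gist of the clean-monkey-patch approach (the module name and comment format here are illustrative, not the exact original) is to prepend a module onto PG::Connection so every query picks up a tracer comment:

```bash
cat > config/initializers/sqlannotator.rb <<'RUBY'
# https://blog.daveallie.com/clean-monkey-patching
module SqlAnnotator
  def async_exec(sql, *args, &block)
    # Append a tracer comment so the query is identifiable in pg_stat_activity
    super("#{sql} /* pid:#{Process.pid} */", *args, &block)
  end
end

PG::Connection.prepend(SqlAnnotator)
RUBY
```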
To test it, perform any SQL query, or directly use the connection instance like so:
ActiveRecord::Base.connection.raw_connection.async_exec("SELECT now();")
This works! You can emulate a lockup situation easily like so, and then check if it’s being logged.
BEGIN;
User.all
It should get stuck… Now use a psql shell to check pg_stat_activity:
app=# select pid,query from pg_stat_activity where state = 'active';
There’s our annotation! We’re done. Just one more thing and that’s cleaning up this mess.
At this point I wanted to cancel the queries and found that a nice answer from Andong Zhan was helpful:
What I did is first check what are the running processes by
SELECT * FROM pg_stat_activity WHERE state = 'active';
Find the process you want to kill, then type:
SELECT pg_cancel_backend(<pid of the process>)
This basically “starts” a request to terminate gracefully, which may be satisfied after some time, though the query comes back immediately.
If the process cannot be killed, try:
SELECT pg_terminate_backend(<pid of the process>)
this is an idea suggested to me by buu in the postgres IRC channel (thanks). let’s prove it step by step:
docker pull postgres:11
docker run --rm -it postgres:11 cat /usr/share/postgresql/postgresql.conf.sample > postgresql.conf
focus on the sections talking about logs and ignore the rest. primarily, we want to set log_statement = 'all', as that is what ensures our queries are printed.
docker run --rm --name trace-postgres -v "$PWD/postgresql.conf":/etc/postgresql/postgresql.conf -e POSTGRES_PASSWORD=pass postgres:11 -c 'config_file=/etc/postgresql/postgresql.conf'
we should see postgres start up in the container logs.
docker exec -it $(docker inspect --format="{{.Id}}" trace-postgres) psql -U postgres -c 'select now();'
the logs should show this query:
2022-12-22 04:44:36.654 GMT [87] LOG: statement: select now();
docker exec -it $(docker inspect --format="{{.Id}}" trace-postgres) psql -U postgres -c 'select now()/*trace_parent=job:12345*/;'
now the logs show this query including the comment!
2022-12-22 04:50:18.032 GMT [106] LOG: statement: select now()/*trace_parent=job:12345*/;
Thus it appears that it is possible. All that remains is to facilitate injection of a smart identifier into the query interface of the application’s postgres client.
Further questions…
All around us, every moment, we miss out on potential to know the truth now and in the future. We have so much richness all around us that we think this is fine and normal, to capture even a fraction of it is hard to imagine. Our vision for example, can only capture a small fovea at some frequency that we reproduce in video in terms of frames per second… capture all of it and you don’t get an emergent pattern, you get the product of a fully exposed aperture: pure washout due to overabundance of information causing clipping in the sensors.
Nevertheless, where it makes sense, we have the means to capture data, and the ability to capture that which is otherwise left to oblivion (even photography falls under this, following my previous example, and it could be part of what makes instagram so appealing as compared to twitter, the trash heap of the internet) is like magic.
NASA’s New Horizons might just be my favorite IoT device in the universe.
Enjoying the aesthetic and beauty of a photograph on your phone, or the immersion of a 6-DoF VR experience in a photogrammetrically captured environment, are great examples of creating future value through the capture of that which is otherwise ephemeral. There is beauty in automated feedback-loop control systems, or the manual work of a designer making important decisions based on observed patterns of data, using tools like linear regression to be prepared.
If IoT wasn’t such a security problem, I’d probably never have quit, but I’ve written about how dwelling on security issues is paralyzing in a bad way. Especially for me, because I see connected sensors as an important substrate (or overlay) to my other real-world projects. Not all sensors exist or are affordable, of course, and so we make up for it with failure, collecting those experiences in our brains. Since IoT is inherently less secure than no IoT, when being ultra-conservative I tend to throw the baby out with the bathwater. A better idea is to raise the baby well, with good security posture in mind, and use the bathwater in the garden.
My gainful employment over the past couple years (I think I started this full-time job around September 2021) relit the spark and has kept me sharpening the tools without really realizing it so consciously. Particularly, TimescaleDB, Grafana, and Telegraf, have become as obvious as the hammer, nail, and screwdriver. The pressures (amount of data, as well as the hammering of users of the grafana dashboards) have caused me to figure out all the details of Timescale that a small operation might not need to know about: creating the right indices, setting proper retention policies, managing jobs to move to alternative hard drives by means of postgres’ tablespaces.
I’m grateful about it because if there’s any skill worth having (and that stays useful in industry), it’s knowing how to gather metrics and I don’t care if it’s about some software system, hardware system, done with a computer, or done with your eyes, pen, paper and brain.
The default OS provided by the official Raspberry Pi Imager software still works on units this old. I still went for the “legacy” Buster lite (no desktop) image, but I first tested the Bullseye desktop version and it worked fine, albeit very slow on the 2011 model. Even the configuration settings in the Imager program, where you can set hostname, wifi, public key, etc, worked perfectly.
NodeJS does not build official armv6 binaries after version 11. If you need NodeJS on a Pi this old, use this technique:
curl https://nodejs.org/dist/latest-v11.x/node-v11.15.0-linux-armv6l.tar.gz | sudo tar xzvf - --strip-components=1 -C /usr/local
I ended up verifying several old raspberry pis I have salvaged from various use cases around the house, and even an old Pi Zero (very first version) verified true with statement #1 above. It, too, is an armv6l and so statement #2 is also true.
Finally, do not forget to use overlay-fs feature before leaving your Pi to operate for years on end! This will help prevent your SD card from wearing out. Here’s a discussion about that feature: https://forums.raspberrypi.com/viewtopic.php?t=294427
Unrelated to Earthly, I hope this PR gets merged, where I push docker a bit further to make AGNOS (the ubuntu-based OS that powers the Comma 3) buildable on an M1 machine.
A friend informed me that this chip screams. I didn’t believe them at first, but after building a few things with it (AGNOS, openconnect, and others) I ended up asking the internet why this chip is so fast!
AGNOS usually takes a day to build with an emulated arm environment from a Ryzen 7 chip. On my M1 Air it took about 20 minutes. I think it spent more time sparsifying the final disk image than actually compiling…
I need to see “your truth” with my own eyes before I cargo-cult your imposed commandments to perform some upgrades. If I’m vulnerable, prove it, else how will I know I solved the problem? Just keep taking your word for it, yeah?
This experience had synchrony with the disputations relating to COVID-19, specifically how we cargo-cult the vaccinations without proof of vulnerability nor proof of patching of said vulnerability. I don’t talk about politics on this site, but damn, this was such a funny parallel that it’s impossible not to point it out.
Some of us just need evidence… we need to exploit or at least see proof of exploitability, before drastic action is warranted, for the drastic action could cause undesired side effects to a system that is otherwise working fine. This is scientific. Anything else is unscientific. I am so tired of appeal to authority or majority “consensus” being considered anything but fallacious. Sadly not everyone takes Logic.
When I’m lost I turn to IRC. Here’s my nice experience sharing this (extending the brainstorming beyond my team) with #elasticsearch on Libera.Chat IRC:
keyvan:
good afternoon. im looking for help reproducing the log4shell hack on my ES 5 instance. i have enabled slowlogger and can see my jdni-infected query in the logs, but it does not seem to call out to the external server. i have verified the external setup (im using log4shell.huntress.com) with a small example project containing only various log4j versions and it does work; but it seems elasticsearch isn’t affected? but it should be? thanks
wwalker:
keyvan: does ES 5 run a new enough log4j to be affected? ES5 seems like it probably ran log4j rather than log4j-2
keyvan:
wwalker: i think it uses log4j 2.9, i know 5.6.10 there’s a PR that bumps it to 2.11. there is a table here that shows 5 is supposed to be vulnerable https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476 but what is interesting is that it appears if on Java 9 (on ES6) you’re not vulnerable? so i am wondering if i am not able to repro because im on Java9 on ES5…
keyvan:
i will be downgrading to Java 8 to verify this (the “more vulnerable java”) according to that table…
keyvan:
haHA!!!!!!!! yep, my ES5 with java 8 was vulnerable… but with Java 9 was NOT..
Here we have our basic Log4J tester project in which the log4j dependency can be changed easily in the build.gradle file: https://github.com/kfatehi/log4shell-test-log4j-intellij-idea-project
Then, for the elasticsearch side of things, we’ve got here a java8 container which is vulnerable to the JNDI attack: https://github.com/kfatehi/docker-elasticsearch5-java8
Finally, we can swap Java 8 for Java 9 (link in the Dockerfile) and witness that the attack does not work.
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
curl -XPUT 'http://localhost:9200/my-index' -d '{
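Those two requests are truncated above; the idea (the exact settings payload here is a sketch, not necessarily what I used) is to drop the search slowlog thresholds to zero so that every query body, JNDI string included, gets handed to log4j:

```bash
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
  "index.search.slowlog.threshold.query.warn": "0s",
  "index.search.slowlog.threshold.fetch.warn": "0s"
}'
```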
Apple has documented these details here, but it boils down to adherence to strict encoding and serving requirements.
I am encoding like this now for iPhone embedding. The filter-complex is there because it also appears to me that the resolution must be standard; when I crop out a strange resolution (ironically, I do this directly on an iPhone…) I seem to have to fill it back in, so that’s what this filter does, using a blur effect:
#!/bin/bash
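The full script is truncated above; a sketch of the kind of ffmpeg invocation it wraps (the input/output names and the 1920x1080 target are assumptions) is:

```bash
#!/bin/bash
# Blur-fill an oddly-cropped clip back to a standard 1920x1080 frame and encode
# it in an iPhone-friendly way (H.264 + AAC, faststart for progressive playback).
ffmpeg -i input.mov \
  -filter_complex "[0:v]split[bg][fg];[bg]scale=1920:1080,boxblur=20[blurred];[fg]scale=1920:1080:force_original_aspect_ratio=decrease[scaled];[blurred][scaled]overlay=(W-w)/2:(H-h)/2" \
  -c:v libx264 -pix_fmt yuv420p -movflags +faststart -c:a aac output.mp4
```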
I ended up creating this issue for Hexo https://github.com/hexojs/hexo/issues/4829 to support range headers.
It was easy to figure out what was going on by using Wireshark: the embedded server was disregarding Apple’s strict expectation of adherence to Range headers.
Here we see Hexo’s development server disregarding the range headers:
When using the http-server module we can see the range header is respected:
The project I wished to PoC was to see how we could use the driver monitoring model (which is used to detect the distraction level of the driver) to guide a reprojection of one of the back-facing cameras so as to achieve the illusion of transparency.
There is a nice paper with three prototypes here: https://dan.andersen.name/publication/2016-09-19-ismar
These implementations rely on depth information, but on the comma 3 platform we do not necessarily have depth data out of the box. I say it this way because there are numerous techniques to acquire depth, see comma/depth10k.
Working directly on the C3 is possible and easy to do. I will document the workflow. These days even the hardcore tmux/vim users like me are tempted into the Remote-SSH extension in VSCode, so first we’ll fix that.
The /data/media directory in the C3 is where you have persistent read-write access to the large NVMe drive. I create a developer folder in there.
When I boot the C3, before I do any work, I run a setup script.
The transform bit is application-specific to how I was learning about the matrix transformation. Say I want to work on that again; this is its content:
{
This matrix is applied to the shader and was an opportunity to manipulate the surface to which the camera was projected… I learned that these were the meanings of the fields by way of editing the transform while my program was running:
the transform:
Anyway, on the C3, when it boots, everything runs in tmux. We want to stop the UI. We also want to stop updated because it can randomly reset our working tree.
Now is a good time to replace openpilot with a branch containing the code for this particular experiment.
cd /data && rm -rf openpilot && git clone --recursive --branch seethru https://github.com/kfatehi/openpilot
You can tmux attach and Ctrl-C to kill it. Who runs tmux on boot? Systemd does; there is a service called comma that launches tmux with the launch script inside the first shell.
All services will be stopped, and now we can leverage openpilot’s interprocess communications ourselves, with purpose. Let’s block two services, and manually start the camera and driver monitoring service. Do this in different tmux windows.
BLOCK="ui,updated" ./launch_openpilot.sh
cd /data/openpilot/selfdrive/camerad && ./camerad
cd /data/openpilot/selfdrive/modeld && ./dmonitoringmodeld
Finally, our iterative command to compile and run our binary:
cd /data/openpilot/selfdrive/ui && scons -u -j4 && ./seethru
The file to hack on is openpilot/selfdrive/ui/qt/widgets/seethrucameraview.cc, or view it on GitHub here.
The comments I have written in seethrucameraview.cc describe why the odds of pulling this off properly are low, but that is ameliorated, as far as the use case of driving in a car goes, by the fixed positions of the driver’s head and the C3’s mounted location. So it’s possible, and probably worth continuing in order to achieve transparency through the C3’s display while driving with it.
acme.sh is worth a mention because it saved me one day when I was desperately searching for a tool I could use without a lot of fumbling.
I did all of this as root on a Vultr VM. Install acme.sh per https://github.com/acmesh-official/acme.sh/wiki/How-to-install
Let’s experiment with the DNS API feature of acme.sh per the documentation here: https://github.com/acmesh-official/acme.sh/wiki/dnsapi
To take advantage of this, we must start using Cloudflare for DNS. We want to use it for a few reasons, one being that not every DNS provider is supported by acme.sh, hence Cloudflare.
If your domain belongs to some other registrar, you can switch your nameservers over to Cloudflare.
This is important as Cloudflare’s DNS API is well-supported by acme.sh, as this article will demonstrate.
Generate an API token at Cloudflare here https://dash.cloudflare.com/profile/api-tokens
This is one of three inputs required by acme.sh; in these next few steps we wish to establish these environment variables. Once you issue the cert, they will be stored in acme.sh’s configuration for future use.
export CF_Token="" # API token you generated on the site. It should have Zone.DNS edit permission for at least one Zone being the domain you're generating certs for
Once you have set your API token, the following will help you get the remaining two. You may want to apt install -y jq if you’re pasting these commands so the JSON is parsed out for you.
curl -X GET "https://api.cloudflare.com/client/v4/zones" -H "Authorization: Bearer $CF_Token" | jq
If you can’t read jq selectors, you will now, as I’m showing you which key paths get you the AccountID and ZoneID below:
zone id: ... | jq '.result[0].id'
account id: ... | jq '.result[0].account.id'
Export those variables too and now you can move on to issuing the cert.
acme.sh --issue -d keyvan.pw -d '*.keyvan.pw' --dns dns_cf
We got our cert! Install apache now too, enabling SSL while we’re at it.
apt install -y apache2
Decide on a location where the certs should be installed by acme.sh and read from by apache. I’m choosing the following:
mkdir -p /etc/ssl/keyvan.pw
Make apache point to the files that will exist there very soon. I did this in the default-ssl virtual host apache creates:
SSLCertificateFile /etc/ssl/keyvan.pw/keyvan.pw.cer
Now we will use acme.sh to install the certs. acme.sh stores all this information for future runs; it’s nice like that.
acme.sh --install-cert -d keyvan.pw -d '*.keyvan.pw' \
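The full command is cut off above; it presumably looks something like this (the paths mirror the Apache config above, and the reload command matches the renewal log output below):

```bash
acme.sh --install-cert -d keyvan.pw -d '*.keyvan.pw' \
  --cert-file      /etc/ssl/keyvan.pw/keyvan.pw.cer \
  --key-file       /etc/ssl/keyvan.pw/keyvan.pw.key \
  --fullchain-file /etc/ssl/keyvan.pw/fullchain.cer \
  --reloadcmd      "systemctl reload apache2"
```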
Confirm it worked by hitting the website. Did you even bother creating your A record yet? I hadn’t at this point. This is a nice aspect of using the DNS API: you don’t actually need a server yet, you simply prove ownership of the DNS.
Pretty amazing… people used to pay a lot of money and go through a lot more hassle to get this capability. But now within minutes I have proper wildcard and naked domain encryption.
Let’s install the cron so this automatically renews.
0 0 * * * acme.sh --cron
Nice. We can test it with --force too, which I have done. It seems that acme.sh will do everything per the previous commands upon renewal, including running your reloadcmd, e.g.:
[Sun 12 Sep 2021 02:38:25 AM UTC] Your cert is in: /root/.acme.sh/keyvan.pw/keyvan.pw.cer
[Sun 12 Sep 2021 02:38:25 AM UTC] Your cert key is in: /root/.acme.sh/keyvan.pw/keyvan.pw.key
[Sun 12 Sep 2021 02:38:25 AM UTC] The intermediate CA cert is in: /root/.acme.sh/keyvan.pw/ca.cer
[Sun 12 Sep 2021 02:38:25 AM UTC] And the full chain certs is there: /root/.acme.sh/keyvan.pw/fullchain.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing cert to: /etc/ssl/keyvan.pw/keyvan.pw.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing key to: /etc/ssl/keyvan.pw/keyvan.pw.key
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing full chain to: /etc/ssl/keyvan.pw/fullchain.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Run reload cmd: systemctl reload apache2
[Sun 12 Sep 2021 02:38:26 AM UTC] Reload success
[Sun 12 Sep 2021 02:38:26 AM UTC] ===End cron===
SSL has never been so cheap, easy, and automatable…
]]>I used this technique to clean out a few thru-holes that I had soldered. Not sure how you’re supposed to remove solder from a previously-soldered thru-hole, but this worked surprisingly well given it is exactly the correct size.