Earthly is so useful. Native linux/arm64 builds (which will be the default when running Docker Desktop on an Apple M1 chip) can be shipped directly to and run fine on a Comma 3!
Unrelated to Earthly, I hope this PR gets merged where I push docker a bit further to make AGNOS (the ubuntu-based OS that powers the Comma 3) buildable on an M1 machine.
AGNOS usually takes a day to build with an emulated arm environment from a Ryzen 7 chip. On my M1 Air it took about 20 minutes. I think it spent more time sparsifying the final disk image than actually compiling…
I need to see “your truth” with my own eyes before I cargo-cult your imposed commandments to perform some upgrades. If I’m vulnerable, prove it, else how will I know I solved the problem? Just keep taking your word for it, yeah?
This experience had a strange synchrony with the disputations relating to COVID-19, specifically how we cargo-cult the vaccinations without proof of vulnerability nor proof of patching of said vulnerability. I don’t talk about politics on this site but damn, this was such a funny parallel it’s impossible not to point it out.
Some of us just need evidence… we need to exploit or at least see proof of exploitability, before drastic action is warranted, for the drastic action could cause undesired side effects to a system that is otherwise working fine. This is scientific. Anything else is unscientific. I am so tired of appeal to authority or majority “consensus” being considered anything but fallacious. Sadly not everyone takes Logic.
When I’m lost I turn to IRC. Here’s my nice experience sharing this (extending the brainstorming beyond my team) with #elasticsearch on Libera.Chat IRC:
Fishing Line to Finish Line
keyvan: good afternoon. im looking for help reproducing the log4shell hack on my ES 5 instance. i have enabled slowlogger and can see my jndi-infected query in the logs, but it does not seem to call out to the external server. i have verified the external setup (im using log4shell.huntress.com) with a small example project containing only various log4j versions and it does work; but it seems elasticsearch isn’t affected? but it should be? thanks
wwalker: keyvan: does ES 5 run a new enough log4j to be affected? ES5 seems like it probably ran log4j rather than log4j-2
I am encoding like this now for iPhone embedding. The filter_complex is there because it also appears to me that the resolution must be standard, so when I crop out a strange resolution (ironically, I do this directly on an iPhone…) I seem to have to fill it back in; that’s what this filter does, using a blur effect:
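A sketch of such a filtergraph (the filenames, the 720×1280 portrait target, and the blur strength are my assumptions, not the original command, so it is echoed here rather than executed):

```shell
# Blur-filled pad: scale a blurred copy of the input to fill a standard
# canvas, then overlay the properly-scaled video centered on top of it.
FILTER="[0:v]scale=720:1280,boxblur=20[bg];[0:v]scale=-2:1280[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2"
echo ffmpeg -i input.mov -filter_complex "$FILTER" -c:v libx264 -pix_fmt yuv420p output.mp4
```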
It was easy to figure out what was going on by using Wireshark to see that the embedded server was disregarding Apple’s strict expectation of adherence to Range headers.
Here we see Hexo’s development server disregarding the range headers:
When using the http-server module we can see the range header is respected:
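To make the difference concrete, here’s a minimal sketch of the server-side obligation: parse the `bytes=START-END` spec out of the header and serve only that slice, answering with a 206. (The header value is hardcoded for illustration, and `video.mp4` is a placeholder.)

```shell
# Parse a Range header value and compute the slice a compliant server
# must return with a "206 Partial Content" status.
RANGE="bytes=0-99"
SPEC=${RANGE#bytes=}   # "0-99"
START=${SPEC%-*}       # first byte offset
END=${SPEC#*-}         # last byte offset (inclusive)
LENGTH=$((END - START + 1))
echo "Content-Range: bytes $START-$END/*  ($LENGTH bytes)"
# e.g. slice it out of the file with: dd if=video.mp4 bs=1 skip=$START count=$LENGTH
```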
On the weekend of November 12-14 2021 I attended comma hack which was a “hackathon” (“build something in a very short period of time by means of overexertion”-marathon).
The project I wished to PoC was to see how we could use the driver monitoring model (which is used to detect the distraction level of the driver) to guide a reprojection of one of the back-facing cameras so as to achieve the illusion of transparency.
These implementations rely on depth information, but on the comma 3 platform we do not necessarily have depth data out of the box. I say it this way because there are numerous techniques to acquire depth, see comma/depth10k.
Working directly on the C3 is possible and easy to do. I will document the workflow. These days even the hardcore tmux/vim users like me are tempted into the Remote-SSH extension in VSCode, so first we’ll fix that.
The /data/media directory in the C3 is where you have persistent read-write to the large NVMe drive. I create a developer folder in there.
When I boot the C3, before I do any work, I run the following script:
The transform bit is application-specific to how I was learning about the matrix transformation. Say I want to work on that again, this is its content:
This matrix is applied to the shader and was an opportunity to manipulate the surface to which the camera was projected… I learned that these were the meanings of the fields by way of editing the transform while my program was running:
Anyway, on the C3, when it boots, everything runs in tmux. We want to stop the UI. We also want to stop updaterd because it can randomly reset our working tree.
Now is a good time to replace openpilot with a branch containing the code for this particular experiment.
You can tmux attach and Ctrl-C to kill it. Who runs tmux on boot? Systemd does, there is a service called comma that launches tmux with the launch script inside the first shell.
All services will be stopped, and now we can leverage openpilot’s interprocess communications ourselves, with purpose. Let’s block two services, and manually start the camera and driver monitoring service. Do this in different tmux windows.
```
BLOCK="ui,updated" ./launch_openpilot.sh

cd /data/openpilot/selfdrive/camerad && ./camerad

cd /data/openpilot/selfdrive/modeld && ./dmonitoringmodeld
```
Finally, our iterative command to compile and run our binary:
```
cd /data/openpilot/selfdrive/ui && scons -u -j4 && ./seethru
```
The file to hack on is openpilot/selfdrive/ui/qt/widgets/seethrucameraview.cc or view it on GitHub here
The comments I have written in seethrucameraview.cc describe how the odds of pulling this off properly are low, but that is ameliorated, as far as the use case of driving in a car goes, by the fixed positions of the driver’s head and the C3’s mounted location. So it’s possible, and probably worth continuing, in order to achieve transparency through the C3’s display when driving with it.
There are many other ACME clients out there (here’s a list: https://letsencrypt.org/docs/client-options/#acme-v2-compatible-clients), but I like acme.sh because it saved me one day when I was desperately searching for a tool I could use without having to fumble with package managers. So we will explore some more of its capabilities now.
To take advantage of this, we must start using Cloudflare for DNS. We want to use this for a few reasons:
No need to listen on a port on a server to generate valid certs. In fact you don’t need any records in your zone at all to do this!
We want to generate wildcard certificates. Only the DNS API appears to support this feature, so we need a compatible DNS provider with an API supported by acme.sh, hence Cloudflare.
If your domain belongs to some other registrar, you can switch your nameservers over to Cloudflare.
This is important as Cloudflare’s DNS API is well-supported by acme.sh as this article will demonstrate.
This is one of three inputs required by acme.sh; in the next few steps we will establish these environment variables. Once you issue the cert, they will be stored in acme.sh's configuration for future use.
```
# API token you generated on the site. It should have Zone.DNS edit
# permission for at least one Zone being the domain you're generating certs for
export CF_Token=""
export CF_Account_ID="" # We will get this in the next step
export CF_Zone_ID=""    # We will get this in the next step
```
Once you have set your API token the following will help you get the remaining two. You may want to apt install -y jq if you’re pasting these commands so the JSON is parsed out for you.
```
curl -X GET "https://api.cloudflare.com/client/v4/zones" -H "Authorization: Bearer $CF_Token" | jq
```
If you can’t read jq selectors, you will now, as I’m showing you which key paths get you the AccountID and ZoneID below:
```
# zone id:
... | jq '.result[0].id'

# account id:
... | jq '.result[0].account.id'
```
Export those variables too and now you can move on to issuing the cert.
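For reference, the issue step itself looks roughly like this (example.com is a placeholder; acme.sh reads the CF_* variables exported above and performs DNS-01 validation through Cloudflare’s API, so no listening port is needed — echoed here rather than run):

```shell
# dns_cf is acme.sh's Cloudflare DNS hook; the -d flags request the naked
# domain plus a wildcard cert in one go.
CMD="acme.sh --issue --dns dns_cf -d example.com -d *.example.com"
echo "$CMD"
```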
Confirm it worked by hitting the website. Did you even bother creating your A record yet? I hadn’t at this point. This is a nice aspect of using the DNS API: you don’t actually need a server, you simply demonstrate ownership of the DNS.
Pretty amazing… people used to pay a lot of money and go through a lot more hassle to get this capability. But now within minutes I have proper wildcard and naked domain encryption.
Let’s install the cron so this automatically renews.
```
0 0 * * * acme.sh --cron
```
Nice. We can test it with --force too, which I have done. It seems that acme.sh will do everything per the previous commands upon renewal, including running your reloadcmd, e.g.:
```
[Sun 12 Sep 2021 02:38:25 AM UTC] Your cert is in: /root/.acme.sh/keyvan.pw/keyvan.pw.cer
[Sun 12 Sep 2021 02:38:25 AM UTC] Your cert key is in: /root/.acme.sh/keyvan.pw/keyvan.pw.key
[Sun 12 Sep 2021 02:38:25 AM UTC] The intermediate CA cert is in: /root/.acme.sh/keyvan.pw/ca.cer
[Sun 12 Sep 2021 02:38:25 AM UTC] And the full chain certs is there: /root/.acme.sh/keyvan.pw/fullchain.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing cert to: /etc/ssl/keyvan.pw/keyvan.pw.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing key to: /etc/ssl/keyvan.pw/keyvan.pw.key
[Sun 12 Sep 2021 02:38:26 AM UTC] Installing full chain to: /etc/ssl/keyvan.pw/fullchain.cer
[Sun 12 Sep 2021 02:38:26 AM UTC] Run reload cmd: systemctl reload apache2
[Sun 12 Sep 2021 02:38:26 AM UTC] Reload success
[Sun 12 Sep 2021 02:38:26 AM UTC] ===End cron===
```
SSL has never been so cheap, easy, and automatable…
Funny how sometimes certain things are exactly perfect. Did you know that the blue Pentel 0.7mm mechanical pencil (sometimes it’s marketed as being for “engineering”; now I know another reason why…) is the exact size of the inner diameter of a PCB thru-hole?
I used this technique to clean out a few thru-holes that I had soldered. Not sure how you’re supposed to remove solder from a previously-soldered thru-hole, but this worked surprisingly well given it is exactly the correct size.
Your adversaries will see you connect to Frankfurt. Why Frankfurt? Well, perhaps it’s a modern 1984 and you’ve found yourself in Oceania, Eurasia, or Eastasia, and the safest hop your adversaries might find acceptable for you to send some soccer video game traffic to is Germany. I’m of course kidding around; gotta have fun, right?
Anyway let’s configure our first hop, which is the most interesting part of the puzzle:
Frankfurt VPS
This is the entrypoint from the private location. (Entrypoint… hmm, sounds like I’ve been influenced by Docker)
First, let’s create the wireguard server that the private client will connect to.
We’ll use an innocuous port commonly used for a popular video game (some kind of soccer game that’s popular over there, I don’t remember which, and it does not matter)… and NAT all our traffic through this server.
Forgot how to generate wireguard keys? Go to the official site, it’s all there: https://www.wireguard.com/quickstart/. If you want to use preshared keys, it’s wg genpsk.
```
# Privacy centric for world travel...
# https://stanislas.blog/2019/01/how-to-setup-vpn-server-wireguard-nat-ipv6/
# seems the nat was wrong, took a tip from this one:
# https://www.cyberciti.biz/faq/how-to-set-up-wireguard-firewall-rules-in-linux/
```
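The server config itself isn’t reproduced above, so here is a minimal sketch of a wg0.conf in that spirit (keys, addresses, the “game” port, and the enp1s0 interface name are all placeholders, not the originals):

```
# /etc/wireguard/wg0.conf -- minimal sketch
[Interface]
PrivateKey = <frankfurt-private-key>
Address = 10.66.0.1/24
ListenPort = 3074   # an innocuous "game" port
# NAT the peer's traffic out the public interface
PostUp = iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o enp1s0 -j MASQUERADE

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.66.0.2/32
```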
But we don’t want to stop there… We want traffic to flow on to a US-based server that we can also control. This traffic will be private, within the wireguard network, so your adversaries will just see your game server playing another video game… Create a wireguard config that will connect to our New Jersey VPS:
/etc/wireguard/wg2.conf
```
# Connect us up to the NJ Vultr to NAT Frankfurt's traffic

[Interface]
PrivateKey = 4CAO8fbl44iJLUmDzL2/CIyylrc9a4GFb/OWgvJ3M1g=
Address = 192.168.73.2/32
# We create static routes for NJ endpoint so wireguard can
# directly connect to it from frankfurt.
PreUp = ip route add 149.28.238.111/32 via 136.244.90.1 dev enp1s0
PostDown = ip route del 149.28.238.111/32 via 136.244.90.1 dev enp1s0
```
We will have routing table issues, so let’s handle that with some ruby…
As I wrote in the comment at the time…
This watches for wg0 connections and automatically add/removes routes so that i can be mobile yet still appear from NJ regardless of where I connect to the frankfurt rig from
```
#!/usr/bin/ruby
# basically reason for the script is that the peer might connect from different ips
# and we need to make sure this ip is exempted from being routed through NJ
# because it's trying to connect by way of this server, so this server needs
# to make sure to route its packets back directly to the peer rather than to
# pass it through to the NJ. ok make sense? lets do it

# the caveat here is we need to delete the routes so the way we will do this
# is detect superfluous routes:
def the_business
  deleting = `ip route | grep 'via 136.244.90.1 dev enp1s0'`.split("\n").reject { |a|
    a.include?("default") or a.include?("149.28.238.111") or a.include?("dhcp")
  }.map(&:strip)

  # k now get the ones we wanna add...
  adding = []
  `wg show wg0 endpoints`.split("\n").each do |line|
    a = line.match(/\s(.+):/)
    adding << "#{a[1]} via 136.244.90.1 dev enp1s0" if a and a[1]
  end

  operations = []

  deleting.each do |aa|
    if adding.include? aa
      # no op
      # operations << "ip route add #{aa}"
    else
      operations << "ip route del #{aa}"
    end
  end

  adding.each do |aa|
    if deleting.include? aa
      # no op
      # operations << "ip route add #{aa}"
    else
      operations << "ip route add #{aa}"
    end
  end

  operations.each do |op|
    puts op
    `#{op}`
  end
end

while true
  the_business
  sleep 1
end
```
In the above script, the hardcoded IP and device name would need to change. These are just the IP address given by the platform company (Vultr in this case), and the network interface name.
Moving on to the systemd service unit…
/etc/systemd/system/wg-route.service
```
[Unit]
Description=automatic route modifier for wireguard connections
After=network.target

[Service]
ExecStart=/usr/local/bin/wg-route

[Install]
WantedBy=multi-user.target
```
Frankfurt is done, start up the services and move on to the New Jersey VPS…
Where’s wg1? I had used that to connect back to a Dallas VPS which connects a few other things like home, office, etc.
It’s really great to learn how basic routing works, and with wireguard it seems anything is possible with relative ease, as this exercise seems to reveal.
Anyway, NJ:
NJ VPS
Picking another gaming port just to throw them off some more. Frankly I can’t tell the difference anymore.
Everything here has since been deleted, so don’t judge me for not scrubbing keys, IPs, etc, again this was just an exercise and is here for reference to those that actually might need it and don’t have time to hack and slash/search their way to a working setup!
Perhaps in the future I’ll expand on this to have another port to effectively replace NJ with a home computer since certain websites will flag “normal web traffic” coming from a VPS/datacenter but not residential ISPs.
Something like this: Private Location <-> Frankfurt VPS <-> USA Home Computer <-> USA websites
I hope this showcases how powerful wireguard can be and provides some examples for those searching and dealing with this problem-set looking for reference.
Luckily I do not have a need for such capabilities, but it is good to know it can be done so easily and how. Reinforces my faith in humanity in some ways, and given world events over the last few years man do I appreciate the reinforcement.
Since learning mikrotik last year, and wireguard this year, I’ve found myself using this routing knowledge (and especially wireguard) daily, for both home (mobile access to home network resources) and work (work-from-home problem-sets, connecting servers together to expose a private service to some employees, etc). This exercise is just another example of an important use case which wireguard solves perfectly. What an incredible piece of software.
Are you looking for the official how-to guide from timescaledb about retention policies? Here is a link
Firstly, I want to know which of my tables are actually hypertables. You can list hypertables (check that link, as they talk about it having changed) by doing:
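In TimescaleDB 2.x the informational view is `timescaledb_information.hypertables`; older 1.x versions named it differently, which is presumably the change the docs mention. A sketch, assuming 2.x:

```sql
-- List all hypertables in the current database (TimescaleDB 2.x view)
SELECT hypertable_schema, hypertable_name
FROM timescaledb_information.hypertables;
```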
But I have a lot of other tables that are created by telegraf (specifically, phemmer’s fork which adds postgres output support). Why aren’t they hypertables?
This is important because you can’t do retention policies on regular tables with timescaledb. You also lose out on other important timescaledb features that matter in my disk-space predicament, e.g. compression.
Turns out that the telegraf plugin is not automatically creating a hypertable, so that’s a todo on my telegraf fork.
Regardless, we want to solve for disk space to avoid a 3 AM pagerduty alert. We want a retention policy of 12 months for the rails requests and 2 weeks for everything else.
Looks good. But we need to free disk space NOW and we have many tables that are not hypertables…
What we can do is create hypertables through a migration, which will lock up tables for an unknown amount of time, or we can drop the table and rebuild it.
I am going to opt to script the latter option because it’s not a big deal to get a fresh start on these other tables. List them with \d+:
```
cpu disk diskio elasticsearch_breakers elasticsearch_fs elasticsearch_http
elasticsearch_indices elasticsearch_indices_stats_primaries
elasticsearch_indices_stats_shards elasticsearch_indices_stats_shards_total
elasticsearch_indices_stats_total elasticsearch_jvm elasticsearch_os
elasticsearch_process elasticsearch_thread_pool elasticsearch_transport
ipvs_real_server ipvs_virtual_server mem net passenger passenger_group
passenger_process passenger_supergroup postgresql processes procstat
procstat_lookup rails_requests swap system websockets
```
Eliminating the two tables that we already dealt with, the plan is to take these tables, and for each one, truncate the table, create the hypertable, and then set a 2 week retention policy… Let’s do it manually with the biggest (and least interesting) table we have, the cpu table:
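For the cpu table, that manual sequence might look like this (a sketch assuming TimescaleDB 2.x and a time column named `time`, as telegraf creates it; on 1.x the retention call was named differently):

```sql
-- Drop the data, convert to a hypertable, then attach the retention policy.
TRUNCATE TABLE cpu;
SELECT create_hypertable('cpu', 'time');
SELECT add_retention_policy('cpu', INTERVAL '2 weeks');
```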
I needed to do this in order to make a readonly user to consume TimescaleDB from Grafana. In Grafana users can write arbitrary SQL directly to the DB so for safety reasons a readonly user makes sense.
I realized that adding a read-only user forced a bit more understanding than is necessary to use PostgreSQL from an application or to conduct performance analysis (the typical developer tasks), so I would not be surprised if many devs, like me, haven’t bothered. I really recommend doing it.
Rather than post my own notes, I prefer the comments on this gist from Tomek. I will archive the main gist here and link to the GitHub-hosted gist down below, which is worth a look for the excellent follow-up discussions.
```
-- Create a group
CREATE ROLE readaccess;

-- Grant access to existing tables
GRANT USAGE ON SCHEMA public TO readaccess;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readaccess;

-- Grant access to future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readaccess;

-- Create a final user with password
CREATE USER tomek WITH PASSWORD 'secret';
GRANT readaccess TO tomek;
```
By understanding this I was able to upgrade my mental model of PostgreSQL to the following: the world is composed of users and groups (both called roles) which have certain privileges on databases, schemas, and tables (also known as relations). Databases contain schemas, and schemas contain tables.
I was relatively ignorant about the role of schemas before, but here we can see it is the mechanism by which future table privileges can be indicated. Do you know other reasons for which it may be valuable to know about schemas and their nuances?
This is part of a series about PostgreSQL & TimescaleDB – in the next one I will show how to deal with a TimescaleDB/PostgreSQL instance that is running out of disk space. We’ll do this by adding retention policies to certain relations after doing a bit of analysis.
Go to VMs, click Add, choose Linux.
Switch to Form View if you are in XML view.
Set Name to RouterOS.
Set BIOS to SeaBIOS (very important!).
Set Disk to Manual, with format qcow2, path pointing at the qcow2 file, and bus SATA.
Set Network to the bridge you want to use and/or pass in any PCI devices.
Uncheck “Start VM after creation” and click Create. If you forgot to uncheck it, just stop the VM.
Edit the VM (click the logo of your VM for the edit dropdown button to appear).
Switch to XML view and locate the network card. It looks something like this:
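I don’t have the original XML handy, but a libvirt network card stanza generally looks like this (the MAC, bridge name, and PCI address are placeholders):

```xml
<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```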
You may also find it's a good time to apply some basic firewall rules... depending on what you're planning on doing.
The example below makes sense for my purposes and may be helpful to examine:
```
/ip firewall filter
add action=accept chain=input comment="Accept new input on tcp/8291 from computer running winbox" connection-state=new dst-port=8291 protocol=tcp src-address=10.27.0.198
add action=accept chain=input comment="Accept new input from gateway" connection-state=new src-address=10.14.52.1
add action=accept chain=input comment="Accept established or related input" connection-state=established,related
add action=drop chain=input comment="Drop all input on ether1" in-interface=ether1 log=yes
```