Discovering Unraid

Another blog post while I wait for the array to be created.

How do I square Unraid being non-free with the goal of retaining data sovereignty?

You should retain access to your own stuff without having to ask permission from some third party who may not even be reachable, let alone agreeable, in the future.

How does this work? How can I be OK with this?

My guess is that at the end of the day Unraid can be thought of as merely a very advanced WebGUI management layer between you and the free XFS file system, on top of the free Linux kernel.

Therefore by purchasing Unraid I am not giving up too much freedom, rather I am only deferring learning and building what would be an ideal WebGUI (Unraid) for managing the underlying complexity. I am paying for the hard work by the Unraid developers and to support the community.

This system helps users enjoy more convenient use of the underlying system, which is too complex for a mere mortal like me. There is no ethical issue here about data sovereignty because XFS is free: if the Unraid team stops supporting Unraid, I can drop to a regular Linux shell and recover my data.

By supporting Unraid, the negative outcome I’m already prepared for becomes less likely.

This is OK, but what happens in the future as I grow reliant on that UI? In a disaster scenario where I can’t receive an update, does it stay alive?

A couple of relevant entries in the Unraid Wiki FAQ sort of answer this.

How hard is unRAID to use if I don’t know Linux?
Not hard at all. Although the unRAID server software is based on a slimmed-down Linux, it is managed almost entirely from your web browser, typically on a Windows or Mac computer. Some users are happy with that, but many want to take advantage of the many tweaks and addons for unRAID, and these usually require a little hands-on work. But they are completely optional, and are generally accompanied with lots of help and instructions, and there is a very helpful user community. Many users find this makes for a good introduction to Linux, at their own pace. See also this thread, especially this post, for more comments.

Does unRAID need Internet access?
The unRAID server software generally does not require Internet access. Of course, you will need Internet access from another desktop to download the software and software updates, and to read the unRAID forums and this Wiki!
However, there are many benefits to providing Internet access to your unRAID server. The unRAID software and your plugins and Docker containers can all be updated from within the software. Usability and manageability are improved with email and other notifications. unRAID supports NTP, the Internet time service, so if you enable the NTP service, your server will keep accurate time. In addition, expert unRAID users have created many addons for unRAID, such as plugins, Dockers, and VM’s, that can provide numerous application services such as torrent support, etc. All of these benefits are completely optional. See also this.

Can Unraid replicate?

I didn’t see any official support for the concept of pairing two servers together, no. Not yet.

Unraid really helps with learning about Disk Health

Having a UI for easily inspecting disk health, performing tests, and downloading SMART reports and a community from which to learn about these topics is wonderful.

Paranoid about Flash Drive going bad?

Better take a backup of it soon, as described in the wiki.

More reading here on that subject:

Paranoid about disks themselves?

Read about that here

Path to Unraid

Let me start with a bit of meta-blogging, since it has been a while since I’ve edited my website…

This is really just a test post to get my feet wet with editing my site again after all this time.

It seems that I don’t write here anymore, like I used to.

When creating this blog site my goal was to keep it purely technical and apolitical.

This has been mostly a success, but as our political climate grows more tumultuous, this site grows silent. Why?

This is due in part to the realization that even the technical work is political, and in part to a growing fear that has kept me silent, in line with the apolitical goal I set for my writing at the outset.

I’ll focus only on the former briefly, and segue to the main thrust of this post.

For example, when it comes to choosing where to store my (~ 10 TB) data – do I put it on Google? DropBox?

What causes do they support? Do I support them? Do I trust them? Even if I did, do I trust their provider to keep them online? To secure my data? What about my own ISP?

Of course if you really follow the plot you realize you’ve got your work cut out for you.

Instead, I’ll self-host and then that’s fodder for a blog post to help others.

I’m currently investigating this problem and am excited to buy into Unraid.

The biggest selling point of Unraid (and there are many) is the community application system.

This system is what I dreamed of around 2015 when I learned Docker.

But let’s be clear that the Docker Hub is for developers; using it or pushing it onto home users is a corruption of its intended purpose.

This developer focus is why you have such a rich ecosystem around Docker (and by rich I mean complicated and hard to understand; case in point: k8s).

I think the community application system within Unraid understands and addresses this gap, and so I’ll begin documenting this process here as it unfolds.

Blocking Ads with OPNsense's internal dnsmasq

I first tried to use UnboundDNS, but it seemed unreliable once modified for ad blocking. I later discovered that dnsmasq does everything I expected from Unbound, with a familiar configuration interface. It has been battle-tested for ad blocking, so, as a prerequisite, enable and configure it.

Once you’re done, enable SSH and connect to your OPNsense box.

I used my phosphor user’s home directory to store my adblock files. Replace my username with yours where applicable.

Steven Black maintains a nice hosts file that blocks a lot of things. We will download that and strip out the comments (dnsmasq requires this when loading extra hosts files).

mkdir adblock
cd adblock
curl -sSL "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts" | grep -v '^#' > hosts

Next go to your OPNsense web GUI and navigate to Services -> Dnsmasq DNS -> Settings

In the Advanced section add an addn-hosts entry like the following, replacing my username with yours, or wherever you put your hosts file:

addn-hosts=/home/phosphor/adblock/hosts
You can add multiple hosts files this way if you wish. Finally click Save and then Apply Configuration.

opnsense dnsmasq settings

Now you can test the adblock. You may need to reset your DNS cache on the clients you are testing. I like to use this site to test:

test results

Hunting DNS queries to block

dnsmasq can also log queries if you add log-queries to the Advanced configuration section. The OPNsense dnsmasq logs will then show each query.
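For example, dropping this into the same Advanced box turns on per-query logging (log-queries is a standard dnsmasq option; it is verbose, so remove it once you’re done hunting):

```
# Log every DNS query dnsmasq handles (debugging only; verbose)
log-queries
```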


How to setup Apple push notifications for Riot using your own Sygnal instance

You will need:

  • An active Apple developer license
  • A server on which to deploy Sygnal

Push notifications in Matrix are configured by the client.

This is necessary because push is controlled by the device vendor. In other words, the iOS app tells the homeserver where the Sygnal server with the certificate corresponding to the app’s “App ID” is.

So you need to deploy a Sygnal server if you haven’t already, and load it with the certificate matching the App ID you’ve compiled Riot with.

  1. Modify the plist file for Riot, configuring the App ID you’ll use and the Push Gateway (Sygnal) URL. This URL is from Synapse’s point of view: Synapse is the one receiving messages from other Matrix users, and it will use this URL to push the notification to your device.

  2. Log in to the Apple developer portal

  3. Go to the Certificates area, then go to “All”, and click the plus button

    apple-id-certs-area.png sidebar-all-certs.png new-cert-btn.png
  4. Choose the correct certificate for push, highlighted in green: correct-selection.png

  5. Go through the process of acquiring the certificate

    request-csr.png request-csr-2.png download-cert.png
  6. Locate the certificate and keypair, and export it as p12.

    locate-cert.png export-cert.png export-name-cert.png
  7. Convert the p12 file to a pem file:

    [keyvan@airframe ~]$ openssl pkcs12 -in ~/Desktop/Certificates.p12 -out apns.pem -nodes -clcerts
    Enter Import Password:
    MAC verified OK
  8. Configure Sygnal to use this pem file:

    pw.keyvan.riot.type = apns
    pw.keyvan.riot.platform = sandbox
    pw.keyvan.riot.certfile = /mnt/storage/apns.pem


You may want to do some maintenance on existing pushers in your Synapse database.

To list existing pusher entries:

select * from pushers;

To delete all existing pusher entries:

delete from pushers;

A simple React pagination component

I had to do pagination in a React app today. Often, for things like this, it’s easier to write your own thing than to use a library. Here’s the Pager component I ended up with:


import React from 'react';

export const Pager = React.createClass({
  render: function() {
    const {
      totalRows,
      pageNumber,
      lastPageNumber,
      perPage,
      query
    } = this.props;
    const checkDisabled = (change) => {
      let newPage = pageNumber + change;
      if (newPage < 1) return true;
      if (newPage > lastPageNumber) return true;
      return false;
    };
    const jumpToPage = (pageNo) => query({ perPage, showPageNumber: pageNo });
    const changePage = (change) => (event) => jumpToPage(pageNumber + change);
    return <span>
      <span className="total">
        total {totalRows} #{((pageNumber - 1) * perPage) + 1}
      </span>
      <span className="pager">
        <button disabled={checkDisabled(-1)} onClick={changePage(-1)}>prev</button>
        <select value={pageNumber} onChange={({target: {value}}) => jumpToPage(Number(value))}>
          { Array(lastPageNumber).fill().map((_, i) =>
            <option key={i} value={i + 1}>page: {i + 1}/{lastPageNumber}</option>) }
        </select>
        <button disabled={checkDisabled(1)} onClick={changePage(1)}>next</button>
      </span>
    </span>;
  }
});

This Pager component calls the query prop function (a redux action, say) in response to the previous and next buttons, and to direct page selection, with an object like { perPage: 10, showPageNumber: 1 }.
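As a sketch of how the props might be wired up (the helper name pagerProps and the numbers below are illustrative assumptions, not part of any library), the only value you have to derive is lastPageNumber:

```javascript
// Hypothetical helper: build the Pager props from values your data layer
// already knows. Only lastPageNumber is derived.
function pagerProps(totalRows, perPage, pageNumber, query) {
  return {
    totalRows: totalRows,
    perPage: perPage,
    pageNumber: pageNumber,
    // ceiling of rows / page size gives the last 1-based page number
    lastPageNumber: Math.ceil(totalRows / perPage),
    query: query
  };
}

// query receives { perPage, showPageNumber } when the user navigates
var props = pagerProps(95, 10, 1, function(q) { console.log(q); });
console.log(props.lastPageNumber); // 10
```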

Naturally, this being React, how this is used and what query does is out of scope, unknown, and irrelevant, but here’s a little screenshot of the UI this is being used in, for kicks:

pager screenshot

Chicken breast and red potatoes single oven recipe


chicken breast and potatoes


  • 2 half chicken breasts
  • 1/4 cup butter
  • 6 cloves crushed garlic
  • 2 cups seasoned dry bread crumbs
  • 8 red potatoes


  • Preheat oven to 400 degrees F
  • prep chicken
    • In a small saucepan melt butter/margarine with garlic. Dip chicken pieces in butter/garlic sauce, letting extra drip off, then coat completely with bread crumbs.
    • Place coated chicken in a lightly greased baking dish. Combine any leftover butter/garlic sauce with bread crumbs and spoon mixture over chicken pieces.
  • prep potatoes
    • Toss potatoes with oil, salt and pepper. Arrange, cut side down, on a large lipped cookie sheet or jellyroll pan. tossed potatoes
  • Put the potatoes on the lowest rack, and chicken around half-way. Bake for 45 minutes. chicken mid and potatoes low


Came out great! The oven configuration and timing seemed perfect.

The chicken was boring in the middle because it wasn’t marinated; also, I think the breading was unnecessary and hid the taste of the garlic butter.

The potatoes were pretty amazing, especially how they became crispy on the bottoms.


Homemade Mirin Recipe

yield: 1/2 cup


  • 5 tablespoons (65 g) sugar, such as organic cane sugar
  • 1/2 cup (120 ml) sake
  • 1 1/2 teaspoons pure cane syrup, such as Steen’s (optional)


  1. Combine the ingredients in a very small saucepan, such as a butter warmer/melter. Bring to a boil over medium heat, giving things a stir to ensure the sugar has dissolved.
  2. Remove from the heat and set aside to cool. Taste and add the cane syrup for depth, if you like.


Yakitori Recipe

yield: 1 cup


  • 1/2 cup soy sauce
  • 1/4 cup sugar
  • 1/2 cup mirin
  • 1/4 cup sake
  • 1 garlic clove, crushed
  • 1 slice fresh ginger, peeled, 1/8 inch thick
  • 1 tablespoon water
  • 1 tablespoon cornstarch


  1. In a small saucepan, combine soy sauce, sugar, mirin, sake, garlic and gingerroot.
  2. Cook over medium high heat 3 to 4 minutes.
  3. In a small bowl, blend water and cornstarch.
  4. Stir cornstarch mixture into soy sauce mixture.
  5. Cook until thickened, stirring constantly.
  6. Strain sauce.
  7. Keep at room temperature for up to 24 hours.
  8. Refrigerate.


Implementing Link Header Pagination on the Node.js Server

In the past few years, more and more APIs have begun to follow the RFC5988 convention of using the Link header to provide URLs for the next page. We can do this too and it’s quite easy.

Here is a function I recently wrote to do this for the simple case of a big array:

function paginate(sourceList, page, perPage) {
  var totalCount = sourceList.length;
  // last valid zero-based page index (ceil, not floor, minus one:
  // floor(totalCount / perPage) overshoots when the count divides evenly)
  var lastPage = Math.max(0, Math.ceil(totalCount / perPage) - 1);
  var sliceBegin = page * perPage;
  var sliceEnd = sliceBegin + perPage;
  var pageList = sourceList.slice(sliceBegin, sliceEnd);
  return {
    pageData: pageList,
    nextPage: page < lastPage ? page + 1 : null,
    totalCount: totalCount
  };
}
To demonstrate the usage, imagine you have defined a function getMovies which provides a movieList array you wish to paginate.
You have an express route /movies which serves as the web API to your movie library.
You might create a paginated route like this:

app.get('/movies', function(req, res) {
  var pageNum = parseInt(req.query.page || 0);
  var perPage = parseInt(req.query.per_page || 50);
  getMovies(function(err, movieList) {
    if (err) throw err;
    var page = paginate(movieList, pageNum, perPage);
    if (page.nextPage) {
      // RFC5988 format: <uri>; rel="next"
      res.set("Link", '</movies?page=' + page.nextPage + '>; rel="next"');
    }
    res.set("X-Total-Count", movieList.length);
    res.json(page.pageData);
  });
});
Note that in most cases you would not be paginating from a big array; this was my first time paginating a fairly large set that did not come from a database. With database access, your function won’t be so general, since it will depend on the database API to build an efficient query by offset and limit.
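For contrast, here is a sketch of that database-backed variant. The db adapter and its query(sql, params, cb) signature are assumptions for illustration, not a real driver; a tiny in-memory stub stands in for the database so the sketch runs:

```javascript
// Database-backed pagination: push OFFSET/LIMIT into the query instead of
// slicing a big array in memory. db.query(sql, params, cb) is a
// hypothetical adapter interface, not a specific driver's API.
function paginateMovies(db, page, perPage, cb) {
  var offset = page * perPage;
  db.query('SELECT * FROM movies ORDER BY id LIMIT ? OFFSET ?',
           [perPage, offset], cb);
}

// In-memory stub standing in for a real database connection.
var stubDb = {
  rows: ['a', 'b', 'c', 'd', 'e'],
  query: function(sql, params, cb) {
    var limit = params[0], offset = params[1];
    cb(null, this.rows.slice(offset, offset + limit));
  }
};

paginateMovies(stubDb, 1, 2, function(err, rows) {
  console.log(rows); // page 1 of size 2: [ 'c', 'd' ]
});
```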

Eavesdropping on your iPhone's network traffic with your Mac and Wireshark

A few days after this writing, a relevant item appeared on Hacker News discussing the use of an HTTP proxy for this purpose, which lets you see TLS traffic in most circumstances, a shortcoming of my Wireshark approach here. Here is the link. The top comment recommends mitmproxy, which looks like the better tool for this job than Wireshark. Still, Wireshark is very good to learn, so that you can intercept traffic when lower-level network functions are used directly, although I think that is becoming quite rare.

“Pokemon Go” is a mobile phone game in which little mofos spawn in various places in the real world (on a map) and you have to be within proximity to (a) discover them and (b) “catch” them by throwing a ball at them.

Finding these little mofos is a hassle because you don’t know where the optimal populations might be at any moment and/or you may be looking for a specific type of little mofo. If only you could see all the locations at once!

Someone created pokevision about a week prior to the writing of this article; however, it is currently not working. It looks like this, showing you the exact locations and spawn timeouts of the little mofos anywhere:


I believe that pokevision was created by reverse engineering the communication between the mobile app and the backend game server, determining the API, and then using that artificially from the pokevision servers, caching the responses appropriately in a little-mofo-location-database.

If we are to do the same reverse engineering task (and this applies to any traffic on your mobile device, or any device with wifi but restricted access, a mobile phone being just that), we need to set up a wifi hotspot that we control and monitor.

On macOS, this is very easy. A simple checkbox abstracts away the creation and configuration of a bridge in which your wifi becomes an infrastructure access point and NAT and DHCP are handled for you automatically:


Next up we open wireshark and select the bridge as our capture interface. This allows us to eavesdrop on the iPhone, assuming it is connected to our Mac via wifi.


Now the packets start flowing in:


Notice the protocol is TLSv1.2. It’s probably HTTP beneath that encryption layer. Wireshark lets us follow the connection, so the data stream is more readable than just straight packets:


In this view we can see that we have correctly identified traffic originating from the “Pokemon Go” app, but that a handshake is underway and in order to view anything else, we’d need to decrypt the encryption layer.


This all took some 20 minutes or so and got us an environment in which at least the ciphertext traffic was available to us, and with the right keys, plaintext-observable. I think that the pokevision team took this to the next level, using an android phone (probably rooted) to harvest the keys required to decrypt the traffic.

Because pokevision was created through reverse engineering, it probably won’t last. This explains why we are seeing this error despite the fact that the “Pokemon Go” app itself is currently operational. If I were Niantic (owner of Pokemon Go), I would crack down on pokevision and add an in-app purchase that temporarily grants the powers of pokevision to a player.