Shinobi – Home NVR Software That Eats Your MySQL Databases

This is a quick gripe about https://shinobi.video/.

tl;dr: I was looking to install a home surveillance system using PoE cameras, and I prefer to use a software package instead of buying an NVR appliance. I tried installing “Shinobi” but became frustrated by the poor documentation and an install script that is careless with your system.

Shinobi came recommended in a Reddit post.

I *tried* to install it but found it difficult, opaque, and a bit user-hostile.

I’m sure I will get flamed for my opinions, but oh well, here they are:

  • The website discourages container-based installations, for reasons not clearly stated. A container-first approach would have avoided the problem described in the latter half of this list.
  • The container installer, like the bare metal installer, is a script rather than a set of instructions. You shouldn’t blindly trust install scripts, and developers shouldn’t rely on them; if they do, the script should clearly explain each step it is about to perform.
  • The container installer only recognizes docker; it does not recognize podman. Podman is a mostly-compatible replacement for docker created by Red Hat, is the preferred container engine on RHEL-based distros, and is an available option in Ubuntu. (Note that Docker doesn’t even publish Red Hat packages anymore; they’ve given up.)
  • You can alias docker to podman, but the dev’s lack of awareness suggests they haven’t done any testing on Red Hat distros like RHEL or Fedora.
  • podman-compose isn’t (AFAIK) 100% compatible with docker-compose. It might work; I don’t know, but I don’t use *-compose and am not interested in it.
  • They provide some docker run commands, but documentation is lacking.
  • They want you to download a shell script and run the shell script to perform the installation. The actual installation steps are buried within sub-scripts that get downloaded from a git repo. Sketchy.
  • The installer gives you several choices, e.g. a “touchless” installation or an “advanced” installation.
  • The touchless installation is anything but. It runs many apt-get commands without warning the user or explaining what is happening. In my case, it tried to install mariadb; however, I am already running mysql, which conflicts with mariadb.
  • The apt-get commands in their scripts do not stop the installation when the conflict is detected; instead, they added a force flag that uninstalls mysql and removes your existing data with no warning. (A sketch of a safer pattern follows this list.)
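
For contrast, here is a minimal sketch (my own suggestion, not code from Shinobi) of a pre-flight check a careful installer could run before touching the package database:

#!/bin/sh
# Hypothetical pre-flight check before pulling in mariadb-server.
# Refuse to continue if a conflicting MySQL server is already installed.
if dpkg -s mysql-server >/dev/null 2>&1; then
    echo "mysql-server is installed and conflicts with mariadb-server." >&2
    echo "Aborting; migrate or remove MySQL yourself first." >&2
    exit 1
fi
# No -y or force flags: let apt show exactly what it will install
# and remove, and make the user confirm it.
apt-get install mariadb-server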

Look, I get that it is a community project. I’m sympathetic to the fact that it’s all free labor and I accept responsibility for running scripts without more careful inspection.

Frustrated with the lack of good podman support, I tried installing the “bare metal” version and got burned by it.

YMMV; you might have a better experience with it than I did.

Apple iOS IKEv2/IPSEC VPN Breaks After New Let’s Encrypt SSL Certificates Issued

certbot renewed my SSL certificates and automatically loaded them into strongswan. I am currently unable to establish an IPSEC tunnel from my iOS devices to my home VPN server with these new certificates.

Unfortunately, in typical Apple tradition, there are no readily accessible logs to understand why the underlying IKE daemons in iOS are unhappy. Strongswan has no problems with the new certificates.

I examined the certificates (working vs. non-working) and see that Let’s Encrypt is now using an EC (elliptic curve) key instead of RSA for the keypair.

Apple’s documentation states they support ECDSA certificates for IKEv2 authentication.
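
For reference, the diff below compares openssl text dumps of the old (working, RSA) and new (broken, EC) certificates. Roughly how to reproduce it, with placeholder file names:

# Dump each certificate as text and diff the two dumps.
openssl x509 -in old-cert.pem -noout -text > old.txt
openssl x509 -in new-cert.pem -noout -text > new.txt
diff old.txt new.txt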

<             Public Key Algorithm: rsaEncryption
<                 Public-Key: (2048 bit)
>             Public Key Algorithm: id-ecPublicKey
>                 Public-Key: (256 bit)
>                 ASN1 OID: prime256v1
>                 NIST CURVE: P-256

<                 Digital Signature, Key Encipherment
>                 Digital Signature

I found a similar reported issue here from last year: https://apple.stackexchange.com/questions/412089/ios-native-ikev2-client-and-ecdsa-server-certificates

The only other difference is that the Key Usage field no longer includes “Key Encipherment” as an allowed usage.

Forcing Let’s Encrypt to generate an RSA certificate resolves the issue.
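
If you want to do the same, certbot’s --key-type flag handles it. A sketch, with a placeholder certificate name:

# Reissue the certificate with an RSA key instead of EC.
# "vpn.example.com" is a placeholder for your certificate name.
certbot renew --cert-name vpn.example.com \
        --key-type rsa --rsa-key-size 2048 --force-renewal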

fr24feed updated to 1.0.34

I run an ADS-B monitoring station using RTL-SDR USB dongles. Normally people do this with Raspberry Pis, but I have a perfectly good AMD64 box that runs 24/7.

These are the new .deb packages for this version:

x86: http://repo.feed.flightradar24.com/linux_x86_binaries/fr24feed_1.0.34-0_i386.deb

amd64: http://repo.feed.flightradar24.com/linux_x86_64_binaries/fr24feed_1.0.34-0_amd64.deb

I run these in a podman (docker) container. fr24feed failed to load because CAP_SYS_NICE was not enabled on the container.

You can add the capability to the container, or remove the capability from the /usr/bin/fr24feed binary using the instructions here: https://gitlab.alpinelinux.org/alpine/aports/-/issues/11992. Either option will allow fr24feed to start up.
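
A sketch of both options; the image name is a placeholder for whatever fr24feed image you run:

# Option 1: grant the capability when starting the container
# (the flag is the same for podman and docker):
podman run -d --cap-add=SYS_NICE localhost/fr24feed

# Option 2: inside the image, strip the file capabilities from the
# binary so it stops requesting CAP_SYS_NICE at startup:
setcap -r /usr/bin/fr24feed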

Mini-Tutorial: You got PXE boot working on Raspberry Pi, but how do you make the root filesystem mount as NFSv4 instead of NFSv3?

Part of the reason I started this blog is so I can write down my tech notes with the important details, instead of forgetting those details and having to spend a couple of hours figuring things out again when I dust off a project I put on the back burner months ago.

I apologize if you came here looking for a complete guide to net booting a Pi; I only intend this post to fill in a missing detail that bugs me. Here are a couple of places that might serve as good starting points if that’s what you need:

https://tp4348.medium.com/netboot-raspberry-pi-using-ubuntu-20-04-os-cb3973ff65b0

https://xunnanxu.github.io/2020/11/28/PXE-Boot-Diskless-Raspberry-Pi-4-With-Ubuntu-Ubiquiti-and-Synology-1-DHCP-Setup/

Anyway, to the point: I got PXE boot to work. Everything boots up, I can ssh to the Pi, etc. But as someone who has deployed NFS to clients before, I noticed root is mounted as NFSv3:

# mount | grep nfs
192.168.29.2:/pxe/storepi on / type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,port=2049,timeo=600,retrans=10,sec=sys,local_lock=all,addr=192.168.29.2)

Is this really a big deal? Not really. But NFSv4 has a lot of improvements over NFSv3, including features that make the protocol more efficient. You probably won’t notice them unless you’re benchmarking. But I think it’s important to be exposed to the latest technologies so I can be familiar with them and apply what I’ve learned elsewhere. Thus, I *demand* that this be mounted as NFSv4.

Why is this happening? Linux will normally mount with the highest NFS version that both client and server support. The client and server both run Ubuntu 22.04 and should happily allow an NFSv4.2 mount.
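
You can confirm the server side is willing to speak v4 before blaming it; on an Ubuntu NFS server the enabled versions are listed here:

# On the NFS server: "+4", "+4.1", "+4.2" mean NFSv4 is enabled, so
# the v3 mount is purely the client's doing.
cat /proc/fs/nfsd/versions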

The cause is found in the initramfs environment. The initramfs contains a very small working system, i.e. a few utilities that are just enough to prepare and boot into the *real* system. This includes filesystem mounting tools.

The “nfsmount” utility that ships in Debian (and is inherited by Ubuntu) does not support NFSv4. There are some ugly hacks and attempts to make it work, if you care to read the comments in the bug reports below.

https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1954716

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=409272

I found an easier solution: copy the /sbin/mount.nfs utility into the initramfs and call that instead of the weak-sauce “nfsmount” utility that comes by default.

What are the downsides to doing this, and why don’t Debian / Ubuntu just do that? Probably because the utilities in the initramfs are intended to be *very small*. They do not even use glibc; they use klibc, which is intended to produce tiny binaries.

Is there an actual downside to a larger-than-normal initramfs? In this scenario I don’t see one, but I’m guessing Debian and Ubuntu don’t want to bloat the initramfs for everyone.
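
To get a feel for the size cost, list what copy_exec will drag into the initramfs alongside the binary:

# mount.nfs links against glibc and friends; copy_exec copies each of
# these shared libraries into the initramfs too.
ldd /sbin/mount.nfs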

Here are the steps:

  1. Do the necessary steps to get TFTP / NFS booting of your Ubuntu / Raspbian system working. Make sure it actually works and boots successfully before proceeding further.
  2. Create a “/usr/share/initramfs-tools/hooks/nfs” file with the contents shown in the code blob below. Make sure it is executable.
  3. Edit the existing file “/usr/share/initramfs-tools/scripts/nfs”. Refer to the code blob below.

    In the function “nfs_mount_root_impl()”, comment out the “nfsmount” command and add the “mount.nfs” command that I have shown. Those are the only changes you need to make here.

    Note that I snipped out all but the last few lines of the function to keep the code blob tidy. Don’t actually remove anything from your file; you still need all of it for this to work correctly.
  4. Regenerate your initramfs using this command:
    update-initramfs -c -k all
  5. Reboot. If you did this correctly, the system should come up and you should see a similar output from the “mount” command as per the below code blob showing an nfs vers=4.x mount.
root@storepi:/usr/share/initramfs-tools/hooks# cat nfs
#!/bin/sh
set -e
PREREQ=""
prereqs () {
        echo "${PREREQ}"
}  
case "${1}" in
        prereqs)
                prereqs
                exit 0
                ;;
esac

. /usr/share/initramfs-tools/hook-functions

# copy_exec copies the binary plus the shared libraries it links
# against into the initramfs being built
copy_exec /sbin/mount.nfs /sbin
exit 0
from /usr/share/initramfs-tools/scripts/nfs
# parse nfs bootargs and mount nfs
nfs_mount_root_impl()
{       
        .. snipped ..
        # shellcheck disable=SC2086
        mount.nfs ${NFSROOT} ${rootmnt} -o nolock ${roflag} ${NFSOPTS}
        #nfsmount -o nolock ${roflag} ${NFSOPTS} "${NFSROOT}" "${rootmnt?}"
}
# mount | grep nfs
192.168.29.2:/pxe/storepi on / type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=10,sec=sys,clientaddr=192.168.29.16,local_lock=none,addr=192.168.29.2)

Running RHEL Docker Containers in Ubuntu (or other non-RHEL)

My home server runs Ubuntu 22.04. I do a lot of RHEL development at work. Naturally I would like to have a more diverse environment at home.

If you use Red Hat’s UBI (universal base image), you get free access to a reasonable subset of what is available from a RHEL subscription.

If the UBI subset fits your needs, that’s great. However, in the process of setting up a RHEL 9 container to host this very WordPress blog, I found that some packages WordPress suggested I install were missing. When I tried to install the PHP ImageMagick extension from EPEL, it turned out to depend on graphviz, which is available in RHEL but not in the UBI subset. Dang.

I have a free RHEL developer subscription; however, actually attaching a UBI image to a RHEL subscription is somewhat tricky if you aren’t running the container on a RHEL host.

If you run your container on a RHEL host, podman (Red Hat’s docker replacement) will, to my understanding, bind-mount the host’s subscription certificates into the container, allowing the container to inherit the host’s subscription without being registered as a separate instance that consumes an entitlement.

To make this work on Ubuntu, you can’t simply run “subscription-manager register”; it refuses:

subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management.

But my host system is Ubuntu!

Red Hat seems to prefer stability over compatibility: they would rather not support containers running on non-RHEL container runtimes, so they make it difficult to put a RHEL subscription into a container running on a non-RHEL host.

I found this article: https://access.redhat.com/discussions/5889431

Apparently the simplest thing to do is run this sed one-liner from a user in that discussion. It modifies the Python function that detects whether subscription-manager is running inside a container so that it always reports that it isn’t:

sed -i 's/\(def in_container():\)/\1\n    return False/g' /usr/lib64/python*/*-packages/rhsm/config.py
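
After that patch, registration inside the container works normally (a sketch; the username is a placeholder for your Red Hat login):

# Inside the UBI container, after patching config.py:
subscription-manager register --username you@example.com
# graphviz now resolves from the full RHEL repos:
dnf -y install graphviz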

Note that this is fine for a test / dev / lab environment, but it’s not a good idea for “production”: the setup is unsupported, and Red Hat support will turn you away if you try to open a case on such a container.

However it is working fine for my home lab use case and I am happy to have it as a way to “broaden my horizons” so to speak.

qemu + Windows NT 4 + Multiprocessor HAL = Crash. Fixed.

I spend a lot of time running old OSes, mainly for personal amusement rather than practical work. I guess the lack of practicality is why I keep running into bugs and issues with qemu and older OSes.

Windows NT 4, released in 1996, does in fact support SMP. However, qemu with KVM acceleration will crash if you install the Multiprocessor HAL in Windows NT 4.

Stack trace of thread 2072708:
#0  0x00007f2e8a90ba7c __pthread_kill_implementation (libc.so.6 + 0x96a7c)
#1  0x00007f2e8a8b7476 __GI_raise (libc.so.6 + 0x42476)
#2  0x00007f2e8a89d7f3 __GI_abort (libc.so.6 + 0x287f3)
#3  0x000055cbf99545a2 do_patch_instruction (qemu-system-i386 + 0x2f85a2)
#4  0x000055cbf995b1d4 process_queued_cpu_work (qemu-system-i386 + 0x2ff1d4)
#5  0x000055cbf9d762f8 kvm_vcpu_thread_fn (qemu-system-i386 + 0x71a2f8)
#6  0x000055cbf9ef4560 qemu_thread_start (qemu-system-i386 + 0x898560)
#7  0x00007f2e8a909b43 start_thread (libc.so.6 + 0x94b43)
#8  0x00007f2e8a99ba00 __clone3 (libc.so.6 + 0x126a00)

do_patch_instruction is located in kvmvapic.c. It contains a switch block that dispatches on the opcode of the CPU instruction it is given; if the opcode does not match any case, it falls through to the default case and calls libc’s abort().

Whatever NT is doing, it goes through this code and feeds it an instruction it doesn’t understand. The qemu devs had their reasons for calling abort() I suppose.

Comment out the abort(), recompile, and bam: Windows NT boots correctly now!

static void do_patch_instruction(CPUState *cs, run_on_cpu_data data)
{
    X86CPU *x86_cpu = X86_CPU(cs);
    PatchInfo *info = (PatchInfo *) data.host_ptr;
    VAPICHandlers *handlers = info->handler;
    target_ulong ip = info->ip;
    uint8_t opcode[2];
    uint32_t imm32 = 0;

    cpu_memory_rw_debug(cs, ip, opcode, sizeof(opcode), 0);

    switch (opcode[0]) {
    case 0x89: /* mov r32 to r/m32 */
        patch_byte(x86_cpu, ip, 0x50 + modrm_reg(opcode[1]));  /* push reg */
        patch_call(x86_cpu, ip + 1, handlers->set_tpr);
        break;
    case 0x8b: /* mov r/m32 to r32 */
        patch_byte(x86_cpu, ip, 0x90);
        patch_call(x86_cpu, ip + 1, handlers->get_tpr[modrm_reg(opcode[1])]);
        break;
    case 0xa1: /* mov abs to eax */
        patch_call(x86_cpu, ip, handlers->get_tpr[0]);
        break;
    case 0xa3: /* mov eax to abs */
        patch_call(x86_cpu, ip, handlers->set_tpr_eax);
        break;
    case 0xc7: /* mov imm32, r/m32 (c7/0) */
        patch_byte(x86_cpu, ip, 0x68);  /* push imm32 */
        cpu_memory_rw_debug(cs, ip + 6, (void *)&imm32, sizeof(imm32), 0);
        cpu_memory_rw_debug(cs, ip + 1, (void *)&imm32, sizeof(imm32), 1);
        patch_call(x86_cpu, ip + 5, handlers->set_tpr);
        break;
    case 0xff: /* push r/m32 */
        patch_byte(x86_cpu, ip, 0x50); /* push eax */
        patch_call(x86_cpu, ip + 1, handlers->get_tpr_stack);
        break;
    default:
        /* was abort(); leave unrecognized instructions unpatched */
        break;
    }

    g_free(info);
}

There is another workaround for this: don’t use KVM and just use qemu’s TCG emulation, which is likely slower. Otherwise you have to limit yourself to one CPU if you want KVM.
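
Concretely, the two stock choices look like this; the disk image name and memory size are placeholders:

# Workaround: pure TCG emulation. SMP works, but everything is slower.
qemu-system-i386 -accel tcg -smp 2 -m 256 -hda nt4.img

# Or keep KVM but give NT a single CPU (with the uniprocessor HAL),
# which avoids the VAPIC patching crash.
qemu-system-i386 -accel kvm -smp 1 -m 256 -hda nt4.img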

One last problem with Windows NT 4.0: the multiprocessor HAL, unlike the uniprocessor HAL, does not send a HLT instruction to idle CPUs. That means, as in the Windows 95 days, your CPU will burn cycles doing nothing.

This person discusses the issue and resolved it by disassembling hal.dll and adding the HLT instruction. He offers a replacement version of hal.dll, but I did not have luck with it (the system boots and sends the HLT as expected, but it’s extremely sluggish).

Maybe I’ll investigate further. Or maybe not, it’s not that important.