Archive for the 'Technology' Category

HOWTO: Using your GoPro Hero8, 9 and 10 as a webcam on Linux over USB-C WITHOUT the Media Mod. Now you can!

This was a challenge/goal of mine and true to form, I rarely give up until I’ve figured something out, or coded a workaround. As Mark Rober so eloquently said, 2m10s into this video about the NICEST Car Horn Ever:

“The good news is that as an engineer, if something isn’t exactly how you want it… you just make it exactly how you want it.”

Some points straight from the start:

  • The GoPro does not support connecting TO WiFi networks. Full stop. Period. It can’t be a wireless client, only an AP.
    • The hardware is fully capable of connecting to existing WiFi networks but GoPro restricts this, and there’s no foreseeable way around it without hacking into the firmware and reflashing it with a replacement.
    • You can connect TO the GoPro as it presents its own WiFi network, but you cannot connect your GoPro to any existing WiFi network in range.
    • A $12 smart plug can connect to an existing WiFi network, but a $400 action cam from GoPro cannot. #facepalm
  • To connect the GoPro via HDMI directly, you need their $79 Media Mod hardware.
    • It’s essentially a frame that wraps around the GoPro, and exposes USB-C, micro-HDMI and an audio port for the device.
    • All of these are accessible via the single USB Type-C port that the Media Mod docks into. A USB Type-C to micro-HDMI adapter will not work when plugged into the bare GoPro USB Type-C port.
    • I tried 3 different models; all of them were rejected and ignored.
  • GoPro Hero 5, 6 and 7 models supported native HDMI out. The Hero 8, 9 and 10 do not.
    • You need the additional Media Mod to get native HDMI out.
    • Anything greater than 1080p will require HDMI out.

So let’s dive in and get this working!

The first thing you’ll need is a USB cable. This can be a native USB Type-C to USB Type-C cable, or USB Type-C to USB Type-A, whatever your specific hardware (laptop or PC) requires.

You’ll want this cable to be relatively long, if you’re using this as a webcam, so you can position it where you need it, without being limited by cable length. There are an infinite number of choices and colors for these cables on Amazon and other retailers.

Just make sure you get a good quality, shielded cable for this purpose.

Once you have that, you’ll need to open the battery door and pop it off, or alternatively leave it ajar with the battery inserted. I don’t like the door hanging half-off at a 45-degree angle, so I pried mine off. Since I also own the Media Mod, this is something I’ve already done 100 times.

To remove the door, you just open it all the way until it won’t go any higher, then give it a gentle twist from the front of the camera to the back, and it will pop off the hinge. If you’re used to how a Garmin watch band is removed from the watch face, it’s similar to that.

Next, you’ll want to go into the GoPro settings and make sure the connection type is not “MTP” (used when mounting your GoPro as a “storage” device to retrieve photos and videos from it). We’re not doing that here, so swipe up to enter the menus, then go to Settings → Connections → USB Connection → GoPro Connect.

Now let’s make a quick change to your host’s networking to support giving this device a DHCP address when you connect it to your machine. To do that, you’ll use one of the following constructs:

If using netplan, your configuration should look something like this, in a new file called /etc/netplan/02-gopro.yaml:

network:
  version: 2
  renderer: networkd
  ethernets:
    usb0:
      dhcp4: yes

If you’re using the legacy ifupdown style configuration, you’ll want to add the following to /etc/network/interfaces:

auto usb0
iface usb0 inet dhcp
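
On RPM-based distributions there is no /etc/network/interfaces; the closest equivalent is an ifcfg file. Here’s a minimal sketch, assuming the traditional network-scripts layout:

# /etc/sysconfig/network-scripts/ifcfg-usb0 (sketch for RPM-based distros)
DEVICE=usb0
BOOTPROTO=dhcp
ONBOOT=yes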

To activate that, you can do a sudo netplan apply and it will render that configuration and restart systemd-networkd for you to acquire a DHCP lease when the camera is plugged in via USB. For legacy ifupdown, you’ll want to just restart your networking service with systemctl or service.
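
For reference, the commands are simply:

sudo netplan apply                  # netplan: render config and restart systemd-networkd
sudo systemctl restart networking   # legacy ifupdown (Debian/Ubuntu service name)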

When you do plug your camera in, you should see something like the following in dmesg:

[20434.698644] usb 1-1.4.2: new high-speed USB device number 24 using xhci_hcd
[20434.804279] usb 1-1.4.2: New USB device found, idVendor=2672, idProduct=0052, bcdDevice= 4.04
[20434.804283] usb 1-1.4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[20434.804284] usb 1-1.4.2: Product: HERO9
[20434.804285] usb 1-1.4.2: Manufacturer: GoPro
[20434.804285] usb 1-1.4.2: SerialNumber: Cxxxxxxxxxx123
[20434.811891] cdc_ether 1-1.4.2:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1.4.2, CDC Ethernet Device, 22:68:e2:ca:88:37

If you plugged in your camera and netplan/ifupdown assigned it a DHCP lease, running ip a s usb0 should now show something like:

15: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether aa:80:23:e6:6d:9a brd ff:ff:ff:ff:ff:ff
    inet 172.23.141.54/24 brd 172.23.141.255 scope global dynamic usb0
       valid_lft 864000sec preferred_lft 864000sec
    inet6 fe80::a880:23ff:fee6:6d9a/64 scope link 
       valid_lft forever preferred_lft forever

You’re almost there!

Now we need to check out an upstream GitHub repository that includes some helper scripts/code to bring this camera online, attach a running ffmpeg process to it, and begin streaming it to a device in /dev/.

Clone the gopro_as_webcam_on_linux repository somewhere on your machine that will persist (not in /tmp which gets purged at each new boot).

$ git clone https://github.com/jschmid1/gopro_as_webcam_on_linux
Cloning into 'gopro_as_webcam_on_linux'...
remote: Enumerating objects: 78, done.
remote: Counting objects: 100% (78/78), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 78 (delta 34), reused 46 (delta 17), pack-reused 0
Unpacking objects: 100% (78/78), 32.04 KiB | 1.69 MiB/s, done.

If you go into that newly cloned repository and run the following command, you should see a new device get created that you can talk to. Make sure to change the IP to the one you observed when running the ip a s usb0 command above.

$ sudo ./gopro webcam -i 172.23.141.54 -a -n 
Running GoPro Webcam Util for Linux [0.0.3]

       Launch Options     
==========================
 * Non-interactive:  1
 * Autostart:        1
 * Preview:          0
 * Resolution:       3840p
 * FOV:              linear
 * IP Address:       172.23.141.54
==========================

v4l2loopback is loaded!
v4l2loopback was unloaded successfully.
v4l2loopback was successfully loaded.

Further down in the output, just before it begins to launch ffmpeg and start encoding the stream, you should see two lines that look like this:

[swscaler @ 0x56073e173740] deprecated pixel format used, make sure you did set range correctly
Output #0, video4linux2,v4l2, to '/dev/video42':

That /dev/video42 is the important part for your streaming tools. If you use Open Broadcaster Software (OBS), that’s the camera device you’ll want to connect to when you add a “Video Capture Device” source to your scene.
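
If you’re ever unsure which /dev/video node the loopback device landed on, v4l2-ctl from the v4l-utils package will list every video device by name:

v4l2-ctl --list-devices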

If you want to preview/play this stream that ffmpeg is creating for you, you can use mplayer to do that, as follows:

mplayer tv:// -tv driver=v4l2:device=/dev/video42:outfmt=mjpg

This will open a new, live, preview window you can use to fine-tune your scene, layout and positioning of the camera.

I have my camera sitting on an extremely long microphone boom arm with an extra ‘forearm’ to reach up and over my monitors from behind (many thanks go to ATARABYTE for this idea and the link to the Heron 5ft Articulating Arm Camera Mount [note: affiliate link to give her credit]).

So that’s it!

Once you have the camera on the network with an IP you can route to, you can use the gopro script from Joshua Schmid’s GH repository to create a device that can then be used by ffmpeg or OBS to stream that camera’s feed to other locations.

Keep in mind, there is a slight 0.5s delay, but it’s not terrible. If you’re recording the stream, you can line the audio back up using OBS’s latency/delay features. If you’re using this live, I would recommend not using it as your front-facing camera, only as a secondary/backup or overhead cam (my specific use case is as an overhead cam for close-up work).

The GoPro, being an “Action Cam”, also does not have autofocus, so anything closer than about 12″ will start to blur out. If you need autofocus, consider using a mirrorless camera or DSLR and a decent lens. To connect that to your Linux machine, you can use an Elgato Cam Link 4K device. I use this as well with my main streaming setup, and it works fantastically on Linux, with no drivers or setup required.

HOWTO: Remove Burn-in and Ghosting on your LCD Panel

With the pandemic continuing and no real end in sight, those of us who work from home are putting in extended hours at work, and extended hours on our computers and office equipment, which is hard on both our eyes and our monitors.

My monitors have started to develop burn-in and ghosting on parts of the screen where I keep common apps running like my browser windows, chat apps and other services.

Modern monitors lack the “degaussing” feature that was popular years ago on older CRT monitors to help remove those ghost images.

Fear not! There’s still a way to help remove ghosting, and it doesn’t involve running a screensaver! You can run an “LCD scrub” behind your windows as you work (or in a cron job, systemd unit or other time-based scheduler). Here’s how!

First, install the xscreensaver-data-extra package from your favorite package archive. In Ubuntu, that’s a simple sudo apt install xscreensaver-data-extra command. Then, run the following one-liner to target your ‘root’ window (Desktop):

/usr/lib/xscreensaver/lcdscrub -random -noinstall -window -window-id \
   $(xwininfo -root -tree | grep Desktop | awk '{print $1}') -spread 10

This will run the lcdscrub utility in your ‘root’ display, scrubbing the screen behind all of your current windows. Of course, you can close or move those windows around to make it more effective.
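
To run it on a schedule, as mentioned above, a crontab entry is enough. Here’s a sketch, assuming your X session is on display :0 (adjust DISPLAY and XAUTHORITY to match your setup):

# Scrub the full screen for 10 minutes every night at 2am
0 2 * * * DISPLAY=:0 timeout 600 /usr/lib/xscreensaver/lcdscrub -root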

Here’s a quick video showing how this works:

lcdscrub saving my desktop display from ghosting and burn-in

Running this on a regular basis is a good idea, so you can avoid the kind of burn-in that can prematurely age your display. Since it may be difficult (or expensive) to replace monitors during chip and electronics shortages, you’ll want to extend their life as long as you can before replacement inventory becomes available.

Using an Elgato Stream Deck XL for Desktop and Livestream Productivity

I recently rebooted my home office to support a lot more professional, studio-quality AV.

This included moving away from the onboard laptop webcam to a dedicated USB webcam for better quality. I chose the Logitech BRIO Ultra HD because it could do 1080p as well as 4k if needed, and had a very wide FOV. I also moved from a USB microphone to an XLR microphone. I originally started with an Audio-Technica AT2020USB (Cardioid Condenser) USB mic, but moved over to the XLR version of the same mic, the Audio-Technica AT2041SP (Condenser) mic.

At the same time, I added an audio mixer, a Mackie ProFx6 v3. I originally bought the Focusrite Scarlett 18i8 (3rd Gen), but I could never get it to do anything at all, and even with a TRITON Audio FetHead attached, the audio was so low and so full of hissing background noise that it was unacceptable.

So that covered audio and video, but I needed a better way to present these into my meetings, work and personal, using Zoom, Teams, Google Meet and other tools.

Enter OBS, the Open Broadcaster Software. Since I use Linux, a lot of the free and commercial Windows alternatives were off the list, so OBS it was. With OBS, I can connect my cam to my Linux laptop and create a ‘virtual’ camera, shared out via OBS, that I can then configure in each video app I need. This allows me to add ‘scenes’ (more on this in another blog post) as well as overlays and other features to my video feed.
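
Under the hood, that virtual camera is a v4l2loopback device. If you ever need to create one by hand, this is roughly it (a sketch; the card_label is arbitrary, and exclusive_caps=1 is what makes browsers treat it as a real webcam):

sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1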

Currently, I have several cams set up, and can switch between them with a single keystroke, mouse click, or (as I’ll explain shortly), with a button-press on my Elgato Stream Deck XL. I have these set up with captions, an active clock (ticking each second visibly on my camera) and some other features.

I still use the Logitech Brio camera, but it’s now a secondary cam, replaced by a new primary cam, the Sony A6100.

I bought this camera with the kit lens, and quickly realized I needed a better f-stop than the kit lens had. I wanted to go down to f/1.4 or f/1.2, from the f/3.5-5.6 that the kit lens had. I upgraded that lens to a Sigma 16mm f/1.4 about a week later, and I couldn’t be happier with the results. It’s shockingly crisp and the AF (AutoFocus) is the fastest I’ve seen on a lens in this class.

There’s much more to my environment I’ll talk about later, but these are the main pillars of my tools and studio.

I don’t have the mic going through OBS at the moment, but that’s coming soon. Once I do that, I can do some pre-processing of the audio and clean up background noise, increase gain and make the sound quality MUCH better.

That’s the high level change: Upgraded cams, mic, added a mixer, routed it all through OBS and manage it there. There’s so much that can be done with OBS, and I’ll do a whole series on that later.

Now let’s talk about how I’ve incorporated the Stream Deck XL into my workflow. This is normally a Windows/Mac-only device, with dedicated software for those platforms. That won’t work for me, since I use Linux for everything. I found that Timothy Crosley‘s project ‘streamdeck-ui‘, written in Python, does almost exactly what the Elgato native software does, with some additional features that Elgato doesn’t have. It was drop-in simple to get up and running.
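
Getting it installed was roughly this (a sketch; check the project’s README for the current package name and the udev rules needed for device access):

pip3 install --user streamdeck-ui
streamdeck   # launches the configuration UI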

The XL has 32 buttons on its face, and with streamdeck-ui, I can have up to 10 pages of actions for those buttons, giving me a total of a whopping 320 possible buttons/actions to choose from.

I started configuring the first page for the most-used actions I would need with OBS, including:

  • Launch OBS itself
  • Open my normal work/desktop productivity apps, including Mattermost, Slack, Telegram, IRCCloud, Discord
  • Open media apps I need; Google docs/sheets/drive, Spotify, Pandora, Dropbox and others

Then the other fun began. I wanted a way to target specific applications and stuff keystrokes into those apps. The first need was to be able to DM any of my teammates with a single button press.

I needed to find a way to “find” the Mattermost (or Slack) window on the desktop, target that window, raise it, then send keystrokes to that window, for example /msg SuperManager Good morning! I have a question...

To do this, I needed to create a shell script as a wrapper around wmctrl, xwininfo, and xdotool to do what I needed. I had to create a second script, similar to the first, to target specific public channels in Mattermost. Each of these is subtly different; one uses /msg and the other uses /join ~$channel before stuffing in keystrokes for actions.

Here’s an example:

#!/bin/bash
# Raise the Mattermost window, then type a /msg command to open a DM
# with the user passed in as $1 (e.g. @SuperManager).

dm="$1"
wmctrl -xa Mattermost
mm=$(xwininfo -root -tree | awk '/Mattermost/ {print $1; exit}')
xdotool windowfocus "${mm}" type "/msg ${dm}"
xdotool windowfocus "${mm}" key KP_Enter

I can then call that from a streamdeck-ui button action with: mm-dm @SuperManager and it will find and open Mattermost, target that private conversation window, and I can start typing away.

The next extension of this, of course, was to create custom buttons on the Stream Deck itself for each member of my team. I have a separate page (32 buttons) with photos from our internal corporate directory, one photo per button, for each member of my team, cross-teams, management and so on. A single press on their photo will find Mattermost, target that window, and begin a DM with them.

It’s the Stream Deck equivalent of a visual phone directory.

I also created forward/back buttons for switching between pages on the XL, which you can see here in the screenshot below. I have the ‘Switch Page’ action configured to switch to the previous or next pages, as needed. On Page 1, there’s only one button there, ‘Next Page’, which switches to Page 2. On all other pages, it goes forward or back, and page 10 wraps back to Page 1.

I also have the XL set up for my streaming environment, Govee Lyra and Govee Aura lights that live behind me on camera, uxplay for using my iPad Mini as a ‘lightboard’ during meetings. Here’s an example of how this looks with an actual glass lightboard.

I figured out how to do this without any of the complexities of actual glass, markers or extra hardware. Just Linux, OBS, my iPad, Stream Deck and uxplay. Works fantastic! I’ll do a whole post on that later.

The most recent addition I figured out, literally this afternoon after fumbling to find my active Meet window and mute my mic, was how to use a single button on the Stream Deck XL to mute and unmute my Google Meet calls. It’s similar to the way I target my Mattermost (Slack, Telegram, IRCCloud, etc.) windows, but instead I target the top-most Google Chrome window that has Meet running in it. The script, tied to a ‘mute/unmute’ button, looks like this:

#!/bin/bash
# Raise the Chrome window running Google Meet and send Ctrl+D, Meet's mic toggle.

wmctrl -xa Chrome
chrome=$(xwininfo -root -tree | awk '/Meet .* Google Chrome/ {print $1; exit}')
xdotool windowfocus "${chrome}" key "ctrl+d"

That’s it. I configured streamdeck-ui with a single button press to toggle that on and off.

So that’s it for now: some great ideas for using a Stream Deck XL along with your regular desktop apps and productivity tools, to squeeze even more productivity out of your environment.

Deploying Firefox and Thunderbird Policies to Prevent auto-updates and Tune Other Features

Long-time Firefox and Thunderbird user here. I’ve tried dozens and dozens of other browsers, including the much-lauded Google Chrome, but always come back to Firefox. It’s just much faster, lighter on memory, 100x more feature-rich, more flexible and more secure than the alternatives. Chrome, by comparison, is slow, an extreme memory hog, has a questionable security model, and lacks the powerful features I’ve come to use over the years.

I tend to run the latest “Developer” or “Nightly” editions of these tools, and by doing so, I agree to certain constraints (daily, enforced upgrades being one example), but with that sometimes comes product changes that cause new, undiscovered issues, breakage and undefined behavior.

My Thunderbird mail folders for example, go back 20 years and contain well over 200,000 archived and active emails. I’ve purged all of the garbage, junk, unnecessary emails as they come in, being a big proponent of Merlin Mann’s “Inbox Zero” methodology for almost 15 years, but it’s important that mail be available and accessible on-demand. Something that breaks my ability to read an IMAP folder or search across those folders and tags, would not be good.

Enter Policies!

With policies deployed, you can govern what behavior is turned on, off and supported by your Firefox browser or Thunderbird mail client. For Firefox, there’s an easy add-on called “Enterprise Policy Generator” written by Sören Hentzschel that I use to start off the policies I’m interested in. Here’s just a small sample of what’s available in the tool:

Two of the first items I turn off are the use of “Pocket” and the constant daily upgrade notices. I do upgrade frequently, but I make sure I back up my profile, add-ons and browser data before testing an upgrade, so I have a means to downgrade if the new version breaks my add-ons or my use of the browser. To do that, you can create a policy that disables these with the EPG, or you can just create a policies.json and add the following to it:

{
    "policies": {
        "DisableAppUpdate": true
    }
}

This will stop the browser from requesting updates on a daily basis. There is a preference in Firefox under about:config called app.update.auto which can be set to “False”, but it doesn’t work. Likewise, blanking out the app.update.url in the same configuration pane does not work either. The only way to do this is to deploy a policy that forbids it.

The policies.json file has to go into a specific directory in the application directory, not the user’s profile (where it could be altered or modified by each user). Here’s where those need to go:

On macOS

/Applications/Firefox Developer Edition.app/Contents/Resources/distribution

On Linux

If you’re using packages:

/usr/lib/firefox/distribution

If you’re using the tarball or nightly releases:

/opt/firefox/distribution

On Microsoft Windows

C:\Program Files\Firefox Developer Edition\distribution

The important part is that it lives in a new directory called distribution inside the same directory that holds the main Firefox data files. You’ll need to create this directory if it doesn’t already exist. For Thunderbird, the process is similar, just a slightly different directory:

On macOS:

/Applications/Thunderbird.app/Contents/Resources/distribution

or

/Applications/Thunderbird Daily.app/Contents/Resources/distribution

Follow the same model and paths you did with Firefox for Linux and Microsoft Windows.
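
Once the directory exists, deployment is just a copy. For example, on a packaged Linux Firefox (paths as above):

sudo mkdir -p /usr/lib/firefox/distribution
sudo cp policies.json /usr/lib/firefox/distribution/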

You’ll know you put the policies.json in the correct directory if you close and relaunch your Firefox or Thunderbird client, go to Help -> About, and see a notice near the top of the About dialog that the application is being managed by your organization.

Here is a copy of an expanded policies.json that I use on my production systems:

{
  "policies": {
    "DisableAppUpdate": true,
    "DisableFeedbackCommands": true,
    "DisableFirefoxStudies": true,
    "DisablePocket": true,
    "DisableSystemAddonUpdate": true,
    "DisableTelemetry": true,
    "ExtensionUpdate": false,
    "NetworkPrediction": true,
    "Preferences": {
      "browser.fixup.dns_first_for_single_words": true,
      "browser.tabs.warnOnClose": true
    },
    "PromptForDownloadLocation": true
  }
}

You can use this for both Firefox and Thunderbird.

If you want a full breakdown of every possible policy item, you can visit the Mozilla Policy Templates GitHub page for detailed explanations.

While we’re on the subject of GitHub, you might also want to investigate using Git to manage these policies and configurations, so you can easily deploy them across the multiple machines where you use your browser or mail client.
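
A minimal sketch of that workflow (the repository name here is arbitrary):

git init ~/browser-policies
cp policies.json ~/browser-policies/
cd ~/browser-policies
git add policies.json
git commit -m "Baseline Firefox/Thunderbird policy"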

Hope that helps. Good luck!

Converting SuperMicro BMC Sensor Temperatures from Celsius to Fahrenheit

If you’ve ever used a SuperMicro BMC before, you’ve no-doubt seen the temperatures section under Server Health => Sensor Readings. These are always expressed in Celsius, but sometimes you want to quickly convert those to Fahrenheit so you can compare them with other data/sensors.

Enter Tampermonkey! I’ve been using Tampermonkey under Firefox for the last few years to re-skin/re-theme Salesforce, Greenhouse and a half-dozen other sites I use, some of them in very extreme ways, adding features and functions that the parent site itself doesn’t have or support.

In this case, this is a very simple snippet that will parse the sensor table and convert the Celsius values to Fahrenheit for you, just by loading the page. The code is:

// ==UserScript==
// @name           SuperMicro Sensor Conversion
// @namespace      https://192.168.4.50/
// @description    Convert the SMC Sensor outputs to Fahrenheit vs. Celsius
// @include        /^https?://192.168.4.50/.*$/
// @author         setuid@gmail.com
// @version        1.00
// ==/UserScript==

'use strict';

// Give the sensor table a moment to render, then rewrite each cell in place.
setTimeout(() => {
    document.querySelectorAll('div[id="HtmlSensorTable"] > table > tbody > tr > td').forEach(node => {
        if (node.innerText.includes(' degrees C')) {
            const temp = node.innerText.match(/(\d+) degrees C/)[1];
            const fah = (parseInt(temp, 10) * 9 / 5 + 32).toFixed(1);
            node.innerText = node.innerText.replace(/(\d+) degrees C/, `${fah}° F`);
        }
    });
}, 500);

I tuned that a little more by adding the degree symbol in place of the word ‘degrees’, as you can see in the replacement string above.

It could be refined even further, targeting the inner iframes that this table resides in, or converting to React, but this was a quick 30-minute hack to solve a specific need I had.

Note, you can also get these same temperature values programmatically via the Redfish API, if your chassis is properly licensed to permit it.
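
A sketch of what that looks like with curl and jq, assuming your BMC answers on the standard Redfish thermal endpoint (credentials and chassis ID will vary by system):

curl -sk -u ADMIN:PASSWORD https://192.168.4.50/redfish/v1/Chassis/1/Thermal \
    | jq '.Temperatures[] | {Name, ReadingCelsius}'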

My homelab gets VERY warm during the day when the gear is running at full tilt, so I picked up a Govee Temp/Humidity sensor [Amazon link, not a referral or affiliate link][Govee main website product link], and it’s been very enlightening, showing me more about the trends in my office than I had visibility into before.

Here’s the last week’s temps and humidity in my office/homelab:

The only downside is that I can’t figure out a way to automate pulling/exporting this data, so I can import it into my Prometheus server and graph it with Grafana. Of note: I just taught myself Prometheus + Grafana tonight while adding all of my servers + UPS into it for monitoring. The UPS took a bit more effort, as it only speaks SNMP. I’ll go into more detail on that in future blog posts.

After nearly 22 years together, I had to let my roommate Monk go.

At 12:45pm today, April 13th 2021, my long-time buddy Monk, my roommate for the last 22 years and many relocations, had to be put down. It was a really rough last couple of days, much more difficult than letting Dart go almost 10 years earlier.

His condition over his last days, really degraded quickly. He went from being a bit ‘stiff’ and difficult to walk over the last few months, to no longer being able to control the entire rear half of his body in his last day.

His last, full night together with us consisted of me picking him up to help him stagger to the litterbox only a few feet away in the bathroom, and holding him up while he urinated and defecated all over himself, and then cleaning him up in the tub right after.

I woke up a couple of hours later, to see him trying to drag himself by his front paws back into his padded bed on the floor, with urine leading back from there to the litterbox. He couldn’t muster enough energy to lift his head much, or even to chew his food or drink water.

His body was so limp, frail, it was hard to hold his weight up while he ate or went to the litterbox, without causing him pain, because he had no real muscle tone left to keep his own bones straight.

It was time; I couldn’t wait any longer without causing him to suffer even more than he already was. He was in fantastic health for those 22 years, with the exception of those last few months. Many tests and prescription diet changes later, I couldn’t stop the slide of his failing health.

My life with Monk was a long, amazing one. I will remember every moment with razor-sharp clarity: from sharing baskets next to Dart, to laying on my back while I slept, to curling up under my arm while I read.

His name was a perfect choice from the start of his life, straight through to the end. He was always watching, inspecting, learning, waiting his turn at the food, water or the window to watch the birds and big world outside.

I couldn’t quite get him to play fetch like Dart, or chase the laser dot as much as other cats, but he had his own, deeply introspective appeal. We’d spend hours together each week in our own “Zen”, just listening to the sounds of nothing, taking in the world, being active observers and participants.

He was the only cat that Seryn had known for her entire life, there before she was born, and there to the end of his days. He would watch her in the crib, curl up around her head when she would sleep, and make sure she was safe, much like a trusted family canine.

He spent some years with Dart, Cooper Coombs, Ashe and Tink. He’s been a friend to all who have met him. He was nothing short of the smartest, most introspective, calm, patient, Buddhist of cats I’ve ever owned.

As an albino cat, he had his share of weird health difficulties starting from the first day we adopted him. He was found by a coworker of my girlfriend at the time, drinking antifreeze out of a driveway in the middle of winter. He was treated for frostbite, and ever since then he would never, ever allow anyone to touch his paws, because they were super-sensitive to touch. He suffered from cat acne later in life, weight gain and loss, gingivitis and several teeth surgically removed, many diet challenges and food allergies, several unexpected surgeries and many tests.

His lack of one of his canine teeth caused a unique “yarr! Pirate Face” as his lips got stuck on his teefs.

He was one-of-a-kind, and the most intelligent, sweetest, friendliest cuddler of a roommate I could have ever asked for.

Monk, you will be missed. I can only hope, if there’s ever another place after this life, that you’ve found Dart in that place, and you’re happily playing Meow-co-Polo with him like you did for so many years.

HOWTO: Run Proxmox 6.3 under VMware ESXi with networked guest instances

One of my machines in my production homelab is an ESXi server, a long-toothed upgrade from the 5.x days.

I keep a lot of legacy VMs and copies of every version of Ubuntu, Fedora, Slackware, Debian, CentOS and hundreds of other VMs on it. It’s invaluable to be able to spin up a test machine on any OS, any capacity, within seconds.

Recently, the need to ramp up fast on Proxmox has come to the front of my priority list for work and specific customer needs. I don’t have spare, baremetal hardware to install Proxmox natively, so I have to spin it up under my existing VMware environment as a guest.

The problem here is that running one hypervisor under another as a guest requires some specific preparation, so that the nested guest’s network packets are correctly and cleanly routed through the parent host’s physical network interfaces.

Read on for how to configure this in your own environment!

VMware ESXi

In VMware ESXi, there are a few settings that you need to adjust, to enable “Promiscuous Mode”, “Forged Transmits” and “MAC Changes”. These are found under the “VM Network” section of your ESXi web-ui:

Once you’ve made these changes, you need to restart your VMware host in order to enable them for newly-created VMs under that host.
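
If you prefer the CLI over the web-ui, the same three switches can be flipped with esxcli. A sketch, assuming a standard vSwitch named vSwitch0:

esxcli network vswitch standard policy security set -v vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true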

VMware Workstation

If you’re running VMware Workstation instead of ESXi, you need to make sure your ‘vmnet’ devices in /dev/ have the correct permissions to permit enabling promiscuous mode. You can do that with a quick chmod 0777 /dev/vmnet* or you can adjust the VMware init script that creates these nodes. Normally these would be adjusted in ‘udev’ rules, but those rules are run before the VMware startup, so changes are overwritten by VMware’s own automation.

In /etc/init.d/vmware, make the following adjustment:

vmwareStartVmnet() {
   vmwareLoadModule $vnet
   "$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
   chmod 666 /dev/vmnet* # Add this line
}

Now that you have your host hypervisor configured to support nested guest hypervisors, let’s proceed with the Proxmox installation.

Download the most recent Proxmox ISO image and create a new VM in your VMware environment (ESXi or Workstation). Make sure to give your newly created VM enough resources to be able to launch its own VMs. I created a VM with 32GB RAM and 2TB of storage, configured as a ZFS RAIDZ-3 array (5 x 400GB disks). That configuration looks like this:


Thoughts about cheating on Zwift


Let’s talk about cheating for a moment. There, I’ve said it. Throw the tomatoes, the Park wrench or the AppleTV remote at me. Save the criticism for the comment section!

I’m a huge data nerd. Many of you already know that. Having clean and correct data on Zwift, only helps us improve as athletes and riders.

I’ve put a LOT of thought into this over the last few years, and have had personal conversations with Eric, Steve Beckett, Jon and others about it, including sharing some of the ideas I’ve had to mitigate it. I’ve read the rants, the promoters, the detractors, all of it, from all sides. So has Zwift HQ.

Forget streaming video of riders, putting trainers on platforms with integrated scales, or integrating weigh-in equipment into the bikes themselves. When you do that, you kill the enjoyment for others who can’t reach that echelon, but still want to “race” on Zwift. You’d be excluding people who might have the ability, but not the means.

So here’s my proposal, a draft that I’ve been cooking up for a few years, which hopes to not only help curb cheating, but also increase the adoption of Zwift in local centers, the LBS, as well as make sure you don’t constrain race events to KICKR or TacX Neo only events.

  1. Begin the distribution of Certified Zwift Engineers (aka “ZCE”). These would initially be the bike mechanics at your LBS. They’re already there, they have the gear, and they’re probably fixing your bike or adding equipment each season already. The ZCE would train up on all aspects of Zwift, including app/game configuration and optimizing the experience for the end user. Oh, you have a Dell laptop with an integrated video card? Here are some settings you can apply to make that work for you during crowded group events. They’d also be trained in how to configure and validate bike fit, power meters and sensors that tie back to the machine/device used to run Zwift. Having drop-outs? Here are the tools to identify drop-outs and some workarounds that can help.

    This engages the LBS mechanics and the LBS itself to be a part of the growing Zwift ecosystem, not only just as an endpoint for bike upgrades and repair, but a full, end-to-end solution for building out a Zwift environment for the riders.

    Incentivizing those LBS mechanics to become ZCE then has the potential to ensure that more people come into the shop for bike fit, possible recommendations, upgrades, etc. I haven’t met a single bike mechanic who hates cycling. They do it because they have a passion for it, and they, like others, want to grow that passion. Who wouldn’t turn down the ability to learn something new and exciting about your passion?

  2. Those same LBS that have their mechanics certified as ZCE, can now brand their shop as “Zwift Certified Training Center”, and teach riders how to use Zwift (ala spin class? LBS Fondo?). Tactics, when to drain your power-up so you can pull the next one over that hill. Buying a trainer at Best Buy won’t have the same overall value as buying it at your Zwift Certified LBS, even if Best Buy has them for 10% cheaper.
  3. Those same LBS + ZCE, can now perform equipment certification and qualification. They can properly calibrate your Power Meter + trainer combination, regardless of what you’re using. Forget trusting Qalvin on your iOS device to calibrate your Quarq PM or trusting your Garmin Vector pedals to be accurate out of the box, let the ZCE at your LBS (ZBS?) handle that for you.

Trust, but verify, as we say in my field.

They can also do the weigh-in right there at the shop, after calibrating your gear. The output of that now-calibrated Zwift setup and weigh in, is a printed certificate of authenticity of your bike, trainer, gear and your own fitness.

A piece of paper, so what you say? But wait, there’s more. What can you do with that?

Printed ON that certificate is a unique code, generated by Zwift itself (this service does not yet exist, and would have to be created; more on that in a moment). You would then be responsible for making sure that your gear is not “altered” before or during a race. Alterations like that can be detected (ZwiftPower and formerly ZADA have tools to do this already).

This unique code would be entered before you join a race event, either at the time you sign up, or right as the event starts. It would be entered much like we do for jersey promo codes today. This is your “Zwift Race Number” (ZRN? Too many TLAs yet?).

If your gear is found to be ‘suspect’, you are unable to qualify until you remediate your gear. Your ZRN is now locked, and you can’t use it to enter any ‘official’ race events until you address it. To do so, you get one free re-calibration at the ZBS, and they can unlock your ZRN for you; further re-calibrations come at a cost.

So, what’s missing from this approach?

For starters, Zwift does not have the ability to generate these unique codes, nor any way to manage them in your user account record. Yet.

But the scaffolding to enter codes to unlock capabilities is already there. They’d have to design and build that framework, and work with partners to make sure it fits the needs of their own roadmap. It’s not something to be taken lightly, but neither is eSports or the growing community of cheaters who are going undetected.

They also don’t have a ‘Certification’ program, defined criteria, training modules or anything like that. That curriculum would have to be developed, tested and disseminated amongst the interested LBS/ZBS, training centers, bike mechanics and anyone else who wants to open up their own Zwift Certified Training Center.

But having the certification program begins to create a standard, that all trainers and eSports athletes have to begin to adhere to. It’s a great position for Zwift to be in right now, helping to define the standards and at the same time, increasing their market share by pushing eSports and ZCEs/ZRNs into the LBS.

You, as a potential eSports athlete, would now be held accountable for making sure your own gear is calibrated, your weight accurate and true, and that you manage that ZRN with all the power that comes with it.

As eSports moves up the ladder and starts adding purses for winning, and actual financial incentives, sponsorships, team selection criteria, it becomes more and more important to take steps like this.

So sure, throw your streaming camera up there, show people you’re really the 70kg your profile says you are, that’s fine. But if you want to compete in a race that has value, actual impact, financial incentives to win, then grab your trainer, bike and head to your local LBS, get weighed in, certified, and enter that ZRN the next time you want to join those events.

At some regular interval, or when you upgrade gear, bike, power meter, or the start of a new season, you go back to the LBS/ZBS, schedule an appointment for a bike tune-up, equipment review and re-certify with your new ZRN, ready to smash those Zwift Racing Event records online!

I think this has some real potential: engaging the participating LBSes to get onboard with certifying Zwift equipment, trainers and power meters, but also bringing them into the fold of eSports.

It’s very unlikely someone who has the intent to cheat, is going to take all the effort to get their ZRN at their local ZBS, take that gear home, and alter it to gain an advantage. If they do, there are checks-and-balances in place to DQ them, invalidate their ZRN until they go back and re-certify, and keep those events clean.

It also helps validate those riders who TRULY want to compete, and will make sure their gear is dialed in.

So let the cheaters go ahead and tinker with their gear, take the effort to certify and then falsify their gear, and get DQ’d. They only do that to the embarrassment of themselves, not of Zwift as a growing eSports platform.

Your thoughts? Let’s discuss.

HOWTO: Back up your Android device with native rsync

Recently, one of my Android devices stopped reading the memory card. When I opened the device, the microSD card was so hot I couldn’t hold it in my hand. The battery on that corner of the device had started to swell slightly. I’ve used this device every day for 3+ years without any issues. Until this week.

I also use TitaniumBackup to back up my Android to this external memory card, but since the device can’t read the card, I can’t back it up to the card.

The card is fine, works in my other devices, and can be read from the desktop. Other, blank microSD cards can’t be read in the device and similarly overheat within seconds. It’s bad.

Enter rsync, the Swiss-Army Knife of power, to back up my Android device!

Here’s how:
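
The short version is below, as a sketch, assuming a rooted device (or Termux) with an rsync binary on board, pushing to an SSH server on your desktop:

# From a shell on the Android device: push internal storage to the desktop over SSH
rsync -avz --progress /sdcard/ user@desktop.local:android-backup/sdcard/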


HOWTO: Purge Amazon Echo History with iMacros


This one is quick and easy… Have you ever wanted to go back into your Amazon Echo device and delete the history of all commands you asked Alexa to do for you? All the searches? All the weather requests?

Well, you can… manually from the mobile app, or from the Amazon Alexa Configuration page, but that can take hours, because each card you want to remove is a minimum of two taps or clicks.

But there’s an even easier way… iMacros!

Load up the iMacros browser extension (Chrome version) (Firefox version) and create a new macro. You can edit it ‘raw’, if you wish, but you want only these lines in your macro:

VERSION BUILD=8970419 RECORDER=FX
TAB T=1
URL GOTO=http://alexa.amazon.com/spa/index.html#cards
TAG POS=1 TYPE=BUTTON ATTR=TXT:More
TAG POS=1 TYPE=SPAN ATTR=TXT:Removecard

Now when you load up the Amazon Alexa Configuration page, you can just launch your macro from iMacros and play it in a loop to progressively delete each and every one of those cards in seconds.

I personally wiped out over 5,000 cards in under 2 minutes with this approach. It works great!

Comment below if you have any luck with it, or modify it in a way that becomes more useful to others.
