Archive for the 'Technology' Category

HOWTO: Run Proxmox 6.3 under VMware ESXi with networked guest instances

One of the machines in my production homelab is an ESXi server, long in the tooth now and upgraded repeatedly since the 5.x days.

I keep a lot of legacy VMs on it, including copies of every version of Ubuntu, Fedora, Slackware, Debian, CentOS and hundreds of others. It’s invaluable to be able to spin up a test machine running any OS, at any capacity, within seconds.

Recently, the need to ramp up fast on Proxmox has moved to the top of my priority list, both for work and for specific customer needs. I don’t have spare bare-metal hardware to install Proxmox on natively, so I have to spin it up as a guest under my existing VMware environment.

The problem is that running one hypervisor as a guest under another requires some specific preparation, so that the networking of the nested guest has its packets correctly and cleanly routed through the parent host’s physical network interfaces.

Read on for how to configure this in your own environment!

VMware ESXi

In VMware ESXi, there are a few settings you need to adjust to enable “Promiscuous Mode”, “Forged Transmits” and “MAC Changes”. These are found under the “VM Network” section of your ESXi web UI:
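If you’d rather script the same change from the ESXi shell, here’s a minimal sketch using esxcli. It assumes the policy is applied at the vSwitch level on a switch named vSwitch0 (substitute your own switch or port group), and it’s worth double-checking the exact flag names against your ESXi version:

# Allow promiscuous mode, forged transmits and MAC changes on vSwitch0
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true \
    --allow-mac-change=true

# Confirm the policy took effect
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0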

Once you’ve made these changes, you need to restart your VMware host in order to enable them for newly-created VMs under that host.

VMware Workstation

If you’re running VMware Workstation instead of ESXi, you need to make sure the ‘vmnet’ devices in /dev have the correct permissions to permit enabling promiscuous mode. You can do that with a quick chmod 0777 /dev/vmnet*, or you can adjust the VMware init script that creates these nodes. Normally these permissions would be set with ‘udev’ rules, but those rules run before VMware’s startup, so any changes are overwritten by VMware’s own automation.

In /etc/init.d/vmware, make the following adjustment:

vmwareStartVmnet() {
   vmwareLoadModule $vnet
   "$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
   chmod 666 /dev/vmnet* # Add this line
}
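After saving that change, restart the VMware services (with no VMs running) so the device nodes are recreated with the relaxed permissions, then verify the result. A quick sketch:

sudo /etc/init.d/vmware restart
ls -l /dev/vmnet*    # should now show rw permissions for all users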

Now that you have your host hypervisor configured to support nested guest hypervisors, let’s proceed with the Proxmox installation.

Download the most-recent Proxmox ISO image and create a new VM in your VMware environment (ESXi or Workstation). Make sure to give your newly created VM enough resources to be able to launch its own VMs. I created a VM with 32GB RAM and 2TB of storage, configured as a ZFS RAIDZ-3 array (5 x 400GB disks). That configuration looks like this:
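For reference, if you ever want to build a comparable pool by hand from the Proxmox shell instead of through the installer, the command has roughly this shape. This is only a sketch; the pool name and device names are hypothetical placeholders for your own virtual disks.

# Create a 5-disk RAIDZ-3 pool (pool and device names are placeholders)
zpool create tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zpool status tank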

Read the rest of this entry »

Thoughts about cheating on Zwift


Let’s talk about cheating for a moment. There, I’ve said it. Throw the tomatoes, the Park wrench or the AppleTV remote at me. Save the criticism for the comment section!

I’m a huge data nerd. Many of you already know that. Having clean and correct data on Zwift only helps us improve as athletes and riders.

I’ve put a LOT of thought into this over the last few years, and I’ve had personal conversations with Eric, Steve Beckett, Jon and others about it, including sharing some of the ideas I’ve had to mitigate it. I’ve read the rants, the promoters, the detractors, all of it, from all sides. So has Zwift HQ.

Forget streaming video of riders, putting trainers on a platform with integrated scales, or building weigh-in equipment into the bikes themselves. When you do that, you kill the enjoyment for others who can’t reach that echelon but still want to “race” on Zwift. You’d be excluding people who might have the ability, but not the means.

So here’s my proposal, a draft I’ve been cooking up for a few years. It aims not only to help curb cheating, but also to increase the adoption of Zwift in local training centers and the LBS, while making sure race events aren’t constrained to KICKR-only or Tacx Neo-only fields.

  1. Begin the distribution of Certified Zwift Engineers (aka “ZCE”). These would initially be the bike mechanics at your LBS. They’re already there, they have the gear, and they’re probably fixing your bike or adding equipment each season anyway. The ZCE would be able to train up on all aspects of Zwift, including app/game configuration and optimizing the experience for the end user. Oh, you have a Dell laptop with an integrated video card? Here are some settings you can apply to make that work for you during crowded group events. They’d also be trained in how to configure and validate bike fit, power meters and sensors that tie back to the machine/device used to run Zwift. Having drop-outs? Here are the tools to identify drop-outs and some workarounds that can help.

    This engages the LBS mechanics and the LBS itself to be a part of the growing Zwift ecosystem, not just as an endpoint for bike upgrades and repair, but as a full, end-to-end solution for building out a Zwift environment for riders.

    Incentivizing those LBS mechanics to become ZCEs then has the potential to ensure that more people come into the shop for bike fits, recommendations, upgrades, etc. I haven’t met a single bike mechanic who hates cycling. They do it because they have a passion for it, and they, like others, want to grow that passion. Who would turn down the chance to learn something new and exciting about their passion?

  2. Those same LBSs that have their mechanics certified as ZCEs can now brand their shop as a “Zwift Certified Training Center” and teach riders how to use Zwift (à la spin class? LBS fondo?). Tactics, like when to drain your power-up so you can pull the next one over that hill. Buying a trainer at Best Buy won’t have the same overall value as buying it at your Zwift Certified LBS, even if Best Buy has it 10% cheaper.
  3. Those same LBS + ZCE combinations can now perform equipment certification and qualification. They can properly calibrate your Power Meter + trainer combination, regardless of what you’re using. Forget trusting Qalvin on your iOS device to calibrate your Quarq PM, or trusting your Garmin Vector pedals to be accurate out of the box; let the ZCE at your LBS (ZBS?) handle that for you.

Trust, but verify, as we say in my field.

They can also do the weigh-in right there at the shop, after calibrating your gear. The output of that now-calibrated Zwift setup and weigh-in is a printed certificate of authenticity covering your bike, trainer, gear and your own fitness.

A piece of paper, so what, you say? But wait, there’s more. What can you do with that?

Printed ON that certificate is a unique code, generated by Zwift itself (this service does not yet exist and would have to be created; more on that in a moment). You would then be responsible for making sure that your gear is not “altered” before or during a race. Alterations like that can be detected (ZwiftPower and formerly ZADA have tools to do this already).

This unique code would be entered before you join a race event, either at the time you sign up, or right as the event starts. It would be entered much like we do for jersey promo codes today. This is your “Zwift Race Number” (ZRN? Too many TLAs yet?).

If your gear is found to be ‘suspect’, you are unable to qualify until you remediate it. Your ZRN is locked, and you can’t use it to enter any ‘official’ race events until you address the issue. To do so, you get one free re-calibration at the ZBS, and they can unlock your ZRN for you; further re-calibrations come at a cost.

So, what’s missing from this approach?

For starters, Zwift does not have the ability to generate these unique codes, nor any way to manage them in your user account record. Yet.

But the scaffolding to enter codes that unlock capabilities is already there. They’d have to design and build that framework, and work with partners to make sure it fits the needs of their own roadmap. It’s not something to be taken lightly, but neither is eSports, or the growing community of cheaters who are going undetected.

They also don’t have a ‘Certification’ program, defined criteria, training modules or anything like that. That curriculum would have to be developed, tested and disseminated amongst the interested LBS/ZBS, training centers, bike mechanics and anyone else who wants to open up their own Zwift Certified Training Center.

But having the certification program begins to create a standard that all trainers and eSports athletes have to adhere to. It’s a great position for Zwift to be in right now: helping to define the standards while, at the same time, increasing their market share by pushing eSports and ZCEs/ZRNs into the LBS.

You, as a potential eSports athlete, would now be held accountable for making sure your own gear is calibrated, your weight accurate and true, and that you manage that ZRN with all the power that comes with it.

As eSports moves up the ladder and starts adding purses for winning, actual financial incentives, sponsorships and team selection criteria, it becomes more and more important to take steps like this.

So sure, throw your streaming camera up there and show people you’re really the 70kg your profile says you are; that’s fine. But if you want to compete in a race that has value, actual impact and financial incentives to win, then grab your trainer and bike, head to your local LBS, get weighed in and certified, and enter that ZRN the next time you want to join those events.

At some regular interval, when you upgrade your gear, bike or power meter, or at the start of a new season, you go back to the LBS/ZBS, schedule an appointment for a bike tune-up and equipment review, and re-certify with your new ZRN, ready to smash those Zwift racing event records online!

I think this has some real potential: it engages the participating LBSs to get on board with certifying Zwift equipment, trainers and power meters, and it also brings them into the fold of eSports.

It’s very unlikely that someone who intends to cheat is going to take all the effort to get their ZRN at their local ZBS, take that gear home, and then alter it to gain an advantage. If they do, there are checks and balances in place to DQ them, invalidate their ZRN until they go back and re-certify, and keep those events clean.

It also helps validate those riders who TRULY want to compete, and will make sure their gear is dialed in.

So let the cheaters go ahead and tinker with their gear, take the effort to certify and then falsify their setup, and get DQ’d. They only embarrass themselves, not Zwift as a growing eSports platform.

Your thoughts? Let’s discuss.

The Enormous Dating Fraud: Match.com, Plenty of Fish, Tinder and OkCupid

The top 4 dating sites out there, Match.com, Plenty of Fish, Tinder and OkCupid, are so completely overrun with fraud now that it’s appalling.

Note: Match.com, Plenty of Fish, Tinder and OkCupid are all owned by the same parent company, along with roughly 40 other dating site properties.

I’ve been a free and paid member of these sites for 8 years (with 3 years off in the middle while I was dating someone). I have spent hundreds of hours poring over profiles, code, APIs, mobile apps and other interactions with these specific sites.

Right from the top, I’m going to strongly suggest you do not give any of these sites your money! Do not subscribe, do not give them a credit card, do not let them bill you, do not give them a single dollar. None.

I’ll break down exactly why below.

Let’s start with the biggest and worst offender: Match.com:

Read the rest of this entry »

HOWTO: Back up your Android device with native rsync

Recently, one of my Android devices stopped reading its memory card. When I opened the device, the microSD card was so hot I couldn’t hold it in my hand, and the battery in that corner of the device had started to swell slightly. I’ve used this device every day for 3+ years without any issues. Until this week.

I also use TitaniumBackup to back up my Android to this external memory card, but since the device can’t read the card, I can’t back it up to the card.

The card itself is fine; it works in my other devices and can be read from the desktop. Other, blank microSD cards can’t be read in the device either, and they similarly overheat within seconds. It’s bad.

Enter rsync, the Swiss-Army Knife of power, to back up my Android device!
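As a quick preview of the idea, here’s a minimal sketch. It assumes you have an rsync binary and a shell on the device (for example via a terminal app on a rooted phone) and an SSH-reachable machine to receive the backup; the host name and paths are placeholders I’ve made up for illustration.

# Push the device's internal storage to a backup host over SSH
# (backuphost and the destination path are hypothetical placeholders)
rsync -av --progress -e ssh /sdcard/ user@backuphost:/backups/android/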

Here’s how:

Read the rest of this entry »

HOWTO: Purge Amazon Echo History with iMacros


This one is quick and easy… Have you ever wanted to go back into your Amazon Echo device and delete the history of all commands you asked Alexa to do for you? All the searches? All the weather requests?

Well, you can… manually from the mobile app, or from the Amazon Alexa Configuration page, but that can take hours, because each card you want to remove takes a minimum of two taps or clicks.

But there’s an even easier way… iMacros!

Load up the iMacros browser extension (Chrome version) (Firefox version) and create a new macro. You can edit it ‘raw’, if you wish, but you want only these lines in your macro:

VERSION BUILD=8970419 RECORDER=FX
TAB T=1
URL GOTO=http://alexa.amazon.com/spa/index.html#cards
TAG POS=1 TYPE=BUTTON ATTR=TXT:More
TAG POS=1 TYPE=SPAN ATTR=TXT:Removecard

Now when you load up the Amazon Alexa Configuration page, you can just launch your macro from iMacros and play it in a loop to progressively delete each and every one of those cards in seconds.

I personally wiped out over 5,000 cards in under 2 minutes with this approach. It works great!

Comment below if you have any luck with it, or modify it in a way that becomes more useful to others.

HOWTO: Run multiple Zwift sessions on the same PC (Windows only)

Many people have asked me to write this up, and I’m happy to be the first person to push Zwift this far with multiple, simultaneous sessions.

I can say with confidence that up to this point, I’m actually the only person who has this working correctly without overwriting or clobbering critical logs and data files. Others have tried some hacky methods, but they all result in instability and data loss (see “What does NOT work, and why” below).

I started this quest because I am working on a product design (“Secret Sauce” to be withheld in this HOWTO) that involves running multiple Zwift sessions on a single, 100% wireless PC, with the only wire being the single power cable to the wall. No USB cables, no video cables, no HDMI cables, no network cables.

Let’s get some general housekeeping out of the way first…

Read the rest of this entry »

HOWTO: Fully automated Zwift login on Mac OS X

Quite a few riders on the Facebook Zwift Riders group have expressed an interest in this, so I decided to take a couple of hours, learn AppleScript and knock this out. Done! (If you’re on Windows, you want this other HOWTO instead.)

What this code does is let you create a single icon that will log you into Zwift with no human interaction needed. It will put in your email and password, click the “Start Ride” button, and away you go!

This also leverages the OS X Keychain to store your Zwift email address and password, so it’s secure, not leaked into the filesystem, and available to any other apps that might need it (ahem, like… Zwift itself!) :D

So here’s how to get it working…

First, we need to create a separate keychain to store the Zwift credentials. You could store them in your main keychain, but I’m a fan of credential separation, so let’s create a dedicated one.
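For illustration, here’s a minimal sketch of doing that from Terminal with the security command. The keychain name, account and service strings are placeholders I’ve made up for this example; the full HOWTO walks through the real setup.

# Create a dedicated keychain for the Zwift credentials (names are placeholders)
security create-keychain -p 'keychain-password' zwift.keychain
# Store the Zwift login as a generic password item
security add-generic-password -a 'you@example.com' -s 'zwift' -w 'your-zwift-password' zwift.keychain
# Later, the AppleScript (or anything else) can read it back
security find-generic-password -a 'you@example.com' -s 'zwift' -w zwift.keychain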

Read the rest of this entry »

HOWTO: Enable Docker API through firewalld on CentOS 7.x (el7)

Playing more and more with Docker across multiple Linux distributions has taught me that not all Linux distributions are treated the same.

There’s a discord right now in the Linux community about systemd vs. SysV init. In our example, CentOS 7.x uses systemd, where all system services are spawned and started.

I am using this version of Linux to set up my own Docker lab host for tire-kicking, but it needs some tweaks.

I also wanted to see if I could use the Docker API from my Android phone, using DockerDroid, which (after configuring this) works famously!

Here’s what you need to do:

  1. Log into your CentOS machine and update to the most-current Docker version. The version shipped in the CentOS 7 repo as I write this post is “docker-1.3.2-4.el7.centos.x86_64”. You want to be using something more current, and 1.4 is the latest. To fetch that (and preserve your existing version), run the following:
    $ su -
    # cd /bin && mv /bin/docker /bin/docker.el7
    # wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
    # chmod +x /bin/docker    # the downloaded binary is not executable by default
    # systemctl restart docker
    # exit
    $ 
    

    Now you should have a working Docker with the right version (current). You can verify that:

    $ sudo docker version
    Client version: 1.4.1
    Client API version: 1.16
    Go version (client): go1.3.3
    Git commit (client): 5bc2ff8
    OS/Arch (client): linux/amd64
    Server version: 1.4.1
    Server API version: 1.16
    Go version (server): go1.3.3
    Git commit (server): 5bc2ff8
  2. So far, so good! Now we need to make sure firewalld has a rule to permit the Docker API port (4243/tcp) to be exposed for external connections:
    $ sudo firewall-cmd --zone=public --add-port=4243/tcp --permanent
    $ sudo firewall-cmd --reload
    success
    

    You can verify that this new rule was added, by looking at /etc/firewalld/zones/public.xml, which should now have a line that looks like this:

    <port protocol="tcp" port="4243"/>
  3. Now let’s reconfigure Docker to expose the API to external client connections, by making sure the OPTIONS line in /etc/sysconfig/docker includes the tcp:// listener (the final -H argument below is the addition):
    OPTIONS=--selinux-enabled -H fd:// -H tcp://0.0.0.0:4243
    
  4. Restart the Docker service to enact the API on that port (if successful, you will not see any output):
    sudo systemctl restart docker
  5. To test the port locally, install telnet and then try telnet’ing to the port on localhost:
    $ sudo telnet localhost 4243
    Trying ::1...
    Connected to localhost.
    Escape character is '^]'.
    
    HTTP/1.1 400 Bad Request
    
    Connection closed by foreign host.

    All looks good so far! (For a quick way to exercise the API itself from the command line, see the curl sketch after this list.)

  6. Lastly, install DockerDroid and configure it to talk to your server on this port:

    DockerDroid connecting to CentOS via API

  7. Now you should be able to use DockerDroid to navigate your Images, Containers and API.

    Good luck!
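As an extra check beyond telnet and DockerDroid, you can also exercise the remote API directly with curl. A minimal sketch, assuming your CentOS host answers on 192.0.2.10 (substitute your own address):

# Ask the Docker daemon for its version over the newly exposed TCP port
curl http://192.0.2.10:4243/version
# List running containers via the remote API
curl http://192.0.2.10:4243/containers/json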

Tuesday Tip: rsync Command to Include Only Specific Files

I find myself using rsync a lot: for moving data around, for creating backups using rsnapshot (yes, even on Windows!), and for mirroring public Open Source projects and repositories.

I used to create all sorts of filters and scripts to make sure I was getting only the files I wanted and needed, but I found a better way, and it wasn’t exactly intuitive.

--include="*/" --include="*.iso" --exclude="*"

In order for this to work as intended, the “include” patterns have to come before the “exclude” pattern. This is because the first pattern that matches a given file or directory is the one that gets applied. If your intended filename matches the exclude pattern first, it gets excluded from the transfer.

When dealing with a very large, possibly unknown remote directory structure, you either have to include all of the remote subdirectories individually like this:

--include="/opt" --include="/var" --include="/home"

Or you can use the following syntax to include all directories (not files) in the scope:

--include="*/"

Once you’ve included every directory below your target scope, you can pass the filespec you’re interested in (in this case, I wanted every bootable ISO file from a remote CentOS mirror), and then you exclude everything else that doesn’t match that filespec. It looks like this:

1.) Include every directory:

--include="*/"

2.) Include *.iso as your intended matching scope:

--include="*.iso"

3.) Exclude everything else:

--exclude="*"

That’s the magic sauce.
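Putting it all together, a full command looks something like this. It’s a sketch: the mirror URL and destination directory are placeholders, and the -m flag (prune empty directories from the result) is an optional nicety I like to add.

# Dry-run first: mirror only the .iso files from a remote CentOS mirror (placeholder URL)
rsync -avm --dry-run --stats \
    --include="*/" --include="*.iso" --exclude="*" \
    rsync://mirror.example.com/centos/7/isos/x86_64/ ./centos-isos/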

Some of these options and the order they appear in may seem very non-intuitive, so please read the rsync documentation carefully, paying specific attention to the “EXCLUDE PATTERNS” section of the docs.

When in doubt, always use “--dry-run --stats” to check your work before copying or modifying any data.

Measure twice, cut once.

HOWTO: Run boot2docker in VMware Fusion and ESXi with Shipyard to Manage Your Containers

This took me a while to piece together, and I had to go directly to the maintainers of several of these components to get clarity on why some things worked while others did not, even when following the explicit instructions. Here, I present the 100% working HOWTO:

I started with a post written by someone on the boot2docker project page, describing how to get this working in VMware. But they missed some crucial steps, and the syntax is wrong. Also, Shipyard has moved to a new version, and the method of starting the containers is very different from the steps posted.

Creating the boot2docker VM Instance

First, we need to create a VM inside VMware Fusion and/or ESXi. If you’re using VMware Workstation, the steps are roughly the same, but the screenshots may differ slightly.

You’ll create a new VM, and add two NICs and a single IDE HDD to the VM. Something like 10GB should be fine to hold all of your containers, build scripts and any other persistent data you might need. Follow the screenshots below for some specifics and details. There are a few subtle tweaks you’ll need to maximize your boot2docker VM.

Read the rest of this entry »
