All posts by zero

A geek, poet, and person who refuses to grow up. Most of my behavior comes down to glorious accidents, not strategic campaigns.

Using Ubuntu Touch on a Nexus 5

Well, this is an interesting one. I decided to dig through my old parts bin and came across my much-loved Nexus 5 Android phone. Since I had recently discovered that OpenWRT has RTL-SDR capabilities, I wondered whether I could achieve a similar feat on the Nexus 5. This became an interesting rabbit hole to go down. I have recently been using a Raspberry Pi 3 as a base station for my RTL-SDR hobbies (capturing NOAA satellite passes, attempting to receive SSTV images from the International Space Station, listening to ADS-B traffic, decoding pager traffic, and decoding information on 433 MHz). It works quite well when it is within range of my home network, but away from there it becomes trickier due to the lack of a real-time clock, my propensity for configuring everything through SSH, and not wanting the additional battery draw of a dedicated screen. The Nexus 5 has a screen, a decent battery, the ability to connect to USB devices using an OTG cable, and, most importantly, runs Android (this is where my opportunity starts).

Android is, in my opinion, a great mobile operating system. Unfortunately, for my needs it is too locked down, since I like running non-standard drivers and rely on a tool called AutoWX2 to capture satellites. AutoWX2 is a collection of Python scripts that greatly simplifies capturing satellite passes and, in the downtime between passes, listens to other radio traffic. Keep in mind that all of the following steps are performed from a Linux machine.

The first step in all of this is to get a full Linux system installed on the Nexus 5. This is accomplished easily enough with the UBports installer for Ubuntu Touch, available here. Following the on-screen directions will install Ubuntu Touch on the Nexus 5.

Next up, problem number 1. The root file system for Ubuntu Touch has a size of 2 GB, of which only 130 MB is available for additional software. This presents an issue: the first time we run “sudo apt update && sudo apt upgrade -y && sudo apt dist-upgrade -y”, we fill that 130 MB with apt cache files and downloads, and the phone stops attempting to install updates. This is before we even attempt to install Python, git, build-essential, and the additional software required to get our all-in-one system running.

Looking through various forums turned up possible options for getting a larger root partition; failing that, Ubuntu Touch has Libertine available, which provides a sandboxed Linux environment. All of the options I attempted, rewriting the ubuntu.img or system.img file (either should work, as they are hard links pointing to the same inode) to append additional data to the end and then resizing the filesystem afterwards, failed. I was about to attempt installing the software through Libertine when I had an epiphany: the file system is mounted on boot. With the file system mounted, I cannot change the underlying image, and rebooting after changing it won’t help, because on reboot the system still sees ubuntu.img as 2 GB even though it shows as larger when using “ls -hl /userdata/ubuntu.img”.

The necessary first step is to install a recovery firmware that supports adb connections; for this I used TWRP. To install the recovery, reboot the phone to fastboot by holding volume down + power, and on boot you should see the Android bootloader screen. Once connected, navigate in a terminal to where TWRP was downloaded and issue “fastboot flash recovery twrp-3.3.0-0-hammerhead.img”, changing the twrp part to match the version you downloaded. Once the recovery is flashed, select “Recovery Mode” using the volume up/down keys and use the power button to reboot into it.
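For reference, the flashing side boils down to a couple of commands run from the directory containing the TWRP image (a minimal sketch; the image filename will match whatever version you downloaded):

  fastboot devices                                      # confirm the phone shows up in fastboot mode
  fastboot flash recovery twrp-3.3.0-0-hammerhead.img   # write TWRP to the recovery partition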

Once the recovery is loaded, we want to mount the partition of /dev/block/mmcblk0 that contains our file system; this is the point where we find out where the file system resides. First verify that the phone is recognized as an adb device using “adb devices”. If recognized, you will see “List of devices attached”, a hex number, and “recovery”. You may need to disconnect and reconnect the USB cord a few times before it is recognized as an adb device. By default, the system/ubuntu image is accessible on a mounted partition located at /data/. To get the image off the phone where we can modify it, we again rely on adb, this time using “adb pull /data/ubuntu.img ./”. This will take a few minutes, as the file is 2 GB and is transferring over USB.

Once the transfer is complete, we can resize the image file using dd. To make the image approximately 6 GB, we use “dd if=/dev/null of=./ubuntu.img bs=1M seek=6000 count=0”; to make the file larger or smaller, change the seek parameter to the number of MB you want the image to be. Once this completes (it should be extremely quick), run “ls -hl” to verify that the file has been resized.

Should look like this if using 6000

The next step is to check the filesystem using “e2fsck -p ./ubuntu.img”; the -p switch automatically fixes any errors found. Once the check is complete, we need to ensure the file system is informed about the new size. To do this we use “resize2fs ./ubuntu.img”. Once complete, we use adb to transfer the image back to the phone with “adb push ./ubuntu.img /data/ubuntu.img”.
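Put together, the whole resize on the workstation looks something like this (a sketch assuming a 6 GB target; adjust the seek value to taste):

  adb pull /data/ubuntu.img ./                              # copy the image off the phone
  dd if=/dev/null of=./ubuntu.img bs=1M seek=6000 count=0   # grow the file to roughly 6 GB
  e2fsck -p ./ubuntu.img                                    # check the filesystem, auto-fixing errors
  resize2fs ./ubuntu.img                                    # grow the filesystem to fill the file
  adb push ./ubuntu.img /data/ubuntu.img                    # copy it back to the phone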

When the file has finished transferring, we have one last thing to do: recreate the hardlink from ubuntu.img to system.img. Connect to the phone using “adb shell” and then “cd data”. If we “ls” here, we should see the following.

Almost done

Since we have destroyed the hardlink, we need to remove system.img and then recreate the link. This is done with “rm system.img && ln ubuntu.img system.img”, which first deletes the system.img file and then recreates it as a hardlink to ubuntu.img. This can be confirmed using “ls -lhi”, which lists the files, their sizes in human-readable format, and the inode (where on the disk the file is stored).

On my phone, the inode of both system.img and ubuntu.img is 15 (the first column), showing that both files are the same, as they reside in the same location on the disk. Don’t worry that the size is misreported; that is a limitation of busybox. Upon reboot into the full Linux distro, it is reported correctly.

Once complete, we can reboot the phone normally, set a passcode (needed to enable developer mode), and enable developer mode under the About section in Settings. At this point, any software available in the repositories can be installed on your shiny Ubuntu phone!

After fussing with this setup for multiple days, I came to the realization that autowx2 will not run on the Nexus 5 due to the lack of a 64-bit Python to run pypredict, which relies on pyradiomics. If anyone has suggestions for an alternative automated receiver for weather satellites and SSTV from the ISS, please let me know.

Loftek LK5200 as an RTL-SDR server

The sacrificial offering.

I have had this router sitting in a bag for approximately 4 years. About 2 years ago, I had the idea that it would be a good platform for building a pineapple if it supported OpenWRT, as I had already turned a TP-Link MR3020 into one, and the Loftek has a built-in battery, making it easier to deploy. I went through various attempts to find a way to install OpenWRT on the device, to no avail. I finally came back to the LK5200 today and decided to dig a little deeper. To start, I opened the case.

You can see its bare circuits

An AR9331-AL1A is the processor that this device uses. We know that it is a MIPS-based SoC, and it is very similar to the one in the MR-3020 (exactly the same as the V1.8 of that hardware). This bodes well for me. Before I go and attempt to flash this, though, let’s see what else I can find out about the device. Connecting the router to my network, the defaults are to not use DHCP and instead assign itself a static IP of 192.168.168.1, so I’ll use that address and run an nmap scan on it with “nmap 192.168.168.1 -vv”. The image below shows the results.

Hmmm. These look normal. Almost.

This gives us a listing of all of the open ports. Port 53 is a DNS server (expected), port 80 is the admin interface webserver (expected), ports 139 and 445 are used for Samba shares (expected, as there is a USB port for file sharing on the network), and port 8181 is … wait, what is that? It’s up in the non-privileged range. Most times I’ve come across this port, it’s been another webserver. Let’s try to get to it in the browser.

Odd, it looks like a command prompt and a banner for OpenWRT.

That is interesting. It seems as if the Loftek LK5200 is already running OpenWRT. Let’s try connecting to the same address and port with telnet using telnet 192.168.168.1 8181

Well, that was easier than I thought it would be.

Now I have access to a root shell on the router. The first thing to do is update the package listing, because I don’t believe it has ever been done; this is accomplished with “opkg update”. Once done, I install rtl_sdr with “opkg install rtl_sdr”, which installed all of the tools for using my NESDR Smart as an SDR receiver. One last thing to do is blacklist the original driver built into the kernel. On this device there was no /etc/modprobe.d/ folder, so it had to be created with “mkdir /etc/modprobe.d/”. Then we need to create the blacklist.conf file in the directory we just created. To do this, use echo “blacklist dvb_usb_rtl28xxu” >> /etc/modprobe.d/blacklist.conf . This command puts what is inside the quotes into the file “blacklist.conf” in the directory “/etc/modprobe.d/”.
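The whole router-side setup boils down to a handful of lines over the telnet session (a sketch of exactly what was run above):

  opkg update                              # refresh the package lists
  opkg install rtl_sdr                     # install the rtl-sdr tools (package name as offered on this firmware)
  mkdir /etc/modprobe.d/                   # the directory did not exist on this device
  echo "blacklist dvb_usb_rtl28xxu" >> /etc/modprobe.d/blacklist.conf
  reboot                                   # so the blacklist takes effect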

A simple reboot later, we can plug in our USB SDR stick, log in over telnet as before, and run the rtl_tcp program to feed the data to another device on the network. This is accomplished with rtl_tcp -a 192.168.168.1 . This command effectively creates a server that feeds the data received by the SDR to another machine on the network; the -a switch tells rtl_tcp which address to listen on. Now we can load up our preferred application to view the stream (which for me is GQRX). If it’s your first time loading GQRX, you will be greeted by this screen, which should be filled in thusly.
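In other words, the router runs the server and the client just points GQRX at it. A sketch, assuming rtl_tcp is left on its default port of 1234:

  # On the router
  rtl_tcp -a 192.168.168.1

  # On the client, in GQRX's device configuration, use a device string like:
  #   rtl_tcp=192.168.168.1:1234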

Look at all the numbers!

After clicking OK, you may then press the play button and search the waterfall for interesting things.

This is definitely an interesting thing.

POCSAG on the Raspberry Pi

Back in October of 2018, almost immediately after being laid off, I finally achieved a long-time goal of getting my ticket punched. Since then I have been the proud owner of a General-class amateur radio license. While I greatly enjoy being able to broadcast, passively listening to all of the devices around me has become something of a passion. While an FCC license is not required to listen, studying for the exam shored up my knowledge of antenna theory and provided a path to building my own antennas tuned to the frequencies I wished to capture.

Over the past weekend, an interesting topic arose. Medical and emergency pagers are still used nationwide: can we receive these signals and decode them using an approximately $20 RTL-SDR adapter? Some quick research revealed that the software exists and that it is incredibly easy to do. Let’s get started. I am starting out with a fresh install of Raspbian Stretch Lite, available at https://www.raspberrypi.org/downloads/raspbian/ , a Raspberry Pi Model 2, and the NooElec NESDR Smart. The same process can be done with any variation of the Raspberry Pi, though you will most likely require a powered USB hub to use the SDR.

I won’t go into the details of writing the image to an SD card, nor setting up the Pi to be accessed headless with networking enabled on first boot, as that has been covered more times than I care to count, although I do need to look up the formatting of the wpa_supplicant file on each new install (note to self: keep a copy for future use).

After the initial boot and required resizing of the file system, log in over SSH without the SDR plugged in. As always, we want to update our fresh install so that we aren’t pulling in outdated packages; this is done with “sudo apt update && sudo apt upgrade -y && sudo apt dist-upgrade -y”. Sit back, relax, and wait approximately 20 minutes for it all to complete.

Once the updates are complete, I like the first thing I install to be screen. This allows me to continue where I left off, even if my WiFi drops for some reason. The key is to remember to launch “screen” on login and, if disconnected, to use “screen -r” on reconnection. This allows the install to continue even if you get disconnected.

The next step is to install all of the software required to build our packages. Some distributions may include multimon-ng as a download in their package manager; however, I like to have the bleeding-edge version, and this means compiling from source. Let’s go ahead and install all of the packages we will need to build everything. To install the prerequisites, type “sudo apt install git cmake build-essential libusb-1.0 qt4-qmake libpulse-dev libx11-dev qt4-default -y”. Sit back and await completion of the install.

Once this is done, we can get to the fun part. Create a new directory in your home folder to hold all of the source code you will be getting. This can be called sdr, source, src, or whatever you like; I’m going to use source, because I like descriptive names. To make the directory and enter it in one line: “mkdir ~/source && cd ~/source”.

Next we are going to build the rtl-sdr drivers and blacklist the default ones built into the kernel. The source code for the rtl-sdr driver we want, along with some additional useful programs for providing a raw data stream from the SDR, is available at https://github.com/osmocom/rtl-sdr. To pull it to our Pi we use “git clone https://github.com/osmocom/rtl-sdr”. When it finishes, a new folder called rtl-sdr appears. Next, change to the rtl-sdr directory with “cd rtl-sdr”, then make and change into a new directory called build with “mkdir build && cd build”. Inside the build directory we can use cmake to create a makefile; this is done with the command “cmake ../ -DINSTALL_UDEV_RULES=ON”, where -DINSTALL_UDEV_RULES=ON tells cmake to generate a makefile that also installs the udev rules for our adapter. Once this is done, run “make”, then “sudo make install”, and finally “sudo ldconfig” to refresh the shared library cache. This should blacklist the default drivers, but to be sure, I like to “sudo nano /etc/modprobe.d/blacklist.conf” and add the following, each on its own line: “blacklist dvb_usb_rtl28xxu”, “blacklist dvb_core”, “blacklist rtl2830”, and “blacklist dvb_usb_v2”. Use CTRL+X to exit nano, type “y”, and press Enter to save.
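For reference, the whole rtl-sdr build sequence looks like this (a recap of the steps above):

  cd ~/source
  git clone https://github.com/osmocom/rtl-sdr
  cd rtl-sdr
  mkdir build && cd build
  cmake ../ -DINSTALL_UDEV_RULES=ON    # generate a makefile that also installs the udev rules
  make
  sudo make install
  sudo ldconfig                        # refresh the shared library cache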

Next we are going to get the source for, and compile, multimon-ng. Go back to the source directory with “cd ~/source” and get the code from https://github.com/EliasOenal/multimon-ng using git clone again, like so: “git clone https://github.com/EliasOenal/multimon-ng”. Once that is done, “cd multimon-ng && mkdir build && cd build” to enter the directory git created, make a build directory inside it, and change into that build directory. For this program we rely on qmake, as the author provides a .pro file to help automate the build. Invoke it with “qmake ../multimon-ng.pro” and patiently await the creation of the makefile. Once complete, run “make” followed by “sudo make install”.

With all of the required programs installed, we can now start listening for pager traffic. The best way I have found to locate the frequencies (which vary by geographic location) is to use the SDR along with a program that provides a waterfall display. You can check https://www.sigidwiki.com/wiki/POCSAG for lists of frequencies where pagers operate. Using the waterfall, you can home in on an interesting frequency and use it in rtl_fm to feed multimon-ng. An example would be a command like “rtl_fm -f 152.180M -s 22050 | multimon-ng -t raw -a POCSAG512 -a POCSAG1200 -a POCSAG2400 -f alpha /dev/stdin >> ~/page.txt”. Breaking down this command: rtl_fm controls the SDR; -f sets the frequency (here 152.18 MHz); -s sets the sample rate; the | pipes the output to multimon-ng; -t raw tells multimon-ng we are providing raw samples; the -a switches tell multimon-ng to attempt to decode POCSAG512, POCSAG1200, and POCSAG2400 (different pager encodings; we could also include -a FLEX, yet another pager encoding); -f alpha filters the output to alphanumeric messages, with /dev/stdin as the input; and >> ~/page.txt appends the decoded output to a text file in the home directory called page.txt. In simpler terms, rtl_fm tunes the dongle, we pipe that to multimon-ng, and multimon-ng sends the decoded information to a text file.
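Laid out as a script, the pipeline looks like this (the frequency and sample rate are examples; substitute whatever you find on your local waterfall):

  # Tune the dongle with rtl_fm and pipe the raw samples into multimon-ng,
  # appending decoded pages to ~/page.txt
  rtl_fm -f 152.180M -s 22050 | \
      multimon-ng -t raw -a POCSAG512 -a POCSAG1200 -a POCSAG2400 \
      -f alpha /dev/stdin >> ~/page.txt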

Good luck with your decoding, and hopefully all the messages don’t read, “Be sure to drink your Ovaltine.”



BBS in 2019!? Say What?

Or, why would I even want to do this.

It has been a long time since I have had the opportunity to use a BBS, and never before have I been a SYSOP. With my current status of being underemployed, I have chosen to utilize the time unwisely and have a nostalgic flashback to the pre-internet days. I can still recall begging for a modem for the Tandy 1000 HX that I started this journey on, but the modem would not come until we upgraded to a 386 machine with Windows 3.1 and 4800 baud (sexy, right?). Ah, Windows 3.1, with your lack of a TCP/IP stack. That PC would eventually get an upgraded modem, to 9600 and then 14400 baud, but the hard drive would never reach 1 GB. On the plus side, it did have internal storage, unlike the Tandy.

The truth of the matter is, I long for the days of BBS door games like Legend of the Red Dragon, Pimpwars, TradeWars, and others. Also, with the internet existing as it does and both Mystic and Synchronet supporting telnet, SSH, and rlogin, it should be simple to network everything (famous last words).

After a few false starts, trying both Mystic BBS and Synchronet BBS in a 64-bit Ubuntu VM and discovering that no matter how many times I read the instructions I could not get DOSEMU to function, I was on the verge of giving up. Luckily I chose to persist, and I can now play Legend of the Red Dragon 2 on my own BBS!

The first step in the process was setting up the VM. For this I chose to use VirtualBox. I know full well that I could accomplish the same thing with KVM; however, I am more comfortable using VirtualBox, since I’ve been using it longer, and when all you have is a hammer….

I installed a small 64-bit Ubuntu 16.04 server VM with only the SSH server selected. Once installed, I did the required updates and installed unrar; Mystic BBS comes packaged in a rar file, so I guess we need it.

I downloaded the 64-bit release from http://www.mysticbbs.com/downloads.html and proceeded to unrar the files as instructed. Mystic wants to install itself to the root directory, so we need to escalate our privileges to do so: “sudo su”, then “./install”.

Up next, we change the ownership of the mystic folder to a user with fewer privileges. I created a new user, “bbs”, and then ran “chown bbs:bbs -R /mystic”. Next I left the root account and switched to my default user, which has sudo privileges; the bbs user does not.

Next, I installed dosemu with “sudo apt install dosemu” and modified the file /etc/dosemu/dosemu.conf to reflect a US keyboard layout so dosemu would not pester me every time a door game was launched.
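I no longer have the exact diff, but the change amounts to setting the layout explicitly in /etc/dosemu/dosemu.conf, something along these lines (a sketch; check the comments in your dosemu version's config for the exact variable):

  # /etc/dosemu/dosemu.conf -- force a US keyboard layout instead of auto-detection
  $_layout = "us"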

After this, I followed the instructions at http://wiki.mysticbbs.com/doku.php?id=cryptlib to install cryptlib so I could enable logging into the BBS over SSH as well as telnet.

Coming soon….. Configuring Door Games!



Upgrading the ANET A8 with RAMPS 1.4

It was bound to happen: my Anet A8 kicked the bucket. Well, not entirely. While printing a new top plate for my F550 Flamewheel (the original snapped when it went into free fall on its first flight due to improperly secured propellers), the Z-axis driver stopped working. Reviewing my options, it seemed all was lost. This made me sad, as I had spent a considerable amount of time upgrading the printer to work the way I wanted. I had added a MOSFET for the heated bed, soldered directly to the bed to eliminate the high current going through the plastic connector (which had already started to melt by that point), added an inductive Z-probe for auto leveling, and printed a few upgrades specific to the model. This left me with a choice: get a drop-in replacement and hope that one of the drivers didn’t die again, or swap the Anet v1.0 mainboard for a RAMPS 1.4 board. I chose not to take the easy route and bought the RAMPS 1.4 board.

When the package with the RAMPS board arrived, I was excited. It had been a month without my printer, and I had an itch that needed scratching. I decided to dive right in and start rebuilding the printer. While I had done some research on the board, I was unprepared for the lack of documentation included. The packaging contained an Arduino Mega, the RAMPS board, 5 A4988 stepper motor drivers, a USB A-to-B cable, and a CD-R containing a host of documentation that was not relevant to what I had purchased. I consider myself somewhat handy and can usually figure these things out, so I started by looking at the RAMPS board. One thing that stood out immediately was that this was not going to be a simple plug-and-play operation. The connectors on the RAMPS board are bare pins meant for Dupont connectors, while all of my connectors from the Anet A8 were JST connectors. This was not an issue for the X, Y, or extruder motors, but it was a problem for the Z motors, end stops, thermistors, and Z-probe (added in place of the Z end stop; I’m lazy and like auto-leveling).

Back to Amazon to purchase some necessary supplies: connectors, terminal crimpers, and jumpers (which turned out to be unnecessary; I had failed to search the RAMPS board box hard enough).

Thankfully, swapping the ends was pretty straightforward after watching some YouTube videos on the crimping process, and after wasting multiple fittings before getting it right.

After getting the board all wired up and the Marlin firmware tweaked (there are many guides on setting up Marlin for the Anet A8), it was time to test printing.  The first few attempts led to the nozzle smashing into the heated build plate because the sensor was not registering.  My fault entirely, as I had it wired as a Z-max end stop, not a Z-min.  Oops.

Changing the settings in the firmware and re-flashing the board got the RAMPS setup functioning.  Finally, I could test the printer again.
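For anyone doing the same swap, the fix lives in Marlin's Configuration.h. A sketch of the relevant lines, assuming a Marlin 1.1.x tree and an inductive probe wired to the Z-min plug (your probe options may differ):

  // Use the Z-min endstop plug for the probe, not Z-max
  #define USE_ZMIN_PLUG
  #define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN
  // The inductive sensor is fixed to the carriage
  #define FIX_MOUNTED_PROBE
  // Enable auto bed leveling
  #define AUTO_BED_LEVELING_BILINEAR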

 

Doesn’t it just pop?

 

Hyperion+RetroPie+Kodi=Bliss!

UPDATE: SteamLink is now available in the experimental packages inside RetroPie. Since Hyperion uses whatever is output to the screen to drive the LEDs, SteamLink works in this setup as well!

Back at it again.  Due to funding issues, I have been using up all the spare parts I have acquired over the past 1-2 years.  This week it was finally time to put the blinky lights to good use.  Just about 3 years ago I attempted to set up Boblight using a Raspberry Pi 2, Kodi, and a set of WS2801 LEDs.  This worked OK, but it wasn’t the greatest: the majority of the time, only the set nearest to the Pi would light up, and the remainder worked intermittently, if at all.  This caused the project to be put on the back burner, since the cost of the LED strip and connectors made it not worthwhile to replace the setup.  Fast forward 2 years, and I have acquired a 5m length of WS2812B LEDs, an Arduino-compatible Nano, yet another Raspberry Pi 3, and a 5V/8A power supply. Continue reading Hyperion+RetroPie+Kodi=Bliss!

Squirrel Camera!

This morning I awoke and discovered that our local neighborhood squirrels have been attacking the pumpkins on our porch.  Not that it is a major concern (squirrels need to eat too), but it led me to today’s project: the squirrel cam.  Most of the functionality was already widely available; the only major work was putting it all together.  Most of the parts were just lying around not being used, but I’m running out of Raspberry Pis to use in these projects.  Between replacing multiple machines with one, running the 3D printer off another, and the RetroPie/Kodi setups, I only have a Pi Zero W remaining.  I think I’ll have to find a way to rectify this at some point.  Anyway, back to the squirrel camera.

Items Needed

  1.  3D printer
  2.  PLA Filament
  3.  Raspberry Pi
  4.  USB Power Bank
  5.  Webcam (case will mount a Logitech C310 or C270)

My first attempt at making this work involved only printing the webcam mount.  This failed almost immediately when I attempted to put it on the tripod plate: the screw hole stripped out right away and would not stay attached.  With that, I moved on to Plan B.  A bit more involved than simply printing a mount, I took the mount STL files and, using TinkerCAD, remixed them with a snap-together “Simple Raspberry Pi 3” case to create a case with a built-in mount.  Admittedly, my remix could have been done better, but I was in a rush after being frustrated by the stripped screw hole.  Fair warning: the case requires a bit of clean-up and has a lot of support structures, but it fits and does the job, so it is a partial win.  At some point I’ll redo it so it is prettier and prints with less support, but for now, it’ll do.

Case
I made this myself

Next up was getting the software installed to the micro SD card.  Raspbian Stretch Lite was my choice.  Burn the image to the micro SD card in whatever way you want.  Since we are going to run the Pi headless, we need to enable SSH access before we boot it.  With all versions of Raspbian since November 2016, SSH access is disabled by default.  The way to enable it without a display is to create a file named “ssh” (without quotes) in the boot partition.  This will allow us to access the Pi over the network and proceed with the rest of the configuration.
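With the freshly written card still in the workstation, that amounts to something like this (the mount point is an assumption; it varies by distro):

  touch /media/$USER/boot/ssh   # an empty file named "ssh" in the boot partition enables the SSH server
  sync                          # make sure it is written before removing the card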

Once the Pi is connected to the network, we will update the OS and then install all the needed software.  This Pi will run a program called motion, which takes pictures automatically when enough pixels change in the image.  Ideally, this won’t cause unwanted images to be taken, since I decided it will also be a Twitter bot, tweeting the images whenever a photo is taken.

Up next, we will install motion.  I followed a very straightforward guide, located here, up until the part describing adding network storage to the mix, as the SD card available to me was large enough not to worry about that.
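The guide covers the details, but the motion-detection side comes down to a few lines in /etc/motion/motion.conf. A sketch (option names vary slightly between motion versions, and the threshold here is just a starting point to tune):

  daemon on                     # run motion in the background
  width 1280                    # capture resolution for the C310/C270
  height 720
  threshold 1500                # number of changed pixels that counts as motion
  output_pictures on            # save a still image whenever motion is detected
  target_dir /home/pi/motion    # where the stills land (assumed path)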

Next was deciding how to post the photos to Twitter.  This created an issue, as I have never tried to run a Twitter bot before.  A little bit of research later, and I found an easy-to-run Twitter bot here.  I love GitHub.

One key thing I found with the bot though is that the starting tweet number has to be set.  It didn’t want to default to 1 on the initial run, causing errors whenever I tried to start it.

Following the instructions in the readme file, I set up Twitter API access, plugged in the required information, ran the script once with the --tweetnumber variable set to 1, and success!  The Pi tweeted on its own.  Next up: configuring cron jobs to run the script at regular intervals.

For this, I chose to have the script run every 5 minutes.  Many walkthroughs exist for setting up cron jobs, so that will not be covered in detail here, though a sketch of the entry is below.  Lastly, we put it all together and set it up.  The squirrel cam is tweeting at @TLPorchCam
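For completeness, the crontab entry (added with “crontab -e”) amounts to something like this; the script name and path are placeholders for wherever you cloned the bot:

  # m h dom mon dow  command
  */5 * * * * /usr/bin/python /home/pi/twitterbot/tweet.py >> /home/pi/tweet.log 2>&1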

Things I want to change:

Make the squirrel cam tweet the most recent image if it hasn’t been posted previously.  The bot I used just picks randomly from the folder, without regard to recency.  I guess eventually the pictures will all get posted, but it isn’t the most effective way in my mind.

 

UPDATE:  Success!  I have squirrel pictures.

Squirrel
This is what I do with days off from work!

Full setup of RetroPie

In this post, I’ll walk you through a setup of RetroPie on the Raspberry Pi 3.  We’re going to work exclusively in GNU/Linux in this tutorial, as all the necessary software is already included in most distros, and most of the file systems we will be working with are ext3/ext4.  If you are unfamiliar with GNU/Linux, I will post a guide later on creating a live distro environment to use for this task.  Another option would be to virtualize a GNU/Linux system through VirtualBox or similar software with USB passthrough enabled to access the drives through the virtual machine, but that is outside the scope of this tutorial.

Necessary items for the setup:

Raspberry Pi 3 – either by itself or as part of a kit with power supply, case, and HDMI cable.

Controllers – my favorites are the iBuffalo SNES-style and knock-off PS3-style controllers.

A keyboard for the Pi 3 – any USB keyboard will work.  I like this one.

And if you want to make it look awesome, you could always get this, because nostalgia.

First, download the most recent version of RetroPie for the Raspberry Pi 3.  This can be found at https://retropie.org.uk/download/.  Once this is done, we can extract the contents of the archive.  My personal favorite for this is a small program called dtrx.  It usually is not included with your distro of choice but can easily be acquired through your package manager; on a Debian-based system, the command is “sudo apt install dtrx” in the terminal.  Then type “cd Downloads”, since this is where the file should have saved, and then “dtrx <filename>”.  Tab completion helps here: simply type the first few letters of the filename, which should be “retr”, press Tab on the keyboard, and bash should fill in the rest.

Next, plug your micro SD card into the computer, not the Pi.  Wait for the drive to mount, and in your terminal type “dmesg | tail”.  This gives a readout of the last ten lines of the kernel buffer and will provide you with the /dev name of your connected drive.  Usually this will be in the format sdX, with X being a letter displayed in the terminal; in the screenshot below, you can see mine is sdc.

dmesg output
Output of dmesg

We could do the same with the “mount” command if you know what you are looking for, but for now dmesg is our best option.  The disk I am using just happens to be sdc, so all of my commands will use it.  Be sure to substitute the device that “dmesg | tail” reported for your own use.

Now that we have our image file and our SD card /dev reference, we can start the process of writing the image to the card.  If you have made sure that you have the right /dev reference this is not a dangerous thing, but be warned: using the wrong /dev reference will all but guarantee you a bad day.  The next command is “sudo dd if=<imagefilename> of=/dev/sdX”.  This will take a few minutes to write depending on the speed of your SD card and other variables.  Once it is done, place the SD card into the Raspberry Pi with it connected to a screen, and let it do its magical resizing of the partitions.
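The write itself looks something like this (the image filename is a placeholder for whatever the archive extracted to, and the bs/status options are optional niceties if your dd supports them):

  sudo dd if=retropie-rpi2_rpi3.img of=/dev/sdX bs=4M status=progress
  sync    # flush all data before removing the card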

Now that the partitions are resized, we can start configuring.  The Emulation Station frontend will load automatically and ask you to start configuring your controllers.  At this point, we are not going to configure them.  I’m personally more of a fan of installing all my games on a USB drive.  This is a personal choice, and if you prefer to keep them on the SD card, you can skip the next part.

Power off the Pi properly.  This can be done by pressing F4 on the keyboard and typing “sudo poweroff” in the terminal that loads.  Once the Pi safely shuts down, remove the SD card.

We are going to take the SD card and a USB flash drive (the larger the better) to our GNU/Linux workstation.  Insert the USB drive into the machine.  The majority of USB drives sold in stores are formatted to FAT32 by default.  RetroPie can read this file system, but I prefer to rewrite them to ext4.  Again, personal preference, lifespan be damned.  To do this, I use gparted.  With the drive now formatted to ext4, unplug and replug the drive.  In a terminal, run “dmesg | tail” again to find the /dev/sdX designation of the flash drive (this may be the same as it was for the SD card, since the SD card has been removed), and then run “sudo blkid”.  We are looking for the UUID of the disk we just formatted; it will be preceded on screen by the /dev/sdX designation.  This is a long string of hexadecimal digits, and we will copy it to the clipboard.

The next step is to plug in the SD card we created and navigate to the partition that is not called “boot”; this should automount under the /media/$USER/ folder.  You can then copy the contents of the /home directory on the SD card to the USB drive.  This can be accomplished with “rsync /media/$USER/SDCARDDIRECTORY/home/ /media/$USER/USBDIRECTORY/ -aP”.  Once this completes, use the command “sync” to ensure all data is copied, then delete the contents of /media/$USER/SDCARDDIRECTORY/home.  You can use the rm command for this, like so: “rm /media/$USER/SDCARDDIRECTORY/home/* -R”.

Next up, we are going to tell RetroPie that we want to use the home directory we created on the USB drive as the home directory for RetroPie.  We will do this by modifying the fstab file in /media/$USER/SDCARDDIRECTORY/etc/.  To do this, type “sudo nano /media/$USER/SDCARDDIRECTORY/etc/fstab”, and a simple text editor will appear that doesn’t require dark magic to navigate, unlike vim.
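To recap the copy steps in one place (a sketch; SDCARDDIRECTORY and USBDIRECTORY stand for whatever names your distro gives the automounted partitions):

  sudo blkid                                                                # note the UUID of the ext4 USB drive
  rsync -aP /media/$USER/SDCARDDIRECTORY/home/ /media/$USER/USBDIRECTORY/   # copy the home directory to the USB drive
  sync                                                                      # flush writes
  rm -R /media/$USER/SDCARDDIRECTORY/home/*                                 # clear the old home contents on the SD card
  sudo nano /media/$USER/SDCARDDIRECTORY/etc/fstab                          # open fstab for editing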

In nano, you’ll see something like this.

fstab screenshot

What we are going to do is add a new line telling fstab what the USB drive is and to mount it at boot time as /home.  It looks like this; all you need to do is change the UUID to the one copied earlier from “sudo blkid”.

fstab with uuid
fstab updated
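The added line ends up looking something like this (the UUID is a placeholder; use the one reported by “sudo blkid”, and adjust the mount options to taste):

  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2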

Now that we have that set, we can proceed to copying our game files over.  Since we still have the USB drive plugged into the PC, we can do this natively, and RetroPie lays out the folders in a straightforward manner: all the game files go under RetroPie/roms, with separate subdirectories for each game system.  Additional systems can be installed later, and files can be rsync’d across the network over SSH or via SMB shares.  The next part of the puzzle is what kind of controllers you wish to use.  There are many choices out there, and some work better than others; the two of which I am a fan are listed above.

Both of these work well for casual gaming, and I’ve used both, with the only caveat being that the PS3-style controllers are detected by RetroPie as Shanwan clones.  The default Bluetooth stack does not work with these.  RetroPie takes care of installing the correct software, but Bluetooth keyboards will cease to work when using any type of PS3-compatible controller.  For the cost, though, it is worth the trade-off in my opinion.

Boot up your RetroPie system now, and we can start to configure the controllers and add additional software.  Once booted, exit to the command line by pressing F4 on the keyboard.  Type “cd RetroPie-Setup” and then “sudo ./retropie_setup.sh”.  This loads the RetroPie configuration application, where we can update, install new software, and configure the PS3 controllers.  First select update and allow the software to do its thing.  The next step I usually take is to go to “Manage packages”, then “Manage optional packages”, and install Kodi.  I really like this software; it has been in use in my household since the days of the soft-modded original Xbox with the “Duke” controller.  I also run another Pi 3 as a file server on the network that feeds videos and music to everything on the network, so one multipurpose box taking up an HDMI port is always a win.

If you are using the PS3 controllers, your next step is to install the drivers; this is under “Configuration / Tools”, then “ps3controller”.  Follow the directions on-screen and you’ll be able to use the wireless controllers.

Finally, the last step of my install is to get Moonlight up and running.  Moonlight is cross-platform streaming software that takes advantage of the NVIDIA game streaming system available on Windows.  I have a ridiculous collection of Steam games, a Steam Link, and a Steam Controller, but my TV used to have only one HDMI port, and it got old swapping cables.

To install Moonlight, exit to the command line again, ensure that your RetroPie is connected to the network, confirm you have game streaming enabled on your gaming computer, and in the command line on your RetroPie type “cd” to return to the home directory.  Type “wget https://raw.githubusercontent.com/TechWizTime/moonlight-retropie/master/moonlight.sh --no-check” to download the latest version of Tech Wiz Time’s Moonlight install script.  Type “sudo chmod +x moonlight.sh” to give the script run permissions.  Type “sudo ./moonlight.sh” and follow the on-screen instructions.

Now that everything is installed, you can reboot the Pi with “sudo reboot” or simply type “emulationstation” in the terminal, configure your controllers, and start your adventure re-playing the classics from your childhood.  If you have any questions, please feel free to ask below.  Any requests for where to find games will be directed to lmgtfy.com.

Tails installer errors on wipefs

Today I decided to install Tails to a flash drive that was not being used.  While I have my system set to dual boot between Xenial and Windows 7, it is unusual for the Windows system to be launched at all unless I’m in the mood to stream Steam games over Moonlight.  The choices offered at the Tails website are to install from another Tails installation or to install straight from Ubuntu.  Since I was using Tails for the first time in about 3 years, I chose to go straight from Ubuntu.  This involved following the steps provided on the Tails website, with one small addition.  Every time I tried to install, wipefs failed, and I was unable to figure out why, until the eureka moment struck: my USB drive is set to automount on insert, and it was mounted.

The solution was simple: unmount the drive and then run the installer.  While it was a simple solution, it might be beneficial to include a warning that tails-installer will not work if the drive you are installing to is already mounted.  I’ve run dd multiple times against mounted disks and never experienced this issue.  Hopefully my simple solution will help you keep your private life private and make life easier for yourself.
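If you hit the same wipefs error, the fix is a one-liner before launching the installer (sdX1 stands for whichever partition of the stick was automounted):

  sudo umount /dev/sdX1    # or unmount it from your file manager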

 

Replacing a PC and Pogoplug with a Raspberry Pi 3

For the past two years, I’ve had a hacked-together system combining a desktop PC running Ubuntu headless and a Pogoplug v4 running Arch Linux ARM headless.  The PC handled all download scheduling, Pi-hole, the centralized media library database for 4 instances of Kodi, external SSH access, and, for a brief time, an instance of MythTV.  The Pogoplug v4 shared 10 TB of hard drives on the network over Samba, and formerly handled external SSH until a botched update broke it.  Attempts to restore SSH access to the Pogoplug were temporarily successful, until updates once again borked it.

Now I’m rebuilding the setup on a single Raspberry Pi 3, for power consumption reasons and simply because I’m bored.  Come along on the journey while we explore whether all of this functionality can be placed onto an SBC (no, MythTV is not being installed on the Pi 3, although that has previously occurred).  I’m mostly interested in how many simultaneous HD streams it can serve up over the 100 Mbps Ethernet that is shared with the USB 2.0 bus.  I have no reason to believe

The first step in any project like this is to choose the base operating system that will enable the Pi 3 to do all the necessary functions.  Researching the options available, I had a choice to make: 0) do it the hard way and start with the most recent version of Raspbian, or 1) install a system that has some or all of the functionality built in and add on as required.

I went with option 1, because I was on a bit of a time constraint and really wanted to finish the process in under 4 hours.

As already discussed, Raspbian was out; although it could be done, I really wanted to be lazy.  The next option was OpenMediaVault.  This would serve all of my file-server needs, and I would only need to add SABnzbd, Sonarr, CouchPotato, Headphones, and MySQL to complete the process.  This seemed easy enough, but attempting to log in over SSH at the start was a hassle, and Mono would need to be compiled for the processor.  Since I don’t have a cross-compiling environment configured, it was on to plan B.

Plan B was DietPi.  I had never heard of this software prior to this grand experiment, but I was willing to give it a chance.  After all, the setup I currently have works; the worst that can happen is that I keep the existing arrangement if this fails to function.

On first boot, DietPi seemed like a winning choice.  I’m a big fan of running things headless, since that means I can sit in the comfy chair at my desk and SSH into the device.  DietPi does not launch a GUI on first boot; instead it displays the IP address, user name, and password, and leaves it to you to figure out that things should be configured over SSH.

Once I was able to log in over SSH, I logged out and performed my favorite bash command, “ssh-copy-id”.  I really can’t stand having an SSH server accept password authentication, and this system would not fall into that trap.  Besides, I have seen way too many attempts to brute-force a password with a user name of “pi”.  This is a favorite of people who practice the “throw it at the wall and see if it sticks” method of intrusion: no reconnaissance, just blast everything that has port 22 open to the world with a username of pi and a password of raspberry, and you are bound to get a shell somewhere.  I can’t really fault them; it works, because many people don’t change default passwords.
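The key-only setup comes down to one command from the workstation plus one server-side option (a sketch; the user name is whatever DietPi displayed on first boot, and the second step assumes the OpenSSH server rather than DietPi's default Dropbear):

  # From the workstation: copy your public key to the Pi
  ssh-copy-id <user>@<pi-ip-address>

  # On the Pi (if running OpenSSH): set "PasswordAuthentication no" in /etc/ssh/sshd_config,
  # then restart the service
  sudo systemctl restart ssh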

On logging in, the DietPi configuration wizard loaded up with many options, and to my surprise, almost everything I needed was available: Pi-hole, Sonarr, SABnzbd, Deluge, NFS, and Samba servers.  There was no standalone MySQL server (many choices of web-server stacks, but I didn’t want to install unnecessary packages), which will be rectified later.  Headphones was also missing from the easy list, but I can live without that for now.

After what seemed like a very long wait for the software to install, I was finally able to start configuring everything that had previously been split between the two boxes.  A great part about the software being used is that the configuration files are the same across platforms: copy the files to the right locations, modify a few server calls to reflect the new IP address, and you’re back in business like before.

The incredibly hard part was configuring Samba on the Pi.  This wasn’t because Samba is hard in itself, but because I had a very outdated configuration file.

All told, the total time taken was just over 9 hours to get everything “close enough”, which included forgetting that I had mounted the USB drives under the home directory of a user I had yet to create, and wasting 2 hours trying to get NFS working rather than troubleshooting my smb.conf file to make it compatible with Samba 4.