<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[ycnrg.org]]></title><description><![CDATA[Articles and info for various projects by Jacob Hipps / ycnrg.org]]></description><link>https://ycnrg.org/</link><generator>Ghost 0.9</generator><lastBuildDate>Mon, 09 Feb 2026 09:19:17 GMT</lastBuildDate><atom:link href="https://ycnrg.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Allegorithmic Substance Suite on Ubuntu 16.04]]></title><description><![CDATA[How to install Substance Painter, Substance Designer, and the Automation Toolkit/Python API on Ubuntu 16.04 and similar distros.]]></description><link>https://ycnrg.org/substance-on-ubuntu/</link><guid isPermaLink="false">b93e1224-1ddd-4960-ae8a-b0deed4eab00</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[substance]]></category><category><![CDATA[app install guides]]></category><category><![CDATA[3d graphics]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 31 Mar 2018 06:16:15 GMT</pubDate><media:content url="https://ycnrg.org/content/images/2018/03/substance_ubuntu_titlecard.png" medium="image"/><content:encoded><![CDATA[<img src="https://ycnrg.org/content/images/2018/03/substance_ubuntu_titlecard.png" alt="Allegorithmic Substance Suite on Ubuntu 16.04"><p><img src="https://ycnrg.org/content/images/2018/03/substance_ubuntu_titlecard.png" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<p>This guide covers how to install the Substance applications on Ubuntu 16.04 or similar distros (such as Debian or Mint) manually, without using <code>alien</code> (check out <a href="https://forum.allegorithmic.com/index.php?topic=12851.0">this forum post</a> for how to do it via <code>alien</code>). We basically just need to extract the RPM contents, copy them to <code>/opt</code>, then run the postinstall scripts to set up icons, links, and file associations.</p>

<p>At the end, we'll also look at setting up the Python API. If you have a Substance subscription plan, you can download the Automation Toolkit from the Licenses page after logging into Allegorithmic's website.</p>

<h1 id="installation">Installation</h1>

<h2 id="installprerequisites">Install Prerequisites</h2>

<pre><code>sudo apt-get update  
sudo apt-get install rpm rpm2cpio  
</code></pre>

<h2 id="installsubstancepackages">Install Substance Packages</h2>

<p>First, create a new directory called <code>sub_extract</code> inside of your home directory:  </p>

<pre><code>mkdir ~/sub_extract  
cd ~/sub_extract  
</code></pre>

<p>Then download (or copy) your Substance RPMs to this new directory. You should see something like this when listing the directory (version and actual packages will obviously be different, depending on what you're trying to install):  </p>

<pre><code>jacob@jotunn:~/sub_extract$ ls  
Substance_Automation_Toolkit-2017.2.4-132-linux-x64-standard-indie.tar.gz  
Substance_Painter-2018.1.0-2128-linux-x64-standard-full.rpm  
Substance_Designer-2018.1.0-1039-linux-x64-standard-full.rpm  
Substance_Player-2018.1.0-1039-linux-x64-standard-full.rpm  
</code></pre>

<p>Next, extract the RPM package contents for each RPM:  </p>

<pre><code>for thisrpm in *.rpm; do rpm2cpio $thisrpm | cpio -idmv ; done  
</code></pre>

<p>If you downloaded the Substance Automation Toolkit, also extract that into the same directory:  </p>

<pre><code>tar -xvf Substance_Automation_Toolkit-*.tar.gz -C opt/Allegorithmic  
</code></pre>

<p>Now, copy the files to their final installation location. The applications expect to be installed in <code>/opt/Allegorithmic</code>, so that's where we'll put them:  </p>

<pre><code>sudo cp -Rvp ~/sub_extract/opt/Allegorithmic /opt/  
</code></pre>

<p>Last, we need to extract the installation scripts from the RPMs. This one-liner will dump the scripts for each RPM, then extract the postinstall script:  </p>

<pre><code>for thisrpm in *.rpm; do outscript=$(echo "$thisrpm" | sed -e 's/^\([a-zA-Z0-9_]*\)-.*/\1/').sh ; rpm -qp --scripts $thisrpm | sed -n '/postinstall/,/preuninstall/p' | grep -v ':$' &gt; $outscript ; chmod +x $outscript ; done  
</code></pre>
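<p>The one-liner packs a lot in; here is the same sequence expanded into a readable script (functionally identical):</p>

<pre><code># Same commands as the one-liner above, expanded for readability
for thisrpm in *.rpm; do
    # Derive the script name from the package name (everything before the first dash)
    outscript="$(echo "$thisrpm" | sed -e 's/^\([a-zA-Z0-9_]*\)-.*/\1/').sh"
    # Dump the RPM scriptlets, keep only the postinstall section,
    # and drop the scriptlet header lines (they end with a colon)
    rpm -qp --scripts "$thisrpm" \
        | sed -n '/postinstall/,/preuninstall/p' \
        | grep -v ':$' &gt; "$outscript"
    chmod +x "$outscript"
done
</code></pre>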

<p>You should now have a matching script in your directory corresponding to each RPM:  </p>

<pre><code>jacob@jotunn:~/sub_extract$ ls -lAh *.sh  
-rwxrwxr-x 1 jacob jacob 1022 Mar 31 01:02 Substance_Designer.sh
-rwxrwxr-x 1 jacob jacob 1016 Mar 31 01:02 Substance_Painter.sh
-rwxrwxr-x 1 jacob jacob 1016 Mar 31 01:02 Substance_Player.sh
</code></pre>

<p>After you have reviewed each script (they should just fix the permissions, then set up icons and file associations), run them:  </p>

<pre><code>sudo bash Substance_Designer.sh 1  
sudo bash Substance_Painter.sh 1  
sudo bash Substance_Player.sh 1  
</code></pre>

<p>(Note: the <code>1</code> argument is required, as it triggers the postinstall actions in each RPM script.)</p>

<p>If no errors were returned, then they were successful. The application launchers should now show up in your Applications Menu! You can also launch the main applications via <code>substancedesigner</code>, <code>substancepainter</code>, or <code>substanceplayer</code> from a terminal window or run dialog.</p>

<p><img style="zoom: 75%;" src="https://ss.ycnrg.org/jotunn_20180407_233212.png" class="no-fluid" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<h2 id="tweaksfixes">Tweaks/Fixes</h2>

<h3 id="panningnotworking">Panning Not Working</h3>

<p>If you're using the Compiz compositor for your desktop, more than likely the <code>Alt+Middle Mouse</code> (Middle Mouse = <code>Button 2</code>) combo will be assigned to resizing windows. Since this is also the default configuration for panning the camera in Substance applications, this poses a problem. To fix it, open up the CompizConfig Settings Manager:  </p>

<pre><code>sudo ccsm  
</code></pre>

<p>Then find the "Resize Window" plugin and click on it. On the first tab, click the key binding for "Initialize Window Resize", then either change it to something else, or disable it. This key is not required to be bound for window resizing to work. Once set, click "Back", then close CCSM to save the settings.</p>

<p><img style="zoom: 75%;" src="https://ss.ycnrg.org/jotunn_20180330_031031.png" class="no-fluid" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<h3 id="fixmissinguiasset">Fix missing UI asset</h3>

<p>When running Painter or Designer, you may see an error message such as the following:  </p>

<pre><code>[Script] file:///home/jacob/Documents/Allegorithmic/Substance Painter/plugins/substance-source/MainHeaderBar.qml:161:7: QML Image: Cannot open: file:///home/jacob/Documents/Allegorithmic/Substance Painter/plugins/substance-source/exit.svg
</code></pre>

<p>In this case, the file <em>does</em> exist, but it's named <code>Exit.svg</code>. Since most filesystems on Linux/UNIX are case-sensitive, but NTFS and HFS+ are not (HFS+ <em>can</em> be, but is usually configured to be case-insensitive), the application is unable to locate the file.</p>

<p>To solve this, simply add a symlink from <code>exit.svg</code> to <code>Exit.svg</code>:  </p>

<pre><code>ln -s ~/Documents/Allegorithmic/Substance\ Painter/plugins/substance-source/{E,e}xit.svg  
</code></pre>
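<p>Should other assets exhibit the same problem, the workaround can be generalized. The following loop is a hypothetical extension of the single-file fix (only <code>exit.svg</code> is confirmed to be affected); it creates an all-lowercase symlink for each SVG whose name starts with an uppercase letter:</p>

<pre><code>cd ~/Documents/Allegorithmic/Substance\ Painter/plugins/substance-source
# For every SVG starting with an uppercase letter, create an
# all-lowercase symlink alongside it (skip names that already exist)
for f in [A-Z]*.svg; do
    lower=$(echo "$f" | tr 'A-Z' 'a-z')
    [ -e "$lower" ] || ln -s "$f" "$lower"
done
</code></pre>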

<p>I imagine this will be fixed by Allegorithmic in a future release. :D</p>

<h2 id="screenshots">Screenshots</h2>

<p>Below are a couple of screenshots of Substance Painter and Substance Designer running on Ubuntu 16.04&mdash; working great! My machine has an NVIDIA GTX 960, running the proprietary NVIDIA 381.22 drivers. These applications can use a ton of VRAM (depending on your output texture size), so the more graphics memory you have, the better!</p>

<p><img src="https://ss.ycnrg.org/jotunn_20180330_030613.png" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<p><img src="https://ss.ycnrg.org/jotunn_20180330_031602.png" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<p><img src="https://ss.ycnrg.org/jotunn_20180407_233919.png" alt="Allegorithmic Substance Suite on Ubuntu 16.04"></p>

<h1 id="pythonapisetup">Python API Setup</h1>

<p>The toolkit should already be installed in <code>/opt/Allegorithmic/Substance_Automation_Toolkit</code> if you followed the previous installation procedure.</p>

<p>The <code>pysbs</code> module (the Python module that houses all of the magic) is contained within a zip archive that can be installed via <code>pip</code>:</p>

<pre><code>sudo pip install /opt/Allegorithmic/Substance_Automation_Toolkit/Python\ API/Pysbs*.zip  
</code></pre>

<p>If you have multiple Python versions, you can install the module for those, too. For example, to also install for Python 3:  </p>

<pre><code>sudo pip3 install /opt/Allegorithmic/Substance_Automation_Toolkit/Python\ API/Pysbs*.zip  
</code></pre>

<p>If everything went smoothly, you should be able to run one of the included sample scripts, located in <code>/opt/Allegorithmic/Substance_Automation_Toolkit/samples</code>. Let's test out the <code>variations.py</code> script, which takes an example stone texture, creates different variations, then bakes out all of the textures as PNG images.  </p>

<pre><code>cd /opt/Allegorithmic/Substance_Automation_Toolkit/samples  
python variations.py  
</code></pre>

<p>After some time, it should finish, and we can check the results (<code>xdg-open</code> should open the image in your default image viewer, such as <code>eog</code>):  </p>

<pre><code>cd /opt/Allegorithmic/Substance_Automation_Toolkit/samples/variations/_output  
ls  
xdg-open basecolor_opus_0.png  
</code></pre>

<p>That's it! Now you can start writing scripts that generate, manipulate, or bake out substances! Try using <code>ipython</code> to interactively test and explore the API.</p>

<ul>
<li><a href="https://support.allegorithmic.com/documentation/display/SAT/Automation+Toolkit+Home">Substance Automation Toolkit documentation</a></li>
</ul>]]></content:encoded></item><item><title><![CDATA[Lattice Diamond on Ubuntu 16.04]]></title><description><![CDATA[Walkthrough of Lattice Diamond 3.9+ installation on Ubuntu 16.04, as well as udev setup for FTDI-based USB programmers or dev boards.]]></description><link>https://ycnrg.org/lattice-diamond-on-ubuntu-16-04/</link><guid isPermaLink="false">49546c9c-01f8-4f2a-a644-a9ad8c3e7b19</guid><category><![CDATA[electronics]]></category><category><![CDATA[linux]]></category><category><![CDATA[fpga]]></category><category><![CDATA[lattice]]></category><category><![CDATA[embedded]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Fri, 16 Jun 2017 02:39:56 GMT</pubDate><media:content url="https://ss.ycnrg.org/lattice.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/lattice.png" alt="Lattice Diamond on Ubuntu 16.04"><p><img src="https://ss.ycnrg.org/lattice_machxo3_starter.jpg" alt="Lattice Diamond on Ubuntu 16.04"></p>

<p>I recently picked up a <a href="http://www.latticesemi.com/xo3lfstarter">Lattice MachXO3 starter kit</a> from Mouser-- it's a pretty cheap and convenient board for experimenting with an FPGA, without having a bazillion peripherals attached to it (and it's under $30).</p>

<p>Lattice's design &amp; synthesis software, <a href="http://www.latticesemi.com/diamond">Lattice Diamond</a>, is available for both Windows and Linux, which is great. However, they only provide an RPM package, with official support limited to RHEL. This is inconvenient, but not a big issue. Other guides I saw online use <code>alien</code> to repackage the RPM as a deb package, but that seems like overkill; instead, we can just extract the files and run the post-install script manually.</p>

<h2 id="prerequisites">Prerequisites</h2>

<p>We will need some RPM tools to work with the provided package, as well as <em>libusb1</em> for Python 2.7. Ensure those are installed as follows:  </p>

<pre><code>sudo apt-get install rpm rpm2cpio  
sudo pip install libusb1  
</code></pre>

<h2 id="step1acquiretherpm">Step 1: Acquire the RPM</h2>

<p>First, you'll need to sign up for an account on Lattice's website, then you'll be able to download the software <a href="http://www.latticesemi.com/view_document?document_id=52032">here</a>.</p>

<p>Create a directory called <em>diamond</em>, then download the RPM to this directory.</p>

<pre><code>mkdir diamond  
</code></pre>

<h2 id="step2extracttherpminstallfiles">Step 2: Extract the RPM, Install Files</h2>

<p>In the <em>diamond</em> directory, run the following command to extract the file contents:  </p>

<pre><code>rpm2cpio *.rpm | cpio -idmv  
</code></pre>

<p>Next, we will need the post-install scriptlet from the RPM.  </p>

<pre><code>rpm -qp --scripts *.rpm  
</code></pre>

<p>This command will print <em>all</em> of the scriptlets. Highlight and copy the postinstall section, then open a text editor and paste the contents into a file named <strong>postin.sh</strong>. Once that's done, make the file executable, and run it:  </p>

<pre><code>chmod +x postin.sh  
RPM_INSTALL_PREFIX=$PWD/usr/local bash postin.sh  
</code></pre>
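<p>If you'd rather not copy and paste by hand, the postinstall section can also be filtered out with <code>sed</code>. This sketch assumes the RPM defines a <code>preuninstall</code> scriptlet, whose header line terminates the postinstall section:</p>

<pre><code># Non-interactive alternative: extract the postinstall scriptlet,
# dropping the "... scriptlet (using /bin/sh):" header lines
rpm -qp --scripts *.rpm \
    | sed -n '/postinstall/,/preuninstall/p' \
    | grep -v 'scriptlet' &gt; postin.sh
chmod +x postin.sh
</code></pre>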

<p>Now, copy the files to the correct location:  </p>

<pre><code>sudo cp -Rva --no-preserve=ownership ./usr/local/diamond /usr/local/  
</code></pre>

<p>The <em>diamond</em> directory we created for the intermediate steps can be removed once installation is complete (optional):  </p>

<pre><code>cd ../  
rm -Rf diamond  
</code></pre>

<h2 id="step3setupudevrules">Step 3: Setup udev rules</h2>

<p>If you will be using a dev board or programming cable with Diamond, you will need to set up some udev rules to ensure the kernel's ftdi_sio driver doesn't bind to the device. We will also need to ensure correct permissions on the devices <strong>(NOTE: If Diamond or Programmer start with your device plugged in, but are unable to access the device due to permissions issues, they will segfault! Yay...)</strong>.</p>

<p>Create a file called <code>/etc/udev/rules.d/10-lattice.rules</code> with the following contents (adjust as necessary). You'll need to be root, or use <code>sudo</code> to create this file:  </p>

<pre><code>ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="0666", SYMLINK+="ftdi-%n", RUN+="/bin/sh -c 'basename $(dirname $(realpath /sys%p/device)) &gt; /sys/bus/usb/drivers/ftdi_sio/unbind'",RUN+="/root/ftdi_fixer.py"  
</code></pre>

<p>The vendor and product ID can be determined by running <code>lsusb</code> with your device plugged in to your machine.</p>

<p>The above entry runs a couple of commands whenever your device is plugged in. The first is to unbind the device from the ftdi_sio kernel driver. The second is a Python script (introduced shortly), which will properly fix the device entry permissions, since udev fails to do this correctly (it is likely I am doing something wrong, but at least this works).</p>

<p>The script <code>/root/ftdi_fixer.py</code> can be viewed <a href="https://ycc.io/scripts/ftdi_fixer.py">here</a>. This is a short script I wrote (which utilizes libusb1 we installed earlier) to fix the device entry permissions.</p>

<pre><code>sudo curl https://ycc.io/scripts/ftdi_fixer.py -o/root/ftdi_fixer.py  
sudo chmod +x /root/ftdi_fixer.py  
</code></pre>

<p>Now that everything is in place, be sure to unplug your cable (if it's plugged in), then reload the udev rules:</p>

<pre><code>sudo udevadm control --reload  
</code></pre>

<p>Now, when you plug in your cable, you should see entries like the following in syslog or dmesg:  </p>

<pre><code>[3117880.476085] ftdi_sio ttyUSB1: FTDI USB Serial Device converter now disconnected from ttyUSB1
[3117880.476109] ftdi_sio 3-14:1.1: device disconnected
[3117881.483165] ftdi_sio ttyUSB0: FTDI USB Serial Device converter now disconnected from ttyUSB0
[3117881.483193] ftdi_sio 3-14:1.0: device disconnected
</code></pre>

<p>This means that the device was disconnected <em>from the driver</em> (which is what we want). Be sure to check <code>/var/log/syslog</code> for execution errors for the ftdi_fixer.py script if you encounter problems.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Now that the device is connected and ready to go, you should be able to access the adapter from within Lattice Diamond or Lattice Programmer (formerly called ispVM).</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170615_214537.png" alt="Lattice Diamond on Ubuntu 16.04"></p>

<h3 id="notes">Notes</h3>

<ul>
<li>Cable/adapter detection, JTAG chain scanning, and Flash programming are <em>super</em> slow on Linux. While troubleshooting problems initially, I noticed that the Lattice software re-enumerates and checks every single USB device prior to starting any of these operations.</li>
<li>The MachXO3 Starter Kit, as well as many other Lattice dev &amp; starter boards, feature an <a href="http://www.ftdichip.com/Products/ICs/FT2232H.html">FTDI FT2232H</a>. This is common on many recent programming cables, as it allows using the <a href="http://www.ftdichip.com/Support/SoftwareExamples/MPSSE.htm">MPSSE interface</a> for banging out JTAG-- and FTDI even have an example that features JTAG (see previous link). The Lattice tools use the ftd2xx-mpsse driver on Linux. However, I was not able to get <a href="http://urjtag.org/">urJTAG</a> to communicate with the board using its generic FT2232 cable type and matching settings.</li>
<li>Using an FTDI MPSSE cable (<a href="http://www.mouser.com/ProductDetail/FTDI/C232HM-DDHSL-0">Mouser</a>, <a href="https://www.amazon.com/FTDI-C232HM-DDHSL-0-Assembly-USB-MPSSE-Output/dp/B00HKK4SCO/">Amazon</a>, <a href="https://www.digikey.com/products/en?keywords=C232HM-DDHSL">Digikey</a>) might work as an in-circuit JTAG programmer without any modification (as long as the VID/PID are <code>0x0403</code>/<code>0x6010</code>, I believe this should work without any issues). Need to buy one and actually test this.</li>
</ul>]]></content:encoded></item><item><title><![CDATA[VGA Passthrough with OVMF+VFIO on Ubuntu 16.04]]></title><description><![CDATA[Walkthrough and thoughts on the setup and configuration of a Windows 10 KVM VM, using PCI passthrough to enable near-native gaming performance.]]></description><link>https://ycnrg.org/vga-passthrough-with-ovmf-vfio/</link><guid isPermaLink="false">677ad9d6-178e-48f0-a622-54f5e6c3338c</guid><category><![CDATA[linux]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[kvm]]></category><category><![CDATA[vga passthrough]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sun, 12 Mar 2017 07:40:30 GMT</pubDate><media:content url="https://ss.ycnrg.org/jotunn-win10_20170305_235642.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/jotunn-win10_20170305_235642.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"><p><img src="https://ss.ycnrg.org/jotunn-win10_20170305_235642.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>After many months of research and reading through various articles detailing VGA passthrough builds (such as <a href="http://dominicm.com/gpu-passthrough-qemu-arch-linux/">here</a>, <a href="https://www.reddit.com/r/pcmasterrace/comments/2z0evz/gpu_passthrough_or_how_to_play_any_game_at_near/">here</a>, and <a href="https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/">here</a>), I finally decided to upgrade my machine&mdash; with PCI passthrough being a primary objective of the new build. I have never liked dual-booting, and using Windows as my primary OS is not really an option, as far as I'm concerned. My major issue now is that I need a bigger desk to stick more monitors on ;)</p>

<p>This write-up details my experience in setting up a Windows 10 guest to run on an Ubuntu 16.04 host. I have used libvirt to manage things, and instructions for using virt-manager to perform most tasks have been provided (some configuration, such as CPU pinning or using raw block storage, is not possible via virt-manager). There are a <em>lot</em> of places where things can go wrong, and I'll try to point those out. This should not be considered an exhaustive guide&mdash; I've provided links to additional resources at the end of this document in case you get stuck, or have needs that differed from mine.</p>

<h2 id="requirements">Requirements</h2>

<ul>
<li><strong>CPU must support virtualization extensions</strong> (VT-x for Intel)</li>
<li><strong>CPU must support Directed I/O</strong> (VT-d for Intel, generically known as <strong>IOMMU</strong>)</li>
<li><strong>Motherboard must support VT-x and VT-d (or AMD equivalents).</strong> When buying new hardware, check the motherboard's User Manual, which can typically be found on the product page on the manufacturer's website. <a href="https://ss.ycnrg.org/mobo_vtd_support.png">Example from the PDF manual for my motherboard</a>, which I checked before purchasing.</li>
</ul>

<h4 id="setuprecommendations">Setup recommendations</h4>

<ul>
<li><strong>Kernel should be 4.1 or newer;</strong> otherwise you may need to apply various workarounds. vfio-pci is natively supported in 4.1.</li>
<li><strong>Your host operating system should be installed in UEFI mode, and your machine set to boot via UEFI</strong></li>
<li><strong>The graphics card you plan to passthrough should have a UEFI or Hybrid BIOS</strong></li>
<li><strong>Your CPU should fully support ACS if you don't want to worry about IOMMU groupings.</strong> If you have a CPU that does not fully support ACS and your IOMMU groupings are less than ideal, workarounds can be done. <a href="http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html">Intel CPUs with full ACS support</a>. With my CPU and X99A mainboard combo, every single PCI device is inside of its own IOMMU group, without the need for quirks/patches. Many people have setup VGA passthrough without full ACS support-- but having it means one less thing to worry about.</li>
<li><strong>Two sets of keyboard/mice, or a <a href="https://en.wikipedia.org/wiki/KVM_switch">KVM switch</a>.</strong> Once your OS is installed, you'll need to pass through a mouse and keyboard, which will then be unusable by the host. After initial setup, you can use something like <a href="https://symless.com/synergy/">Synergy</a> if you plan to use both the guest and host simultaneously. Otherwise a KVM switch (or a <a href="https://www.amazon.com/s/?url=search-alias&amp;field-keywords=usb+switch">USB switch</a>) might be a good/simple option.</li>
<li><strong>The graphics card you're passing through should NOT be the card initialized during boot.</strong> In my setup, slot <em>PCIE_1</em> is the primary 16x/16x slot-- my host's graphics card is connected to this slot, which is used during boot. The guest's card is connected to <em>PCIE_5</em> (16x/8x)</li>
<li><strong>Should have two or more discrete PCIe graphics cards</strong>. Although using Intel IGD for your host is possible, it is more difficult and error-prone to get working.</li>
</ul>

<h2 id="mysetup">My Setup</h2>

<h3 id="software">Software</h3>

<ul>
<li>Xubuntu 16.04 (64-bit UEFI)</li>
<li>Linux kernel 4.4.0-59</li>
<li>QEMU 2.5.0</li>
<li>libvirt 1.3.1</li>
<li>virt-manager 1.3.2</li>
</ul>

<h3 id="hardware">Hardware</h3>

<ul>
<li><strong>CPU:</strong> Intel Core i7-6800K Broadwell-E <em>(6 cores/12 threads, 3.4 GHz, LGA 2011-v3, 140W TDP)</em> <a href="https://ark.intel.com/products/94189/Intel-Core-i7-6800K-Processor-15M-Cache-up-to-3_60-GHz">[ARK]</a></li>
<li><strong>Motherboard:</strong> MSI X99A XPOWER GAMING TITANIUM <em>(LGA2011-3, Intel X99A Chipset)</em> <a href="https://us.msi.com/Motherboard/X99A-XPOWER-GAMING-TITANIUM.html">[Mfg link]</a></li>
<li><strong>Memory:</strong> Corsair Vengeance LPX 32GB kit <em>(4x8GB DDR4 2133/3200)</em> <a href="http://www.corsair.com/en-us/vengeance-lpx-32gb-4x8gb-ddr4-dram-2666mhz-c16-memory-kit-black-cmk32gx4m4a2666c16">[Mfg link]</a></li>
<li><strong>GPU 1 (host/boot, slot PCIE_1)</strong>: NVIDIA GeForce GTX 770 <em>(4GB GDDR5, GK104, Rev a1)</em> <a href="http://www.nvidia.com/gtx-700-graphics-cards/gtx-770/">[Mfg link]</a></li>
<li><strong>GPU 2 (guest, slot PCIE_5)</strong>:  XFX Radeon GTR RX 480 <em>(1338MHz, 8GB GDDR5, "Hardswap Fan Black Edition")</em> <a href="http://www.xfxforce.com/en-us/products/amd-radeon-rx-400-series/rx-480-gtr-black-8gb-dd-led-rx-480p8dba6">[Mfg link]</a></li>
</ul>

<p><img src="https://ss.ycnrg.org/jotunn_20170309_015956.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04">
<strong>New hardware for my VGA passthrough build</strong></p>

<h2 id="initialhostsetup">Initial Host Setup</h2>

<h3 id="ueficheck">UEFI Check</h3>

<p>To ensure that your host has booted via UEFI, check dmesg for EFI-related messages. It is also possible to confirm by checking that <code>/sys/firmware/efi/efivars</code> is populated.</p>

<pre><code>dmesg | grep -i efi  
</code></pre>
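<p>The <code>efivars</code> check mentioned above can also be scripted; a minimal sketch:</p>

<pre><code># Report UEFI vs. legacy boot based on the presence of efivars
if [ -d /sys/firmware/efi/efivars ] &amp;&amp; [ -n "$(ls -A /sys/firmware/efi/efivars 2&gt;/dev/null)" ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi
</code></pre>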

<h3 id="installrequiredpackages">Install required packages</h3>

<p>First, we need to install KVM, libvirt, and OVMF:  </p>

<pre><code>sudo apt-get update  
sudo apt-get install qemu-kvm qemu-utils qemu-efi ovmf libvirt-bin libvirt-dev libvirt0 virt-manager  
</code></pre>

<h3 id="updatemoduleslist">Update modules list</h3>

<p>Open up <code>/etc/modules</code> and append the following:  </p>

<pre><code>pci_stub  
vfio  
vfio_iommu_type1  
vfio_pci  
vfio_virqfd  
kvm  
kvm_intel  
</code></pre>

<h3 id="enableiommu">Enable IOMMU</h3>

<p>Now, we need to enable IOMMU support in the kernel at boot-time. To do this with GRUB, edit <code>/etc/default/grub</code> and append <code>intel_iommu=on</code> to the <code>GRUB_CMDLINE_LINUX_DEFAULT</code> option. On a stock install of Ubuntu 16.04, this then becomes:</p>

<pre><code>GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"  
</code></pre>

<p>Now update GRUB:</p>

<pre><code>sudo update-grub  
</code></pre>

<h3 id="rebootcheckup">Reboot &amp; Check-up</h3>

<p>Reboot your machine. This will allow the kernel to boot with IOMMU enabled, and will also load the vfio and pci-stub modules we listed previously.</p>

<p>To ensure IOMMU has been enabled, check for the string <code>Directed I/O</code>, which will be prefixed with either <code>DMAR</code> or <code>PCI-DMA</code>.  </p>

<pre><code>~$ dmesg | grep -i 'Directed I/O'
[    0.750152] DMAR: Intel(R) Virtualization Technology for Directed I/O
</code></pre>

<h3 id="determinepciids">Determine PCI IDs</h3>

<p>If you haven't done so already, you need to determine the PCI device IDs and bus location of the device(s) you want to pass through. For modern video cards with HDMI audio, you'll also want to pass through the audio device, which typically has the same location, but a different function (<code>BUS:SLOT.FUNC</code> is the location format-- <code>03:00.1</code> is bus 3, slot 0, function 1).</p>

<p>To find video cards and their HDMI audio buddies:</p>

<pre><code>lspci -nn | grep -A1 VGA  
</code></pre>

<p>On my machine, I get:</p>

<pre><code>03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:67df] (rev c7)  
03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aaf0]  
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 770] [10de:1184] (rev a1)  
04:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)  
</code></pre>

<p>Since I want to pass through the AMD card, I will make note of the PCI VID/PIDs: <code>1002:67df</code> and <code>1002:aaf0</code> for the VGA and Audio device, respectively (<code>VID:PID</code> is the format; in this case <code>1002:67df</code> has a vendor ID of 0x1002 and product ID of 0x67df). We also need to remember the location, which we will use to determine IOMMU groupings. In the example above, <code>03:00.0</code> and <code>03:00.1</code> are my locations.</p>

<h3 id="checkiommugroupings">Check IOMMU Groupings</h3>

<p>Next up is to determine whether the device(s) you want to pass through are in isolated groups (i.e., their groups do not also contain other devices that you want to leave delegated to the host).</p>

<p>Listing the contents of all IOMMU groups:  </p>

<pre><code>find /sys/kernel/iommu_groups/*/devices/*  
</code></pre>

<p>You can also use this ugly one-liner I wrote, which injects the IOMMU group number into the lspci output:  </p>

<pre><code>for dp in $(find /sys/kernel/iommu_groups/*/devices/*); do ploc=$(basename $dp | sed 's/0000://'); igrp=$(echo $dp | awk -F/ '{print $5}'); dinfo=$(lspci -nn | grep -E "^$ploc"); echo "[IOMMU $igrp] $dinfo" ; done  
</code></pre>
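<p>Expanded with comments (and using a shell glob in place of <code>find</code>), the one-liner reads:</p>

<pre><code>for dp in /sys/kernel/iommu_groups/*/devices/*; do
    ploc=$(basename "$dp" | sed 's/0000://')    # PCI location, e.g. 03:00.0
    igrp=$(echo "$dp" | awk -F/ '{print $5}')   # IOMMU group number from the path
    dinfo=$(lspci -nn | grep -E "^$ploc")       # matching lspci description
    echo "[IOMMU $igrp] $dinfo"
done
</code></pre>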

<p>In my example, I have determined that my AMD card (and its audio device) are both in group 31. Furthermore, these are the <em>only</em> devices in group 31. This means we should have no problems passing it through with vfio-pci.  </p>

<pre><code>[IOMMU 31] 03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:67df] (rev c7)
[IOMMU 31] 03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aaf0]
</code></pre>

<h3 id="stubbingwithpcistub">Stubbing with pci-stub</h3>

<p>Now that we've confirmed the IOMMU grouping and have the required info, we can set up <code>pci-stub</code> to claim these devices at boot. This prevents the host from assigning a kernel driver to them.</p>

<p>Open up <code>/etc/initramfs-tools/modules</code>, then add the ID(s) of your devices that should be reserved by pci-stub:  </p>

<pre><code>pci_stub ids=VID:PID,VID:PID,...  
</code></pre>

<p>On my machine, this line contains the PCI VID/PIDs for the AMD VGA and Audio devices:  </p>

<pre><code>pci_stub ids=1002:67df,1002:aaf0  
</code></pre>

<blockquote>
  <p><strong>Important note:</strong> If you have other PCI devices that share the same VID/PID (eg. two identical graphics cards), and you plan to delegate one to the host, and the other to the guest-- then this method won't work. Check Alex's <code>vfio-pci-override-vga.sh</code> script at <a href="http://vfio.blogspot.co.uk/2015/05/vfio-gpu-how-to-series-part-3-host.html">http://vfio.blogspot.co.uk/2015/05/vfio-gpu-how-to-series-part-3-host.html</a> or use <code>xen-pciback</code> instead, which uses bus/location IDs rather than vendor IDs.</p>
</blockquote>

<p>Now we need to rebuild the initrd image:  </p>

<pre><code>update-initramfs -u  
</code></pre>

<p>Once this has completed, reboot.</p>

<p>Once the machine has restarted, check dmesg to ensure that pci-stub has claimed the devices correctly:  </p>

<pre><code>~$ dmesg | grep pci-stub
[    2.151798] pci-stub: add 1002:67DF sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    2.151815] pci-stub 0000:03:00.0: claimed by stub
[    2.151819] pci-stub: add 1002:AAF0 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    2.151827] pci-stub 0000:03:00.1: claimed by stub
</code></pre>

<p>Above, we can see that both <code>03:00.0</code> (AMD VGA) and <code>03:00.1</code> (AMD Audio) were successfully reserved by pci-stub for our future guest VM.</p>

<h3 id="networkprep">Network Prep</h3>

<p>If you don't plan on passing through a network device to your virtual machine, you'll need to configure a suitable bridge on the host so that the guest and host can communicate. Although <code>macvtap</code> is probably the simplest, no-setup option, it has a major disadvantage-- your guest will be unable to communicate with the host. Instead, we'll create a bridge on the host and use the <code>virtio</code> network interface for best performance.</p>

<p>Add the following to the bottom of your <code>/etc/sysctl.conf</code> file:  </p>

<pre><code># Enable IPv4 forwarding
net.ipv4.ip_forward=1  
net.ipv4.conf.all.rp_filter=1  
net.ipv4.icmp_echo_ignore_broadcasts=1  
net.ipv4.conf.default.proxy_arp=1

# Enable IPv6 forwarding; disable autoconfiguration (optional)
net.ipv6.conf.all.autoconf = 0  
net.ipv6.conf.all.accept_ra = 0  
net.ipv6.conf.all.forwarding=1  
net.ipv6.conf.all.proxy_ndp=1
</code></pre>

<p>If you're using NetworkManager (enabled by default in Ubuntu Desktop installations), you'll want to ensure the following setting is present in <code>/etc/NetworkManager/NetworkManager.conf</code>. This instructs NetworkManager to ignore any interface that we have explicitly configured in <code>/etc/network/interfaces</code>, but still allows us to use NetworkManager for WiFi, Bluetooth, and VPN configuration.</p>

<pre><code>[ifupdown]
managed=false  
</code></pre>
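<p>If you'd rather script this change than edit the file by hand, here's a sketch that flips the setting only inside the <code>[ifupdown]</code> section. It operates on a throwaway copy; on a real host, point <code>CONF</code> at <code>/etc/NetworkManager/NetworkManager.conf</code> and run it as root.</p>

```shell
# Sketch: flip managed=true to managed=false inside the [ifupdown] section.
# Uses a throwaway copy here; on a real host set:
#   CONF=/etc/NetworkManager/NetworkManager.conf
CONF=$(mktemp)
printf '[main]\nplugins=ifupdown,keyfile\n\n[ifupdown]\nmanaged=true\n' > "$CONF"

# Restrict the substitution to the [ifupdown] section only
sed -i '/^\[ifupdown\]/,/^\[/ s/^managed=true/managed=false/' "$CONF"

grep -A1 '^\[ifupdown\]' "$CONF"
# -> [ifupdown]
#    managed=false
```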

<p>Below is a sample network configuration file (<code>/etc/network/interfaces</code>), where the primary interface (in this example, <code>eno1</code> -- change this to match your chosen interface) is a part of the <code>vbr0</code> bridge.  </p>

<pre><code>auto lo  
iface lo inet loopback

auto eno1  
iface eno1 inet manual  
iface eno1 inet6 manual

auto vbr0  
iface vbr0 inet static  
  address YOUR_IPV4_ADDRESS
  netmask YOUR_IPV4_NETMASK
  gateway YOUR_IPV4_GATEWAY

  bridge_ports eno1
  bridge_stp off
  bridge_waitport 0
  bridge_fd 0

iface vbr0 inet6 static  
  address YOUR_IPV6_ADDRESS
  netmask 64
  gateway YOUR_IPV6_GATEWAY

  bridge_ports eno1
  bridge_stp off
  bridge_waitport 0
  bridge_fd 0
</code></pre>

<p>Finally, create the new bridge and add the primary interface to it. Once you do this, you will lose network connectivity until you reboot your machine or restart networking. Remember to change <code>eno1</code> to match your interface.</p>

<pre><code>brctl addbr vbr0  
brctl addif vbr0 eno1  
</code></pre>

<p>Once complete, reboot.</p>
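<p>After the reboot, you can confirm that the interface actually joined the bridge with <code>brctl show</code>. The snippet below parses a hypothetical output sample (the bridge ID is made up); on a real host, just run the command itself.</p>

```shell
# Hypothetical `brctl show` output; on a real host just run: brctl show
sample='bridge name     bridge id           STP enabled     interfaces
vbr0            8000.d05099c0ffee   no              eno1'

# Print each bridge along with its member interface
printf '%s\n' "$sample" | awk 'NR > 1 {print $1 " -> " $NF}'
# -> vbr0 -> eno1
```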

<h2 id="guestvmsetup">Guest VM Setup</h2>

<h3 id="vmcreationinitialconfiguration">VM Creation &amp; Initial Configuration</h3>

<p>During OS installation, there is no need to passthrough the GPU. Instead, we'll use Spice (or VNC) for ease of use.</p>

<p>This part can be done in virt-manager-- create a new VM, choose your OS (in my case, I installed Windows 10 Pro), select the installation ISO, then set up your storage. In my case, since I wanted to passthrough an entire physical SSD to my VM, this wasn't possible to set up via the virt-manager GUI (although it's possible I just didn't have the patience to figure it out). If you plan to do this (or some other non-pool storage scenario), choose No Storage during the creation wizard. We can edit the XML later.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170126_205055.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>As mentioned above in the <em>Network Prep</em> section, it is recommended that you configure and use a bridge on the host machine if you want to be able to communicate with the host from the guest. In the setup wizard, choose the bridge you created (in my case, I named the bridge <em>jobr0</em>). Also be sure to tick the box "Customize configuration before install".</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170128_102525.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>To complete our VM configuration:</p>

<ul>
<li>In the <strong>Overview</strong> section, make sure to change Firmware to <em>UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd</em> -- the path to the OVMF ROM may differ slightly depending on your installation. <strong>(This part is important! If you don't have this option in virt-manager, then you'll need to manually edit the XML prior to install)</strong></li>
<li>In the <strong>CPUs</strong> section, set the model to <em>host-passthrough</em> (you will need to enter it manually). <a href="https://libvirt.org/formatdomain.html#elementsCPU">More info on host-passthrough and CPU model configuration</a>. Adjust the number of cores you want to allocate to the guest, if you haven't done so already.</li>
<li>In the <strong>Boot Options</strong> section, tick the box for IDE CDROM1 to set it as the primary boot device.</li>
<li>If you're using any VirtIO devices, click <strong>Add Hardware</strong> and add another IDE CDROM drive, then attach the VirtIO iso to it. This is required to be able to install Windows on a VirtIO disk! See the note below this list for a link to download the iso.</li>
<li>In the <strong>NIC</strong> section, change the Device model to <em>virtio</em>. If you have a specific need, you can set the MAC address here as well. Otherwise you can leave it as the default.</li>
<li>If you haven't already added a storage device, click <em>Add Hardware</em> to do so now. I would recommend using <em>VirtIO</em> as the bus type for best performance. You may also wish to change the cache mode to writethrough, as your guest is likely going to be doing its own caching.</li>
</ul>
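<p>Once you've applied these settings, it's worth sanity-checking the generated domain XML (normally viewed with <code>virsh dumpxml &lt;name&gt;</code>) for the two settings that are easiest to lose: the OVMF loader and the CPU mode. Below is a sketch against hypothetical one-line excerpts:</p>

```shell
# Hypothetical one-line excerpts from `virsh dumpxml win10` (domain name and
# loader path are examples -- yours may differ):
loader_line='<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>'
cpu_line='<cpu mode="host-passthrough"/>'

# Pull the loader path out of the element body...
printf '%s\n' "$loader_line" | sed 's/<[^>]*>//g'
# -> /usr/share/OVMF/OVMF_CODE.fd

# ...and the CPU mode out of the attribute
printf '%s\n' "$cpu_line" | sed 's/.*mode="\([^"]*\)".*/\1/'
# -> host-passthrough
```

If either value is missing from your domain XML, edit it before starting the installation.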

<p><img src="https://ss.ycnrg.org/jotunn_20170128_105018.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p><strong>Check here for an <a href="https://ycc.io/conf/ovmf/win10_nopass.xml">example libvirt XML configuration for OS installation phase</a>.</strong></p>

<blockquote>
  <p>The Windows libvirt drivers are supplied by RedHat and can be found over at the Fedora wiki: <a href="https://fedoraproject.org/wiki/Windows_Virtio_Drivers">Windows Virtio Drivers</a>. Unless you run into problems, you'll most likely want to choose the stable drivers.</p>
</blockquote>

<p>Now we are ready to start the installation&mdash; click the <strong>Begin Installation</strong> button at the top-left. If you need to modify your XML manually (for example, to use a raw block device for storage), then you will still click the <em>Begin Installation</em> button, then immediately stop the VM. This will allow the VM Creation Wizard to save your VM configuration.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170128_105815.png" class="no-fluid" title="TianoCore OVMF boot splash" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>If installing Windows and you miss the 'Press any key to boot from CD/DVD...' prompt, then you'll be dumped into OVMF's UEFI shell. The easiest thing to do is just reboot again, or you can navigate to the UEFI binary from the shell.</p>

<p>If you're installing Windows and using a VirtIO disk, you'll be greeted with an empty list of disks to choose from. This is where the VirtIO iso from RedHat/Fedora comes into play.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170128_110312.png" class="no-fluid" title="We couldn't find any drives! From here, we click Load Driver button to load the VirtIO iso" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>Click the <strong>Load Driver</strong> button. Setup will complain that it couldn't find any compatible driver, but that's OK. Click <em>Browse</em> in the lower-left, then navigate to the appropriate directory for your OS. For Windows 10, this was <code>E:\viostor\w10\amd64</code>. <br>
<img src="https://ss.ycnrg.org/jotunn_20170128_110605.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"> <br>
<img src="https://ss.ycnrg.org/jotunn_20170128_110931.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<h3 id="finalpassthroughsetup">Final Passthrough Setup</h3>

<p>After completing OS installation using Spice or VNC, power down the VM so that the final adjustments can be made.</p>

<h4 id="removeunnecessarydevices">Remove Unnecessary Devices</h4>

<p>First, remove any of the following devices from your VM configuration, if they exist:</p>

<ul>
<li>Display Spice</li>
<li>Video Spice/QXL/VNC</li>
<li>Channel spice</li>
<li>Tablet</li>
<li>Mouse</li>
<li>USB Redirector (Type: SpiceVMC)</li>
</ul>

<p>The <em>IDE CDROM</em> device created for the Windows ISO can be removed; however, leave the device associated with the RedHat VirtIO drivers connected, as we'll use it later to install additional drivers.</p>

<h4 id="addguestvideocard">Add Guest Video Card</h4>

<p>If using virt-manager, click the <strong>Add Hardware</strong> button, then choose <em>PCI Host Device</em> from the list. This will present you with a list of all of the devices on your PCI bus, similar to what you'd see from <code>lspci</code>. Most modern video cards will have a function for video (typically <code>.0</code>), and a function for audio (typically <code>.1</code>). Be sure to add both devices.</p>
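<p>To enumerate both functions from the host side before adding them, you can filter <code>lspci -nn</code> by the slot. The sample lines below are hypothetical (the device names are illustrative), but the vendor/device IDs match the earlier pci-stub example:</p>

```shell
# Hypothetical `lspci -nn` lines for the guest GPU; on a real host use:
#   lspci -nn | grep 03:00
lspci_sample='03:00.0 VGA compatible controller [0300]: AMD Radeon RX 480 [1002:67df]
03:00.1 Audio device [0403]: AMD Ellesmere HDMI Audio [1002:aaf0]'

# Both functions (.0 video, .1 audio) must be passed through together
printf '%s\n' "$lspci_sample" | awk '{print $1}'
# -> 03:00.0
#    03:00.1
```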

<p>In the screenshot, the Video and Audio functions associated with the AMD RX480 can be seen on my machine.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170303_212950.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<h4 id="configureguestusb">Configure Guest USB</h4>

<p>You will also need a way to interact with your VM. While sharing your existing mouse and keyboard is possible, it is not recommended, since you won't be able to shut your machine down locally or troubleshoot if something goes wrong.</p>

<p>On my machine, I have a KVM switch that connects to the guest and host (obviously, both USB ports are going to be physically on the same machine-- this might make more sense later). This setup allows me to pass input directly to the guest for initial setup, troubleshooting, as well as gaming. For everything else, a program such as <a href="https://symless.com/synergy/">Synergy</a> can be used (you'll still need a keyboard/mouse connected <em>before</em> you get that working, though).</p>

<p><strong>Option 1:</strong> <em>(Best option, in my opinion)</em> Use PCI passthrough to give the guest an entire USB bus. This has a few advantages: 1) Allows hotplugging devices while the guest is running, 2) No need to map devices by USB IDs or ports (which can change), 3) Lowest latency. The main disadvantage is that you'll need to determine which physical USB ports map to which USB bus. On my motherboard, even though it has 22 USB ports, <em>most</em> of those ports are on the same USB bus. Thankfully, it does have 4 ports (all grouped together nicely) on the back, which are on their own bus. This may or may not be a viable option, depending on your motherboard and the USB connectivity needs of the host. See the <a href="https://ycnrg.org/vga-passthrough-with-ovmf-vfio/#usbmapping">Additional Tweaks: USB Mapping</a> section for more information.</p>

<p>Once you've determined the PCI device associated with the USB bus you'd like to passthrough, add a new <strong>PCI Host Device</strong> in virt-manager. In the example screenshot below, I will be passing through the <em>VIA VL805</em> controller, which is mapped to 4 ports on the back of my motherboard, as I mentioned earlier.</p>

<p>Note that <strong>we don't need to stub-out the USB bus</strong> in question. In fact, this would be quite difficult, since the USB driver is typically a module statically built into the kernel, and is one of the very first modules initialized (it would require a custom kernel build, and in the end, wouldn't be very useful). When your host first starts, all devices will associate with the host. When the guest starts, all devices will "disconnect" from the host, and the PCI device will become available to the guest. On shutdown of the guest, the devices will reconnect to the host. If this is a problem for your setup, <a href="https://projectgus.com/2014/09/blacklisting-a-single-usb-device-from-linux/">udev rules can be used to 'blacklist' single devices</a>, or you can <a href="https://wiki.archlinux.org/index.php/Kernel_modules#Blacklisting">blacklist entire modules</a>-- just be sure they aren't being used by anything the host needs!</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170303_215947.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p><strong>Option 2:</strong> Use USB passthrough to provide the guest access to individual USB devices. The main advantage of this option is that it's usually super easy to set up. One drawback is that the KVM switch option I mentioned earlier will not work with this method (since the USB device needs to exist at the time the VM is started in order to pass it through). Additionally, this incurs additional overhead on the host, and increases latency (not really an issue for HID devices like keyboards/mice, but is definitely an issue for anything that requires timely isochronous transfers, like USB audio).</p>

<p>In virt-manager, click <strong>Add Hardware</strong>, then select <strong>USB Host Device</strong>. This will provide a list of devices, similar to the output of <code>lsusb</code>. You may notice in the screenshot that there are two identical Logitech devices. When this happens, virt-manager will automatically include the bus location in the configuration. However, this location can (and usually does) change between host reboots. <br>
<img src="https://ss.ycnrg.org/jotunn_20170303_215504.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<h4 id="bootingup">Booting up!</h4>

<p>Once everything is configured to your liking, it's time to boot the VM! Make sure you have a usable input device as discussed above, then start it up. The video below is a quick demo of starting the VM on my machine. Towards the end of the video, you can hear the Windows "USB disconnect" sound as I hit my KVM's foot-pedal to switch input back to the host.</p>

<iframe width="800" height="450" src="https://www.youtube.com/embed/kNjdreZScwU" frameborder="0" allowfullscreen></iframe>

<blockquote>
  <p><strong>Sample libvirt domain XML:</strong> For reference, the libvirt configuration I used can be found <a href="https://ycc.io/conf/ovmf/win10_passthru_final.xml">here</a>. This also demonstrates how to passthrough entire SSDs or single partitions.</p>
</blockquote>

<h3 id="guestdriverinstallation">Guest Driver Installation</h3>

<p>Now that the guest OS has booted up, it's time to setup the VirtIO drivers. If you're using Linux, this is done automatically. On Windows, we'll use the <a href="https://fedoraproject.org/wiki/Windows_Virtio_Drivers">ISO that was mounted previously</a>. This is required if you've decided to use the VirtIO Network Adapter (less host overhead, better performance) rather than an emulated adapter.</p>

<p>Here's a recap in case you skipped over this before, or no longer have the ISO mounted. Create a new Storage device, change the device type to CDROM, then select the <strong>virtio-win</strong> ISO. <br>
<img src="https://ss.ycnrg.org/jotunn_20170306_010115.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<h4 id="ethernetdriver">Ethernet Driver</h4>

<p>After Windows has booted up, open up the Device Manager, and you should spot an unknown Ethernet Controller in the <em>Other devices</em> section. Right click on the Ethernet Controller, then choose <strong>Update Driver Software</strong>. Then when prompted, select <strong>Browse my computer for driver software</strong>. You can then click <strong>Browse...</strong> to navigate to the correct directory for your version of Windows. The VirtIO Ethernet driver is under the <strong>NetKVM</strong> directory. Once expanded, select the directory corresponding to your guest OS, then choose the architecture (<em>amd64</em> for 64-bit, <em>x86</em> for 32-bit).</p>

<p><img src="https://ss.ycnrg.org/jotunn-win10_20170306_011613.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>Once completed, the new network adapter should pop into your Network Adapters section, and can be configured. The current version of the driver should provide you with a 10Gbps connection.</p>

<p><img src="https://ss.ycnrg.org/jotunn-win10_20170306_011420.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"> <br>
<img src="https://ss.ycnrg.org/jotunn-win10_20170306_012812.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<h4 id="balloondriver">Balloon Driver</h4>

<p>After installing the Ethernet driver, you can also install the VirtIO Balloon driver. While this is optional, you might as well do it since you've already got the ISO mounted and you're in the Device Manager! This will allow the Windows guest to dynamically expand and relinquish portions of memory as needed.</p>

<p>The balloon interface will show up as an <strong>Unknown device</strong> under <em>Other devices</em>. Right click, select <strong>Update driver software...</strong> and follow the same steps that were done for the Ethernet driver. This time, you'll expand the <strong>balloon</strong> folder, then choose the correct OS and Arch.</p>

<p>If you added any other custom VirtIO devices, you can install them via the same method (such as serial ports).</p>

<h4 id="storagedriver">Storage Driver</h4>

<p>The VirtIO SCSI driver should already be installed if you followed the Load Driver process during installation.</p>

<h4 id="videomiscdrivers">Video &amp; Misc Drivers</h4>

<p>Now that your network interface is (hopefully) working, you can take the time to install the latest drivers for your passthrough VGA card. Other than that, you shouldn't really need to install any other drivers, unless you've attached or passed through obscure hardware.</p>

<ul>
<li><a href="http://support.amd.com/en-us/download">AMD</a></li>
<li><a href="http://www.nvidia.com/Download/index.aspx">Nvidia</a></li>
</ul>

<h2 id="additionaltweaks">Additional Tweaks</h2>

<p>Hopefully your guest OS is now fully up &amp; running&mdash; maybe with a few kinks to work out (for me, audio was a pain in the ass to get working, while video worked flawlessly the first time). I've outlined below a few additional tweaks, as well as some alternative methods that can be used.</p>

<h3 id="usbcontrollerpassthrough">USB Controller Passthrough</h3>

<p>As explained in the setup section, passing through an entire USB bus is (in my opinion) the best option for USB connectivity. It allows hotplugging to be handled by the guest OS, there's no need to worry about device mapping when booting your guest, and it also helps reduce host overhead and latency.</p>

<p>The downside: Which ports map to which bus? For my motherboard, I created this Google Docs spreadsheet to help me map out the ports on my board (my motherboard has 22 USB ports!). <br>
<img src="https://ss.ycnrg.org/jotunn_20170127_200147.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"></p>

<p>To make things a bit easier, I wrote a simple (nasty) Bash one-liner that will enumerate all of a machine's USB devices, showing which USB bus they are connected to, and the PCI bus location of each USB controller, as well as the corresponding VID:PID combos for both buses.</p>

<pre><code>echo -en "\n USB BUS [PCI BUS] -- DESCRIPTION [PCI VID:PID]\n\tDEVICE NUM: USB VID:PID DESCRIPTION\n"; for upath in /sys/bus/pci/devices/0000:*/usb*; do arx=($(echo "$upath" | perl -ne '/^.+0000:(.+)\/usb([0-9]+)$/ &amp;&amp; print "$1 $2"')); loc=${arx[0]}; bus=${arx[1]}; hname=$(lspci -nn | grep "^$loc" | awk -F: '{print $3 ":" $4}'); echo "** Bus ${bus} [${loc}] -- ${hname}"; lsusb -s "${bus}:" | sed 's/^Bus ..../\t/' | sort -n ; done  
</code></pre>

<p>For example, here is (abbreviated) output from my machine:  </p>

<pre><code> USB BUS [PCI BUS] -- DESCRIPTION [PCI VID:PID]
    DEVICE NUM: USB VID:PID DESCRIPTION
** Bus 3 [00:14.0] --  Intel Corporation C610/X99 series chipset USB xHCI Host Controller [8086:8d31] (rev 05)
    Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Device 002: ID 0d8c:0008 C-Media Electronics, Inc. 
    Device 005: ID 0644:0200 TEAC Corp. All-In-One Multi-Card Reader CA200/B/S
    Device 006: ID 8087:0a2b Intel Corp. 
    Device 007: ID 046d:c52b Logitech, Inc. Unifying Receiver
    Device 024: ID 046d:0a29 Logitech, Inc. H600 [Wireless Headset]
    Device 039: ID 0566:3055 Monterey International Corp. 
    Device 040: ID 046d:c52b Logitech, Inc. Unifying Receiver
** Bus 4 [00:14.0] --  Intel Corporation C610/X99 series chipset USB xHCI Host Controller [8086:8d31] (rev 05)
    Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
** Bus 1 [00:1a.0] --  Intel Corporation C610/X99 series chipset USB Enhanced Host Controller #2 [8086:8d2d] (rev 05)
    Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Device 002: ID 8087:800a Intel Corp. 
</code></pre>

<p>If you have a lot of devices plugged in, this may give you a pretty good idea of how things might be laid out. If not, you can use something like a phone/tablet or USB thumb drive, and plug it into each port of your computer&mdash; making note of which bus that port is connected to each time you plug the device in.</p>

<p>Once you've mapped out which ports belong to which bus, you can then check the output from the one-liner above (or <code>lspci -nn</code>) and use the corresponding PCI VID/PID when adding a new <strong>PCI Host Device</strong> to your virtual machine.</p>

<p>As noted before, there's no need to stub-out the USB controller, since all devices will be gracefully disconnected prior to handoff from the host to guest VM. As a consequence of this, devices attached at startup will first be initialized by the host until the guest machine starts and takes control of the USB controller.</p>

<h3 id="rawblockdeviceswithvirtio">Raw Block Devices with VirtIO</h3>

<p>Rather than using QCOW2 (default) or LVM2 as a backing store for your new VM, an entire SSD (or a partition from an SSD) can be used instead. The primary advantage of doing this is to allow the disk to be directly accessed from other operating systems if you ever need to dual-boot or place the drive in another machine. It also bypasses the host's filesystem layer by directly accessing the device, although the performance gains from doing this are likely minimal.</p>

<p>This will need to be done by manually editing the domain XML definition. Examples of two possible scenarios are shown below from my setup.</p>

<p>It's best to use the <code>/dev/disk/by-id</code> path rather than <code>/dev/sdX</code>, as this can change when your machine is rebooted for various reasons. The <code>cache</code> value should be set to <code>'none'</code>, as the guest operating system will provide its own caching.</p>
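<p>To see which kernel device a <code>by-id</code> name currently resolves to, just follow the symlink. The sketch below simulates the layout in a temp directory (the serial number is made up); on a real host, run <code>ls -l /dev/disk/by-id/</code>:</p>

```shell
# /dev/disk/by-id entries are symlinks to the current /dev/sdX node.
# Simulated in a temp dir here; on a real host run: ls -l /dev/disk/by-id/
tmp=$(mktemp -d)
ln -s ../../sda "$tmp/ata-M4-CT128M4SSD2_EXAMPLESERIAL"   # hypothetical serial

readlink "$tmp/ata-M4-CT128M4SSD2_EXAMPLESERIAL"
# -> ../../sda  (i.e. this by-id name currently points at /dev/sda)
```

The by-id name stays stable across reboots even when the <code>sdX</code> name shifts, which is exactly why it belongs in the domain XML.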

<p><strong>Passing through an entire disk:</strong></p>

<pre><code>&lt;disk type='block' device='disk'&gt;  
  &lt;driver name='qemu' type='raw' cache='none'/&gt;
  &lt;source dev='/dev/disk/by-id/ata-M4-CT128M4SSD2_00000000113303180B16'/&gt;
  &lt;backingStore/&gt;
  &lt;target dev='vda' bus='virtio'/&gt;
  &lt;boot order='1'/&gt;
&lt;/disk&gt;  
</code></pre>

<p><strong>Passing through a single partition:</strong></p>

<pre><code>&lt;disk type='block' device='disk'&gt;  
  &lt;driver name='qemu' type='raw' cache='none'/&gt;
  &lt;source dev='/dev/disk/by-id/nvme-Samsung_SSD_960_EVO_500GB_S3EUNX0HB09470T-part3'/&gt;
  &lt;backingStore/&gt;
  &lt;target dev='vdb' bus='virtio'/&gt;
&lt;/disk&gt;  
</code></pre>

<p>More information: <br>
- <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-Virtualization-Adding_storage_devices_to_guests-Adding_hard_drives_and_other_block_devices_to_a_guest.html">Adding Hard Drives and Other Block Devices to a Guest
</a></p>

<h2 id="benchmarks">Benchmarks</h2>

<p><img src="https://ss.ycnrg.org/win10_timespy2.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04">
<strong>3DMark 2016:</strong> Time Spy demo, with AMD Radeon beta driver</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170307_000038.png" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04">
<strong>3DMark 2016:</strong> Fire Strike demo, with AMD Radeon release (official) driver</p>

<p><img src="https://ss.ycnrg.org/win10_steamvr.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"> <br>
<strong>SteamVR Readiness Test:</strong> with AMD Radeon release (official) driver</p>

<p><img src="https://ss.ycnrg.org/win10_crystal.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"> <br>
<strong>CrystalDiskMark results:</strong> VirtIO SCSI storage - Crucial M4 128GB SATA3 SSD (raw block, device)</p>

<p><img src="https://ss.ycnrg.org/jotunn-win10_20170312_032800.png" class="no-fluid" alt="VGA Passthrough with OVMF+VFIO on Ubuntu 16.04"> <br>
<strong>CrystalDiskMark results:</strong> VirtIO SCSI storage - Samsung 960 EVO NVMe SSD (raw block, partition)</p>

<h2 id="referencesadditionalresources">References &amp; Additional Resources</h2>

<ul>
<li><a href="http://vfio.blogspot.com/">VFIO Tips &amp; Tricks</a>, Alex Williamson</li>
<li><a href="https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF">PCI passthrough via OVMF</a>, ArchLinux Wiki</li>
<li><a href="http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html">Hyper-V Enhancements for Windows 10 in KVM</a>, Cole Robinson</li>
<li><a href="http://dominicm.com/gpu-passthrough-qemu-arch-linux/">GPU Passthrough with QEMU on Arch Linux</a>, DominicM</li>
<li><a href="https://zllovesuki.git.sx/essays/2015/09/gpu-passthrough-via-vfio-pci-with-kvm-on-ubuntu-1504/">GPU Passthrough via vfio-pci with KVM on Ubuntu 15.04</a>, Rachel Chen</li>
<li><a href="https://arrayfire.com/using-gpus-kvm-virutal-machines/">Using GPUs in KVM Virtual Machines</a>, ArrayFire</li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=162768">KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9</a>, ArchLinux BBS</li>
<li><a href="https://docs.google.com/spreadsheets/d/1LnGpTrXalwGVNy0PWJDURhyxa3sgqkGXmvNCIvIMenk/edit#gid=0">KVM VGA Passthrough Database</a> (Google Docs spreadsheet)</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Running HipChat Server on KVM/libvirt]]></title><description><![CDATA[<p>A few months ago, I picked up a starter license for <a href="https://www.hipchat.com/server">HipChat Server</a> out of curiosity. As I use a few other Atlassian products, such as <a href="https://www.atlassian.com/software/jira">JIRA</a> and <a href="https://www.atlassian.com/software/bitbucket/server">Bitbucket Server</a> (aka Stash), I figured it would be distributed similar to their other applications (a self-extracting script). However, HipChat is distributed</p>]]></description><link>https://ycnrg.org/running-hipchat-server-on-kvm-libvirt/</link><guid isPermaLink="false">a6fd71be-0b9f-45ff-944d-99d61ea645a7</guid><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Mon, 20 Feb 2017 06:57:53 GMT</pubDate><media:content url="https://ss.ycnrg.org/jotunn_20170220_015727.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/jotunn_20170220_015727.png" alt="Running HipChat Server on KVM/libvirt"><p>A few months ago, I picked up a starter license for <a href="https://www.hipchat.com/server">HipChat Server</a> out of curiosity. As I use a few other Atlassian products, such as <a href="https://www.atlassian.com/software/jira">JIRA</a> and <a href="https://www.atlassian.com/software/bitbucket/server">Bitbucket Server</a> (aka Stash), I figured it would be distributed similar to their other applications (a self-extracting script). However, HipChat is distributed as an OVA package (Open Virtual Appliance). The file itself is just a tarball containing a few vmdk (VMware) disk images, a checksum file, and an XML description of the contents.</p>

<p>If you're using VirtualBox or VMware, the OVA package can be opened without a problem, and the XML description provides all of the necessary information to create a pretty interface and construct a compliant virtual machine. However, if you're using Xen or KVM/QEMU, you'll need to do a bit more work to make this work properly.</p>

<h2 id="prerequisites">Prerequisites</h2>

<p>This guide assumes you have a working knowledge of how to create virtual machines with libvirt, currently have a working host with an LVM2 or QEMU storage pool, and a bridged network configuration.</p>

<p>This guide was written using a Debian 8 host, but can be easily adapted as needed. Most of the commands herein should be run as root, or via <code>sudo</code>.</p>

<p>First, ensure required tools are installed via your package manager (example for Debian-based distros):  </p>

<pre><code>apt-get install qemu-utils jq  
</code></pre>

<p>Retrieve current OVA archive, then extract it with <code>tar</code>:  </p>

<pre><code>wget https://hipchat-server-stable.s3.amazonaws.com/HipChat.ova  
tar -xvf HipChat.ova  
</code></pre>

<p>Depending on the type of storage pools you have configured (if any), only one of the following two sections should be performed. If you don't have any storage pools configured, use the QCOW2 section.</p>

<h3 id="lvm2storage">LVM2 Storage</h3>

<p>If using LVM2 as a backing store, create a new LVM logical volume for each disk image (change the name and volume as necessary).  </p>

<pre><code>lvcreate -L$(qemu-img info --output=json system.vmdk | jq '.["virtual-size"]')b -n hipchat-system vg0  
lvcreate -L$(qemu-img info --output=json file_store.vmdk | jq '.["virtual-size"]')b -n hipchat-store vg1  
lvcreate -L$(qemu-img info --output=json chat_history.vmdk | jq '.["virtual-size"]')b -n hipchat-history vg1  
</code></pre>
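<p>The <code>lvcreate</code> commands above splice the image's virtual size (in bytes) into the <code>-L</code> argument. Here is that size-extraction step on its own, against a hypothetical abbreviated <code>qemu-img info</code> JSON payload:</p>

```shell
# Hypothetical (abbreviated) output of: qemu-img info --output=json system.vmdk
info='{"virtual-size": 53687091200, "format": "vmdk", "filename": "system.vmdk"}'

# jq pulls out the byte count; "b" marks the units for lvcreate's -L flag
size=$(printf '%s' "$info" | jq '.["virtual-size"]')
echo "-L${size}b"
# -> -L53687091200b, the size argument handed to lvcreate
```

Sizing the logical volume from <code>virtual-size</code> (rather than the vmdk's file size) matters because the vmdk is sparse: the raw conversion needs the full virtual capacity.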

<p>Next, extract the contents from the vmdk images to the new LVM volumes (be sure to update the paths to match your own):  </p>

<pre><code>qemu-img convert -np -O raw system.vmdk /dev/vg0/hipchat-system  
qemu-img convert -np -O raw file_store.vmdk /dev/vg1/hipchat-store  
qemu-img convert -np -O raw chat_history.vmdk /dev/vg1/hipchat-history  
</code></pre>

<h3 id="qcow2storage">QCOW2 Storage</h3>

<p>If you are using QCOW2 images instead of LVM2 as your virtual machines' backing store, then a simple conversion should suffice:  </p>

<pre><code>qemu-img convert -O qcow2 system.vmdk hipchat-system.qcow2  
qemu-img convert -O qcow2 file_store.vmdk hipchat-store.qcow2  
qemu-img convert -O qcow2 chat_history.vmdk hipchat-history.qcow2  
</code></pre>

<h2 id="createvmconfig">Create VM Config</h2>

<p>Below is an example of the disk configuration for LVM2. Click <a href="https://ycc.io/conf/hipchat_spice.xml">here</a> for a full example, including Spice console access.</p>

<p>It should be noted that it doesn't really matter which target devices are associated with the disks, as all partitions on the system image are mounted via UUID, and the other volumes are part of an LVM2 pool that has been pre-configured on the system image.</p>

<pre><code>    &lt;disk type='block' device='disk'&gt;
      &lt;driver name='qemu' type='raw'/&gt;
      &lt;source dev='/dev/vg0/hipchat-system'/&gt;
      &lt;backingStore/&gt;
      &lt;target dev='vda' bus='virtio'/&gt;
      &lt;boot order='1'/&gt;
      &lt;alias name='system'/&gt;
      &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&gt;
    &lt;/disk&gt;
    &lt;disk type='block' device='disk'&gt;
      &lt;driver name='qemu' type='raw'/&gt;
      &lt;source dev='/dev/vg1/hipchat-store'/&gt;
      &lt;backingStore/&gt;
      &lt;target dev='vdb' bus='virtio'/&gt;
      &lt;alias name='file_store'/&gt;
      &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/&gt;
    &lt;/disk&gt;
    &lt;disk type='block' device='disk'&gt;
      &lt;driver name='qemu' type='raw'/&gt;
      &lt;source dev='/dev/vg1/hipchat-history'/&gt;
      &lt;backingStore/&gt;
      &lt;target dev='vdc' bus='virtio'/&gt;
      &lt;alias name='chat_history'/&gt;
      &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/&gt;
    &lt;/disk&gt;
</code></pre>

<h2 id="networkconfig">Network Config</h2>

<p>If your network does not use DHCP, then you will want to ensure that the VM is configured with a Spice or VNC console.</p>

<p>On initial startup, you will need to log in to the VM with the username <code>admin</code> and password <code>hipchat</code>. Once logged in, you will be able to set a static IP address and configure the hostname. Once this has been done, you can complete the setup by accessing the web interface via the IP address or hostname you have assigned it (assuming the DNS entry has been added on your DNS server).</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170220_002629.png" alt="Running HipChat Server on KVM/libvirt"></p>

<h3 id="staticroutes">Static Routes</h3>

<p>Unfortunately, my network configuration requires adding an additional route to allow traffic to be passed to the gateway (e.g. with OVH). See this article on how to accomplish this so that your changes are not obliterated the next time Chef runs: <a href="https://confluence.atlassian.com/hipchatkb/how-to-force-network-configurations-and-routes-858703697.html">How to force network configurations and routes</a>.</p>

<p>Create a file at <code>/home/admin/startup_scripts/static_routes</code> (file below shows example for a server at OVH):  </p>

<pre><code>#!/bin/bash
/usr/sbin/sudo /bin/dont-blame-hipchat -c "/sbin/route add 149.56.21.254 dev eth0"
/usr/sbin/sudo /bin/dont-blame-hipchat -c "/sbin/route add default gw 149.56.21.254"
</code></pre>

<p>Then ensure it's executable:  </p>

<pre><code>chmod +x /home/admin/startup_scripts/static_routes  
</code></pre>

<p>You can then run the script to add the routes immediately, or reboot the VM to verify that they are applied at startup.</p>

<h2 id="completesetup">Complete Setup</h2>

<p>Once networking has been configured and your machine is reachable from the Internet, complete the setup by visiting the IP or hostname of your new HipChat server in a web browser.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170220_013824.png" alt="Running HipChat Server on KVM/libvirt"></p>]]></content:encoded></item><item><title><![CDATA[Cobbler on CentOS 7 with NGINX and https]]></title><description><![CDATA[<p><img src="https://ss.ycnrg.org/cobbler_nginx_card.png" class="no-fluid" title="Cobbler &amp; NGINX on CentOS 7">  </p>

<blockquote>
  <p>Rather than running Cobbler with the default Apache2 installation, this guide details how to serve Cobbler's WSGI Python services via NGINX's uwsgi_pass on CentOS 7.</p>
</blockquote>

<h2 id="installcobbler">Install Cobbler</h2>

<p>The latest version of Cobbler can be installed from the EPEL repository on CentOS 7. First, be sure EPEL</p>]]></description><link>https://ycnrg.org/cobbler-on-centos-7-with-nginx-and-https/</link><guid isPermaLink="false">856bc0a0-b45e-425d-8d00-afce2b1abf44</guid><category><![CDATA[cobbler]]></category><category><![CDATA[nginx]]></category><category><![CDATA[linux]]></category><category><![CDATA[centos]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 17 Sep 2016 10:45:27 GMT</pubDate><media:content url="https://ss.ycnrg.org/cobbler_nginx_card.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/cobbler_nginx_card.png" alt="Cobbler on CentOS 7 with NGINX and https"><p><img src="https://ss.ycnrg.org/cobbler_nginx_card.png" class="no-fluid" title="Cobbler &amp; NGINX on CentOS 7" alt="Cobbler on CentOS 7 with NGINX and https">  </p>

<blockquote>
  <p>Rather than running Cobbler with the default Apache2 installation, this guide details how to serve Cobbler's WSGI Python services via NGINX's uwsgi_pass on CentOS 7.</p>
</blockquote>

<h2 id="installcobbler">Install Cobbler</h2>

<p>The latest version of Cobbler can be installed from the EPEL repository on CentOS 7. First, be sure the EPEL repos have been enabled, then install Cobbler and its friends. This guide is focused only on getting the Cobbler WebUI working behind NGINX; full configuration of Cobbler, dnsmasq, and so on is outside the scope of this article.  </p>

<pre><code>yum -y install epel-release  
yum -y install cobbler cobbler-web dnsmasq pykickstart  
</code></pre>

<h2 id="ensurenginxisinstalled">Ensure NGINX is installed</h2>

<p>NGINX can be installed from source, or via a package. I typically <a href="https://ycc.io/build/nginx.sh">compile the latest version from git</a>, but it can also be installed from the EPEL repos.  </p>

<pre><code>yum -y install nginx  
</code></pre>

<p>This setup assumes that NGINX is configured to run as user <strong>www-data</strong>. If this is not the case, then be sure to modify the uWSGI configuration and Cobbler file ownerships as necessary. Failure to do this will result in the WebUI being unusable. The default Cobbler package in EPEL for CentOS 7 ships with a default owner of <strong>apache</strong> (all of my other servers use <strong>www-data</strong>, so I have chosen to change it to match that convention).</p>
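<p>If you're unsure which user NGINX runs as, the <code>user</code> directive in its main config is authoritative. The sketch below parses a sample config string for illustration; on a real host, point the <code>awk</code> at <code>/etc/nginx/nginx.conf</code> instead:</p>

<pre><code># Extract the value of the 'user' directive (sample config string shown;
# on a real host: awk '$1 == "user" {gsub(";","",$2); print $2}' /etc/nginx/nginx.conf)
conf='user www-data;
worker_processes auto;'
printf '%s\n' "$conf" | awk '$1 == "user" {gsub(";","",$2); print $2}'
# prints: www-data
</code></pre>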

<h2 id="createnginxconfig">Create NGINX config</h2>

<pre><code>server {  
    listen 80 default;
    listen [::]:80 default;
    server_name cobbler.example.com _;

    location ~ ^/cblr(?!/svc/)(.*)?$ {
        alias /var/www/cobbler/$1;
    }

    location ~ ^/cobbler_track/(.*)?$ {
        alias /var/www/cobbler/$1;
    }

    location /cobbler {
        alias /var/www/cobbler;
    }

    location /cblr/svc/ {
        include uwsgi_params;
        uwsgi_pass unix:/run/cobbler_svc.sock;
    }

    location /cobbler_api {
        rewrite ^/cobbler_api/?(.*) /$1 break;
        proxy_pass http://127.0.0.1:25151;
    }

    # only force-redirect the web ui
    rewrite ^/$ https://cobbler.example.com/cobbler_web permanent;
    rewrite ^/cobbler_web https://cobbler.example.com$request_uri? permanent;
}

server {  
    # NOTE: remove 'http2' if using nginx &lt; 1.9
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl on;
    ssl_certificate /path/to/your/cert.pem;
    ssl_certificate_key /path/to/your/private.key;

    ssl_prefer_server_ciphers on; 
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_buffer_size 8k; 

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    # NOTE: remove or adjust this line as needed,
    # if you're using custom DH params
    # (more info: https://weakdh.org/sysadmin.html)
    ssl_dhparam /etc/ssl/private/dhparams.pem;

    server_name cobbler.example.com;

    access_log /var/log/nginx/cobbler.access.log;
    error_log  /var/log/nginx/cobbler.error.log;

    location /cobbler_webui_content {
        alias /var/www/cobbler_webui_content;
    }

    location ~ ^/cblr(?!/svc/)(.*)?$ {
        alias /var/www/cobbler/$1;
    }

    location /cblr/svc/ {
        include uwsgi_params;
        uwsgi_pass unix:/run/cobbler_svc.sock;
    }

    location /cobbler {
        alias /var/www/cobbler;
    }

    location /cobbler_web {
        rewrite ^/cobbler_web/?(.*) /$1 break;
        include uwsgi_params;
        uwsgi_pass unix:/run/cobbler_web.sock;        
    }

    # redirect requests for / to the Web UI
    rewrite ^/$ https://cobbler.example.com/cobbler_web permanent;
}
</code></pre>

<p>Once NGINX has been configured, test the configuration to be sure there are no issues:  </p>

<pre><code>nginx -t  
</code></pre>

<p>If all is well, reload (or restart) NGINX to pull in the new config.  </p>

<pre><code>systemctl reload nginx  
</code></pre>

<h2 id="uwsgiinstallconfig">uWSGI Install &amp; Config</h2>

<p>Install the latest uWSGI via <code>pip</code>, then create a directory for our config files.  </p>

<pre><code>yum -y install python-devel python-pip  
pip install uwsgi  
mkdir /etc/uwsgi  
</code></pre>

<p>Create the first config file at <code>/etc/uwsgi/cobbler_web.ini</code>  </p>

<pre><code>[uwsgi]
wsgi-file = /usr/share/cobbler/web/cobbler.wsgi

master = true  
processes = 2  
max-requests = 5000

socket = /run/cobbler_web.sock  
chmod-socket = 660  
chown-socket = www-data:www-data  
uid = www-data  
gid = www-data  
vacuum = true

die-on-term = true  
</code></pre>

<p>And the second config file at <code>/etc/uwsgi/cobbler_svc.ini</code>  </p>

<pre><code>[uwsgi]
wsgi-file = /var/www/cobbler/svc/services.py

master = true  
processes = 2  
max-requests = 5000

socket = /run/cobbler_svc.sock  
chmod-socket = 666  
chown-socket = www-data:www-data  
uid = www-data  
gid = www-data  
vacuum = true

die-on-term = true  
</code></pre>

<h3 id="fault1loginfailed">Fault 1: ... 'login failed'</h3>

<p>If NGINX is running as <code>www-data</code>, you will likely receive the following error message:  </p>

<pre><code>Fault 1: "&lt;class 'cobbler.cexceptions.CX'&gt;:'login failed'"  
</code></pre>

<p>Fix ownership of the <code>web.ss</code> auth file and <code>webui_sessions</code>. Failure to do this will result in a 500 error with the above error message.  </p>

<pre><code>chown www-data.www-data /var/lib/cobbler/web.ss  
chown www-data.www-data /var/lib/cobbler/webui_sessions  
</code></pre>

<p>After this is done, make the fix permanent with one of the following options:</p>

<ul>
<li>Open <code>/usr/lib/systemd/system/cobblerd.service</code> and add the following line to the <code>[Service]</code> section:</li>
</ul>

<pre><code>ExecStartPost=/bin/bash -c "/bin/sleep 5 ; /bin/chown www-data.www-data /var/lib/cobbler/web.ss"  
</code></pre>

<p>Then reload systemd configuration: <code>systemctl daemon-reload</code></p>

<ul>
<li>Alternatively, open <code>/usr/lib/python2.7/site-packages/cobbler/cobblerd.py</code>, find the line reading <code>http_user = "apache"</code> (line 65 in my version of cobblerd.py), and change it to <code>http_user = "www-data"</code>. This may not be a good solution, as your changes may get overwritten during updates.</li>
</ul>
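<p>On systemd hosts, you can also persist the <code>ExecStartPost</code> line via a drop-in override rather than editing the packaged unit file, which keeps the change safe across package updates (a sketch; the drop-in filename is arbitrary):</p>

<pre><code># /etc/systemd/system/cobblerd.service.d/chown-webss.conf
[Service]
ExecStartPost=/bin/bash -c "/bin/sleep 5 ; /bin/chown www-data.www-data /var/lib/cobbler/web.ss"
</code></pre>

<p>Then run <code>systemctl daemon-reload</code> as usual.</p>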

<h2 id="servicesetup">Service Setup</h2>

<p>Next, we need to create systemd manifests.</p>

<p>Create <code>/lib/systemd/system/cobbler-web.service</code> and add the following:  </p>

<pre><code>[Unit]
Description=uWSGI instance for Cobbler WebUI  
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
ExecStart=/usr/bin/uwsgi /etc/uwsgi/cobbler_web.ini  
ExecStopPost=/usr/bin/rm /run/cobbler_web.sock

[Install]
WantedBy=multi-user.target  
</code></pre>

<p>Create <code>/lib/systemd/system/cobbler-svc.service</code> and add the following:  </p>

<pre><code>[Unit]
Description=uWSGI instance for Cobbler Service  
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
ExecStart=/usr/bin/uwsgi /etc/uwsgi/cobbler_svc.ini  
ExecStopPost=/usr/bin/rm /run/cobbler_svc.sock

[Install]
WantedBy=multi-user.target  
</code></pre>

<p>Then, perform a <code>daemon-reload</code> and enable &amp; start the services  </p>

<pre><code>systemctl daemon-reload  
systemctl enable cobbler-web  
systemctl enable cobbler-svc  
systemctl start cobbler-web  
systemctl start cobbler-svc  
</code></pre>

<p>After starting the services, visit <a href="https://cobbler.example.com/cobbler_web">https://cobbler.example.com/cobbler_web</a> and enter your credentials to login (or cobbler/Cobbler if you haven't configured authentication yet).</p>

<p>Good luck ^_^</p>]]></content:encoded></item><item><title><![CDATA[Xen to KVM Node Migration]]></title><description><![CDATA[Discussion and experiences involving the migration of a Xen 4.4 host and domains to using KVM.]]></description><link>https://ycnrg.org/xen-to-kvm-node-migration/</link><guid isPermaLink="false">055ed200-f6ab-4373-bd77-80dae6b8c1a6</guid><category><![CDATA[linux]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[kvm]]></category><category><![CDATA[xen]]></category><category><![CDATA[libvirt]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Tue, 13 Sep 2016 18:19:45 GMT</pubDate><media:content url="https://ss.ycnrg.org/xen_to_kvm.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/xen_to_kvm.png" alt="Xen to KVM Node Migration"><p><img src="https://ss.ycnrg.org/xen_to_kvm.png" class="no-fluid" alt="Xen to KVM Node Migration">  </p>

<blockquote>
  <p>Converting Xen 4.4 guest domains to run under KVM</p>
</blockquote>

<p>I've been using Xen 4.4 with the xl toolstack (libxenlight) for the past 18 months or so, and it has worked very reliably during that time. However, my biggest issue with the xl toolstack is that third-party support for monitoring, provisioning, and deployment is very slim. I wrote my own crude automated provisioning tool, <a href="https://git.ycnrg.org/projects/YTL/repos/xlctl/browse">xlctl</a>, for this reason. I suppose this lack of support is why most folks choose XAPI/XCP (XenServer): that toolstack is widely supported by various third-party tools, and lets you use Citrix's GUI management tool as well as other nice tools, like the web-based <a href="https://xen-orchestra.com/">Xen Orchestra</a> or even Microsoft's <a href="https://msdn.microsoft.com/en-us/library/ee943322.aspx">Virtual Machine Manager (SCVMM)</a>.</p>

<h2 id="conversiontokvm">Conversion to KVM</h2>

<p>Luckily, I already have <code>libvirtd</code> installed and running, and am using it to manage my Xen domains (unfortunately for me, many libvirt applications will only work with the KVM or QEMU backends). Because I've already converted my native Xen configuration files over to XML (to use with libvirt), most of my configuration is already done, save for a bit of tweaking to define additional values needed by KVM.</p>

<p>For storage, I am using LVM2 volumes, distributed among 2 primary storage pools (<code>vg0</code> for SSD-based storage, <code>vg1</code> for spinny-disk storage), as well as additional <code>img</code> and <code>iso</code> pools for VM images and installation ISOs.</p>

<pre><code>root@mirai ~ # virsh pool-list --all  
 Name                 State      Autostart 
------------------------------------------- 
 img                  active     yes       
 iso                  active     yes       
 vg0                  active     yes       
 vg1                  active     yes       
</code></pre>

<h2 id="domainconversion">Domain conversion</h2>

<h3 id="conversionoflvstopartitionedvolumes">Conversion of LVs to partitioned volumes</h3>

<p>When using xenpv (paravirtualization), one could simply use PyGrub to boot directly from a supplied kernel on a filesystem, bypassing all of the normal bootstrapping shenanigans that are typically required. With KVM, however, this is not possible. For this reason, we need to ensure that our volumes have a valid partition label (e.g. MSDOS or GPT). If you partitioned your Xen domain logical volumes, then you can skip this step (lucky for you).</p>

<ul>
<li>Rename your existing LV with a <code>-old</code> suffix</li>
</ul>

<pre><code>lvrename /dev/vg1/xr1-disk xr1-disk-old  
</code></pre>

<ul>
<li>Create a new LV of the same size</li>
</ul>

<pre><code>lvcreate -L30G -n xr1-disk vg1  
</code></pre>

<ul>
<li><p>Use <code>fdisk</code>, <code>gdisk</code>, <code>parted</code> or whatever to partition the new LV. Assuming there will be only a single filesystem, creating one primary partition should be fine. <strong>Make sure to set the boot flag!</strong> You may need to run <code>partprobe</code> afterwards to force the kernel to load the new partition table.</p></li>
<li><p>Ensure that the device mapper creates a mapping for your new LV partitions by using <code>kpartx</code></p></li>
</ul>

<pre><code>kpartx -av /dev/vg1/xr1-disk  
</code></pre>

<ul>
<li>At this point, you can either format the filesystem and copy the files with <code>rsync</code>, or shrink the old filesystem by a couple megabytes and copy the entire filesystem with <code>dd</code>. The rsync option is likely going to be less error-prone and complete much faster, but the <code>dd</code> method will provide you with an exact copy of the previous filesystem state.</li>
</ul>

<h3 id="optionamkfsrsync">Option A: mkfs + rsync</h3>

<ul>
<li>To create a new filesystem (eg. <code>ext4</code>), use the following, making sure to double-check how your new partitions were mapped</li>
</ul>

<pre><code>mkfs.ext4 /dev/mapper/vg1-xr1--disk1  
</code></pre>

<ul>
<li>Ensure that both the source and destination filesystems are mounted. I like to create a <code>/mnt2</code> for this purpose, which will contain our source (original) filesystem, while <code>/mnt</code> will contain our target (new) filesystem.</li>
</ul>

<pre><code>mkdir /mnt2  
mount /dev/vg1/xr1-disk-old /mnt2  
mount /dev/mapper/vg1-xr1--disk1 /mnt  
</code></pre>

<ul>
<li>Perform the rsync. The chosen options will preserve all permissions, attributes, xattribs, ownership, times, and so on. <code>--stats</code> and <code>--progress</code> are optional, but provide some feedback about what's happening. Make sure to retain the trailing forward-slash (<code>/</code>) on both the source and destination.</li>
</ul>

<pre><code>rsync --stats --progress -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt2/ /mnt/  
</code></pre>

<ul>
<li>Ensure that everything was copied over correctly, then we can unmount the old filesystem. Once you've successfully booted the domain (later), you may then want to remove it via <code>lvremove</code></li>
</ul>

<pre><code>umount /mnt2  
</code></pre>

<p>We will keep <code>/mnt</code> mounted (the new filesystem), since it will be needed later.</p>

<h3 id="optionbresize2fsdd">Option B: resize2fs + dd</h3>

<p>This method assumes an Extended-type filesystem (ext2/3/4)</p>

<ul>
<li>First, determine the extents of the new partition. We do this by running <code>fdisk</code> against the new LV</li>
</ul>

<pre><code>fdisk -l /dev/vg1/xr1-disk  
</code></pre>

<p>In the output, take note of the number of sectors. Example:  </p>

<pre><code>Disk /dev/vg1/xr1-disk: 30 GiB, 32212254720 bytes, 62914560 sectors  
Units: sectors of 1 * 512 = 512 bytes  
Sector size (logical/physical): 512 bytes / 512 bytes  
I/O size (minimum/optimal): 512 bytes / 512 bytes  
Disklabel type: dos  
Disk identifier: 0xa49e92bb

Device             Boot Start      End  Sectors Size Id Type  
/dev/vg1/xr1-disk1       2048 62914559 62912512  30G 83 Linux
</code></pre>

<p>In this example: <strong>62912512</strong> sectors, with a sector size of <strong>512</strong> bytes = <strong>32211206144</strong> bytes</p>

<ul>
<li>Use this information to resize your existing filesystem via <code>resize2fs</code>. <strong>Be sure to take a backup BEFORE doing this if the filesystem contains any crucial data.</strong> That being said, I've never had an issue where <code>resize2fs</code> has borked my filesystem when shrinking. You must ensure the filesystem is not mounted before proceeding.</li>
</ul>

<pre><code>umount /dev/vg1/xr1-disk-old  
e2fsck -f /dev/vg1/xr1-disk-old  
resize2fs -p /dev/vg1/xr1-disk-old 62912512s  
</code></pre>

<p>The <code>s</code> unit denotes 512-byte sectors.</p>

<ul>
<li>If all went well, your filesystem should now be slightly smaller (note that the LV will remain the same size). Now take the block count output by resize2fs; this will be used as our <em>count</em> value in dd. resize2fs specifies blocks in terms of 4KiB chunks, so if your version of resize2fs does not output this figure, multiply your sectors by 512 (bytes per sector) and divide by 4096 (bytes per block) to arrive at the block count (62912512 * 512 / 4096 = 7864064 4KiB blocks).</li>
</ul>

<pre><code>dd count=7864064 bs=4K if=/dev/vg1/xr1-disk-old of=/dev/mapper/vg1-xr1--disk1  
</code></pre>

<blockquote>
  <p><strong>4K</strong> is always going to be the smallest size you'll need to ensure an exact transfer (4KiB = 1 block). However, if your filesystem happens to be evenly-divisible by a higher value, then you can use that instead to speed things up (just be sure to adjust the <em>count</em> accordingly).</p>
</blockquote>
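<p>For the example numbers above, the shell's integer arithmetic can double-check both the 4KiB block count and whether a larger block size divides the filesystem evenly (substitute your own sector count):</p>

<pre><code>SECTORS=62912512
BYTES=$(( SECTORS * 512 ))
echo $(( BYTES / 4096 ))     # 7864064 -> count for bs=4K
echo $(( BYTES % 1048576 ))  # 0 -> evenly divisible by 1 MiB
echo $(( BYTES / 1048576 ))  # 30719 -> count for bs=1M
</code></pre>

<p>In this case, <code>dd count=30719 bs=1M</code> would transfer exactly the same bytes as the 4K example, in larger chunks.</p>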

<ul>
<li>And to finish things up, run a fsck to ensure everything is intact</li>
</ul>

<pre><code>e2fsck -f /dev/mapper/vg1-xr1--disk1  
</code></pre>

<h3 id="mountguestdisks">Mount guest disk(s)</h3>

<p>If the guest is running, you can enter it via SSH or console; if the guest is stopped, or the host machine is already running without a Xen hypervisor, you can mount the disk for each domain and perform the conversion in a chrooted environment.</p>

<p>Example of setting up a chrooted environment for one of the guest domains (this domain's disk is an LVM2 logical volume):  </p>

<pre><code>mount /dev/vg1/xr1-disk /mnt  
mount --bind /dev /mnt/dev  
mount --bind /proc /mnt/proc  
chroot /mnt  
</code></pre>

<h3 id="updatefstab">Update fstab</h3>

<p>Be sure to change any reference from <code>/dev/xvda1</code> to <code>/dev/vda1</code> (and likewise for any other disks), so that VirtIO can be utilized for best performance. Example:</p>

<pre><code>/dev/vda1      /              ext4   noatime,nodiratime,errors=remount-ro   0   1
</code></pre>

<h3 id="installupdatekernel">Install/Update Kernel</h3>

<p>On each guest domain, we need to perform the following setups to configure a kernel and bootloader.</p>

<ul>
<li>Install a suitable kernel</li>
</ul>

<p><strong>For Debian/Ubuntu:</strong>  </p>

<pre><code>apt-get update  
apt-get -y install linux-image-virtual  
</code></pre>

<p><strong>For CentOS/RedHat:</strong>  </p>

<pre><code>yum -y install kernel  
</code></pre>

<p><strong>For ArchLinux:</strong></p>

<pre><code>pacman -S linux  
</code></pre>

<ul>
<li>Ensure any Xen modules added to the initrd are removed. Edit <code>/etc/mkinitcpio.conf</code> and remove any Xen modules from the <code>MODULES=</code> line. Once this is done, run:</li>
</ul>

<pre><code>mkinitcpio --kernel=$(file /boot/vmlinuz-linux | perl -pe 's/.*version ([^ ]+) .*/$1/')  
</code></pre>

<ul>
<li>Ensure <code>biosdevname=0 net.ifnames=0</code> is added to <code>GRUB_CMDLINE_LINUX_DEFAULT</code> in <code>/etc/default/grub</code> to use the classic network interface naming scheme (<code>eth0</code> rather than <code>ens3</code>, for example)</li>
</ul>

<p><strong>All - Ensure VirtIO support in your guest's new kernel:</strong></p>

<pre><code>find /lib/modules/ -name virtio*  
</code></pre>

<p>If you see various <code>virtio_*.ko</code> kernel modules for the installed kernel version, then the kernel should be good to go.</p>

<h3 id="grub2setupinstall">GRUB2 Setup &amp; Install</h3>

<ul>
<li>Install GRUB2</li>
</ul>

<pre><code>apt-get install grub2  
</code></pre>

<p>The GRUB installer may freak out if you're running it chrooted or it doesn't detect a normal configuration; just choose 'Yes' to continue with the installation if prompted.</p>

<h5 id="serialconsole">Serial console</h5>

<p>Edit <code>/etc/default/grub</code> with the following to enable serial console for GRUB and kernel messages:  </p>

<pre><code>GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0"  
GRUB_TERMINAL="serial console"  
</code></pre>

<p>(When running <code>grub-mkconfig</code> you may receive a warning about default serial parameters for <code>GRUB_SERIAL_COMMAND</code>-- that's OK)</p>

<p><strong>Note:</strong> Some kernels may not output systemd startup messages on the video console if these boot options are used (such as the kernels built for Arch).</p>

<h5 id="bootloaderconfiguration">Bootloader configuration</h5>

<p>First, create a file located at <code>/boot/grub/devices.map</code> (or <code>/mnt/boot/grub/devices.map</code> from the host). This should contain a reference to the <strong>host's</strong> device (or a loopback) that is used by the guest domain. You will need to have run <code>kpartx -av</code> on this device so that the partitions are accessible via <code>/dev/mapper</code> (see the previous section on LV conversion).</p>

<p>Contents of <code>devices.map</code>:  </p>

<pre><code>(hd0) /dev/mapper/vg1-xr1--disk
</code></pre>

<p>Once created, we are ready to build the GRUB configuration (still running in the domain itself, or via a chrooted environment). For Ubuntu/Debian:</p>

<pre><code>grub-mkconfig -o /boot/grub/grub.cfg  
</code></pre>

<p>For CentOS/RedHat:  </p>

<pre><code>grub2-mkconfig -o /boot/grub2/grub.cfg  
</code></pre>

<p>Once complete, be sure to unmount any bind mounts, then unmount the guest domain's disk. This can be done all at once with <code>-R</code>:  </p>

<pre><code>umount -R /mnt  
</code></pre>

<h5 id="bootloaderinstallation">Bootloader installation</h5>

<p>Unfortunately, I could not determine an easy solution for this, since <code>grub-install</code> is a fucking pain to work with, as far as non-physical disks are concerned. There is probably some better way, but I didn't have time to fuck about any longer.</p>

<p>To finish the bootloader installation, boot up into a rescue disk for Ubuntu 16.04 (this works with Ubuntu, Debian, ArchLinux, and CentOS 7 -- these are the ones I've tested). You can skip network configuration; just be sure to select <code>/dev/vda1</code> as the mounted filesystem, then enter <code>/dev/vda</code> as the <em>Device for boot loader installation</em>. Once installed, change the boot order so that <em>hd</em> is the primary boot device, then restart (this might require destroying, then starting the domain after the reboot for the boot order change to take effect). <br>
<img src="https://ss.ycnrg.org/jotunn_20160913_024543.png" alt="Xen to KVM Node Migration">
<img src="https://ss.ycnrg.org/jotunn_20160913_025100.png" alt="Xen to KVM Node Migration"></p>
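<p>For reference, booting from disk first in the domain's libvirt XML looks something like the sketch below (the machine type shown is an assumption; if your domain XML uses per-device <code>&lt;boot order='...'/&gt;</code> elements instead, adjust those rather than the <code>&lt;os&gt;</code> block):</p>

<pre><code>&lt;os&gt;
  &lt;type arch='x86_64' machine='pc'&gt;hvm&lt;/type&gt;
  &lt;boot dev='hd'/&gt;
  &lt;boot dev='cdrom'/&gt;
&lt;/os&gt;
</code></pre>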

<p>Excerpt from domain's libvirt XML configuration for the boot disk and VNC configuration. This can (and probably should) be disabled once the bootloader is installed.  </p>

<pre><code>    &lt;disk type='file' device='cdrom'&gt;
      &lt;driver name='qemu' type='raw'/&gt;
      &lt;source file='/opt/repo/iso/ubuntu-16.04.1-server-amd64.iso'/&gt;
      &lt;target dev='hdc' bus='ide'/&gt;
      &lt;readonly/&gt;
    &lt;/disk&gt;
    &lt;input type='mouse' bus='ps2'/&gt;
    &lt;graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/&gt;
</code></pre>

<p>If all went well, you should be able to boot from <em>hd</em> and receive a GRUB menu (both on ttyS0 serial console and VNC, if you left it enabled). <br>
<img src="https://ss.ycnrg.org/jotunn_20160913_030008.png" alt="Xen to KVM Node Migration">
<img src="https://ss.ycnrg.org/jotunn_20160913_030457.png" alt="Xen to KVM Node Migration"></p>]]></content:encoded></item><item><title><![CDATA[OSM Tile Server - Ubuntu 16.04]]></title><description><![CDATA[Setting up an Ubuntu 16.04 server for serving OSM map tiles via mod_tile. Covers software installation, configuration, and importing PostGIS data.]]></description><link>https://ycnrg.org/osm-tile-server-ubuntu-16-04/</link><guid isPermaLink="false">11030862-a1ee-48eb-bff8-478afe3cbe4f</guid><category><![CDATA[linux]]></category><category><![CDATA[osm]]></category><category><![CDATA[maps]]></category><category><![CDATA[mapnik]]></category><category><![CDATA[postgres]]></category><category><![CDATA[gis]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 06 Aug 2016 07:14:25 GMT</pubDate><media:content url="https://ss.ycnrg.org/jotunn_20160806_212238.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/jotunn_20160806_212238.png" alt="OSM Tile Server - Ubuntu 16.04"><p><img src="https://ss.ycnrg.org/jotunn_20160806_212238.png" alt="OSM Tile Server - Ubuntu 16.04"></p>

<p>This page exists to document the process I went through when setting up a tile server for OpenStreetMap data. During the process, I found the <em><a href="https://switch2osm.org/serving-tiles/manually-building-a-tile-server-14-04/">Manually building a tile server</a></em> page over at switch2osm.org very helpful, and it serves as a basis for this guide.</p>

<p>There are a few reasons you may want to set up your own tileserver: first, you may want to create or tweak map styles to create your own unique tileset; or, you may have an application that makes heavy use of mapping features and does not qualify for free use of the OSM tile servers.</p>

<p>It should be noted that the process described below <strong>will not</strong> allow you to perform geocoding with the imported data. Geocoding is the process of converting an address or place name to coordinates on the globe (or vice-versa in the case of "reverse" geocoding). For this, you'll want to check out the <a href="http://wiki.openstreetmap.org/wiki/Nominatim/Installation">Nominatim installation instructions</a> over at the OSM wiki. It also relies on the osm2pgsql tool, but the data is indexed and stored differently. With both the Nominatim and Mapnik datasets, you'll be able to perform searching, geocoding, and tile rendering without relying on external sources.</p>

<h3 id="recommendedhardware">Recommended hardware</h3>

<ul>
<li>8GB of RAM minimum (64GB+ recommended)</li>
<li>4+ CPU threads is recommended</li>
<li>600GB+ of free local storage for full planet import (SSD or other high-speed storage recommended)</li>
<li>64-bit architecture (x86_64/amd64)</li>
</ul>

<blockquote>
  <p><strong>A full planet import on minimal hardware can take 8 to 14 days to complete. For a dedicated server with SSDs and a lot of RAM, this can be reduced to less than a day with optimal settings.</strong> Check out <a href="http://hetzner.de/us/">Hetzner</a> or <a href="https://www.ovh.com/">OVH</a> to pick up a well-spec'd server for cheap.</p>
</blockquote>

<h1 id="prerequisites">Prerequisites</h1>

<p>We're going to need a whole bunch of stuff to get started. Thankfully, the packages in the <em>xenial</em> repos are recent, so we can install Mapnik 3 and friends via the package manager.</p>

<p>We'll also need PostgreSQL with PostGIS support. The commands below will set up Postgres 9.5 and PostGIS 2, as well as various supporting libraries that we may need for compilation of <em>mod_tile</em> and <em>renderd</em>.</p>

<pre><code>apt-get update  
apt-get install libboost-all-dev screen subversion git unzip wget bzip2 build-essential autoconf libtool libxml2-dev libgeos-dev libgeos++-dev libpq-dev libbz2-dev libproj-dev libprotobuf-c0-dev protobuf-c-compiler libfreetype6-dev libpng12-dev libtiff4-dev libicu-dev libgdal-dev libcairo-dev libcairomm-1.0-dev apache2 apache2-dev libagg-dev liblua5.2-dev ttf-unifont lua5.1 liblua5.1-dev libgeotiff-epsg postgresql-9.5 postgresql-9.5-postgis-2.2 postgresql-9.5-postgis-scripts libmapnik3.0 libmapnik-dev mapnik-utils mapnik-reference mapnik-doc python-mapnik python3-mapnik node-carto osm2pgsql  
</code></pre>

<h2 id="mod_tilerenderd">mod_tile + renderd</h2>

<p><strong>mod_tile</strong> is a DSO module for Apache 2. It allows the user to define location(s) to serve tiles from (such as a <code>/tiles/</code> URI in one of your VirtualHosts). It works in tandem with <strong>renderd</strong>, which utilizes the Mapnik library to do the actual rendering. Rendered tiles are then cached (either to disk, memcached, or Ceph) and used for subsequent requests until they expire.</p>
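<p>Clients request tiles using the standard slippy-map <code>z/x/y</code> numbering under whichever URI you configure. The snippet below computes the tile indices for a given longitude/latitude and zoom using the standard OSM tile-numbering formula (the <code>/tiles/</code> prefix is just an example URI):</p>

<pre><code># Compute the x/y tile indices for central London at zoom 10
awk -v lon=-0.1278 -v lat=51.5074 -v z=10 'BEGIN {
  pi = atan2(0, -1)
  n = 2 ^ z
  x = int((lon + 180) / 360 * n)
  lr = lat * pi / 180
  y = int((1 - log(sin(lr)/cos(lr) + 1/cos(lr)) / pi) / 2 * n)
  printf "/tiles/%d/%d/%d.png\n", z, x, y
}'
# prints: /tiles/10/511/340.png
</code></pre>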

<h3 id="installation">Installation</h3>

<p>We'll need to compile mod_tile &amp; renderd ourselves, but it's an easy build. Below are the instructions for cloning the repo into <code>/opt/src</code>, but another directory can be used. The steps below assume you are running as root.</p>

<pre><code>mkdir -p /opt/src  
cd /opt/src  
git clone https://github.com/openstreetmap/mod_tile.git  
cd mod_tile  
./autogen.sh
./configure --prefix=/usr
make -j`nproc` &amp;&amp; make install &amp;&amp; make install-mod_tile  
cp /opt/src/mod_tile/debian/renderd.init /etc/init.d/renderd  
chmod +x /etc/init.d/renderd  
</code></pre>

<h3 id="configuration">Configuration</h3>

<p>First, we need to point <em>renderd</em> to our config file.  </p>

<pre><code>echo 'DAEMON_ARGS="-c /usr/etc/renderd.conf"' &gt; /etc/default/renderd  
</code></pre>

<p>Then open the configuration file (<code>/usr/etc/renderd.conf</code>) and make any necessary changes for your setup. Below are the changes I made:  </p>

<pre><code>[renderd]
num_threads=8  
tile_dir=/var/lib/mod_tile  
...

[default]
URI=/tiles/  
TILEDIR=/var/lib/mod_tile  
XML=/opt/maps/style/OSMBright/OSMBright.xml  
HOST=localhost  
</code></pre>

<p>Finally, to enable <em>renderd</em> to start when your machine boots, run the following:  </p>

<pre><code>systemctl enable renderd  
</code></pre>

<h1 id="postgres">Postgres</h1>

<h2 id="configuration">Configuration</h2>

<p>Open the main configuration file. For PostgreSQL 9.5 on Ubuntu 16.04, this is located at <code>/etc/postgresql/9.5/main/postgresql.conf</code>. Below are settings that I have adjusted for my machine with 256GB of RAM and 40 cores. These settings have provided the best performance for me, and between 24GB and 32GB of cache seems to be the sweet spot, if the benchmarks on the OSM wiki are anything to go by. <strong>These settings should be adjusted depending on the amount of RAM available and number of CPU threads in your machine.</strong>  </p>

<pre><code>max_connections = 200  
shared_buffers = 128MB  
maintenance_work_mem = 4GB  
max_worker_processes = 16  
effective_cache_size = 24GB  
autovacuum = off  
</code></pre>

<p>The above configuration is for <em>import only</em>. Once you've successfully completed an import of OSM data into Postgres, the above parameters can be reverted to their defaults. At the very least, <code>autovacuum</code> should be turned back on, as it's responsible for reclaiming deleted objects, and helps to ensure your tables are optimal (unless you're doing manual VACUUM'ing).</p>

<blockquote>
  <p>For more examples, as well as other parameters you may want to tweak, check out the <a href="http://wiki.openstreetmap.org/wiki/Osm2pgsql/benchmarks">osm2pgsql/Benchmarks page</a> on the OSM wiki.</p>
</blockquote>

<h2 id="usersetup">User setup</h2>

<p>First, open a Postgres shell as the superuser (<code>postgres</code>):  </p>

<pre><code>sudo -u postgres psql  
</code></pre>

<p>Next, create our <code>osm</code> user that will be used by <em>renderd</em>.  </p>

<pre><code>CREATE ROLE osm WITH login PASSWORD 'supersecret';  
</code></pre>

<p>If you'll be running the import as the root user, you can use the following to grant <code>root</code> superuser privileges, just like the <code>postgres</code> user:  </p>

<pre><code>CREATE ROLE root WITH login superuser;  
</code></pre>

<h2 id="creatingtheosmdatabase">Creating the OSM database</h2>

<p>With the <code>root</code> role created as a superuser, you can authenticate with <em>peer</em> authentication (Postgres authenticates you via the UID the process is running as) and connect via <em>local</em> rather than via <em>tcp</em>, which will help speed up the import. Using <em>peer</em> authentication also means you don't need to provide a password or username.</p>

<p>At the Postgres prompt, run the following to create a new <code>osm</code> database with UTF-8 encoding, owned by the <code>osm</code> user, and enable the PostGIS and hstore extensions.</p>

<blockquote>
  <p>The <strong>hstore</strong> extension is available in PostgreSQL 9.x, and is optional. It allows storing attributes as a hash/dictionary for fields that do not have corresponding dedicated columns. More info: <a href="https://www.postgresql.org/docs/9.0/static/hstore.html">https://www.postgresql.org/docs/9.0/static/hstore.html</a></p>
</blockquote>

<pre><code>CREATE DATABASE osm WITH OWNER osm ENCODING 'UTF-8' TEMPLATE template0;  
\c osm
CREATE EXTENSION hstore;  
CREATE EXTENSION postgis;  
ALTER TABLE geometry_columns OWNER TO osm;  
ALTER TABLE spatial_ref_sys OWNER TO osm;  
</code></pre>

<p>If the import aborts or fails, I would recommend DROP'ing your existing database before trying again. In my experience, it seemed like osm2pgsql was not removing the existing data, but this may also be due to <em>autovacuum</em> being disabled.</p>

<p>To drop your existing database:  </p>

<pre><code>DROP DATABASE osm;  
</code></pre>

<p>Then the creation commands above can be re-run to recreate your database.</p>

<h1 id="osm2pgsql">osm2pgsql</h1>

<p>Now it's time to import some data. Depending on your hardware, this can take anywhere between 8 hours and a couple weeks.</p>

<p>The <code>--slim</code> import process is divided into two phases: phase one is reading all of the nodes, ways, and relations, then caching &amp; indexing them in temporary tables in Postgres; phase two is assembling the actual GIS tables that are used by Mapnik to render map tiles.</p>

<p>During the first phase, speed is primarily dependent upon your disk speed and amount of RAM available, as well as the type of file being imported (XML/bz2 or PBF). PBF is considerably faster, and there is no initial delay before node processing starts. Using a flat node cache also considerably speeds up the node processing if you have an SSD (creates a ~33GB file).</p>

<p>The second phase is more CPU-intensive, and depends upon the number of processes that are assigned via <code>--number-processes</code>.</p>

<blockquote>
  <p><strong>Important note:</strong> If <code>--number-processes</code> is increased beyond 10, you will need to increase the <code>max_connections</code> setting in Postgres to at least 8x the number of processes (e.g. if <code>--number-processes=16</code>, then increase <code>max_connections</code> in postgresql.conf to <code>128</code>, plus some buffer room). Don't forget to restart Postgres after any config changes. Failure to make this change will cause Postgres to reach max connections, and osm2pgsql will fail after the first phase. You don't get those hours/days back...</p>
</blockquote>
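<p>As a sanity check, the sizing rule from the note above can be worked out with quick shell arithmetic (the 8x multiplier comes from the note; the exact amount of extra headroom is just an illustrative guess):</p>

<pre><code># pick a max_connections value for a given --number-processes
NUM_PROCESSES=16
REQUIRED=$((NUM_PROCESSES * 8))       # 8 connections per helper process
MAX_CONNECTIONS=$((REQUIRED + 32))    # plus some buffer room
echo "set max_connections = ${MAX_CONNECTIONS} in postgresql.conf"
</code></pre>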

<h3 id="obtainosmdata">Obtain OSM data</h3>

<ul>
<li>Full planet: <a href="http://wiki.openstreetmap.org/wiki/Planet.osm">http://wiki.openstreetmap.org/wiki/Planet.osm</a></li>
<li>Regional extracts: <a href="http://download.geofabrik.de/">http://download.geofabrik.de/</a></li>
<li>Metro extracts: <a href="https://mapzen.com/data/metro-extracts/">https://mapzen.com/data/metro-extracts/</a></li>
</ul>

<p>Click one of the links above to download your preferred data source, then choose a mirror close to you to obtain a download link for a <code>planet-latest</code> file, or a regional extract. With the link copied, <code>wget</code> it to your server. Unless you have good reason to do otherwise, you should choose a <strong>.pbf file</strong>, as the import is considerably faster.</p>

<h3 id="importosmdata">Import OSM data</h3>

<p>Assuming you've downloaded your planet file, or a local/regional extract, we can proceed to the import. You'll almost certainly want to run this in a <code>screen</code> or with <code>tmux</code>. There is no resume support, so if osm2pgsql crashes, or you lose connection, or you accidentally hit Ctrl+C... then you'll have to start the process all over again.</p>

<p>First, create a screen (or tmux session):  </p>

<pre><code>screen -S osm_import  
</code></pre>

<p>Change to the directory that you want to run the import from, then adjust the parameters in the command below to suit your needs.</p>

<ul>
<li><code>--hstore</code> - Enable hstore usage</li>
<li><code>--slim</code> - Enable <a href="http://wiki.openstreetmap.org/wiki/Osm2pgsql#Slim_mode">slim mode</a></li>
<li><code>-r pbf</code> - Use PBF parser; do not use for bz2/xml imports</li>
<li><code>-C 32000</code> - Cache size in MiB (approx 32GB in this case). If you have sufficient RAM, use between 20GB and 32GB. Otherwise, use 60% of your available memory</li>
<li><code>--flat-nodes node.cache</code> - Flat node cache; this is a ~33GB file on disk; using this option drastically speeds up the node processing phase; should only be used with SSDs</li>
<li><code>--number-processes 16</code> - Number of helper processes to spawn during the second phase</li>
<li><code>-d osm</code> - Use the <em>osm</em> Postgres database</li>
<li><code>-U root</code> - Use the <em>root</em> Postgres user</li>
<li><code>planet-latest.osm.pbf</code> - Source file</li>
</ul>
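<p>If you'd rather not eyeball the "60% of available memory" guideline for <code>-C</code>, it can be computed from <code>/proc/meminfo</code> (a sketch; <code>MemAvailable</code> requires kernel 3.14 or newer):</p>

<pre><code># suggest an osm2pgsql cache size (-C takes MiB) as 60% of available RAM
avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
cache_mib=$((avail_kib * 60 / 100 / 1024))
echo "suggested cache size: -C ${cache_mib}"
</code></pre>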

<p>If you're using md5 authentication with Postgres, you can also specify the <code>-H</code> option for the Postgres hostname, and then set the <code>PGPASSWORD</code> env var with your Postgres user's password (e.g. <code>export PGPASSWORD="supersecret"</code>). In this example, we're using peer authentication with a local (UNIX socket) connection, so that's not required.</p>

<pre><code>osm2pgsql --hstore --slim -r pbf -C 32000 --flat-nodes node.cache --number-processes 16 -d osm -U root planet-latest.osm.pbf  
</code></pre>

<p>If using <code>screen</code>, you can use <code>Ctrl+A</code> then <code>d</code> to detach from your session, then <code>screen -r osm_import</code> to re-attach.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20160805_235117.png" alt="OSM Tile Server - Ubuntu 16.04"></p>

<h4 id="runtimeresults">Run-time results</h4>

<p>Machine:</p>

<ul>
<li>2x Xeon E5-2670v2 2.50GHz Ivy Bridge (total 20 cores/40 threads)</li>
<li>256GB DDR3 RAM (16 x 16GB ECC Reg. Buffered)</li>
<li>2x Intel DC 480GB SSDs (SSDSC2BB480G6), using LVM+ext4</li>
<li>Linux kernel 4.4.0-31-generic</li>
<li>Using osm2pgsql &amp; Postgres settings described in this doc</li>
</ul>

<p><strong>Full planet import:</strong> <a href="https://ss.ycnrg.org/jotunn_20160806_110333.png">16.7 hours</a> <br>
<strong>Virginia, US State extract import:</strong> 3 minutes, 12 seconds</p>]]></content:encoded></item><item><title><![CDATA[LVM Volume Migration via SSH]]></title><description><![CDATA[We create a script to migrate LVM volumes between servers with dd, gzip, and ssh. Also investigate its potential use in automating Xen semi-live migrations.]]></description><link>https://ycnrg.org/lvm-ssh-migration/</link><guid isPermaLink="false">c011e7e9-7e67-4ce8-b15c-53a8face4e0d</guid><category><![CDATA[linux]]></category><category><![CDATA[scripts]]></category><category><![CDATA[xen]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 09 Jan 2016 08:03:47 GMT</pubDate><media:content url="https://ss.ycnrg.org/jotunn_20160109_022212.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/jotunn_20160109_022212.png" alt="LVM Volume Migration via SSH"><p>I've been looking for an easy way to migrate Xen domains between my two Xen hypervisors. I made an (unsuccessful) attempt at setting up an iSCSI target for shared storage between the two servers. However, they are in different datacenters, so this is less than ideal.</p>

<p>Both servers share a similar setup-- storage for each domain's disk is held on a local LVM group, <strong>vg0</strong>. For instance, the DomU <strong>xr11</strong> would be stored on <strong>/dev/vg0/xr11-disk</strong> on <em>onodera</em>, one of my servers. Therefore, to use the <code>xl migrate</code> command to perform a "live" migration, this storage would simply need to be mirrored over to the other server. I use "live" loosely, since you wouldn't really want to be writing to the disk on your DomU while doing this. In fact, it's probably a good idea to issue a <code>sync</code> command on the guest, then use <code>lvchange -pr</code> to mark the volume as read-only until the migration is complete. But meh, we can't really be bothered with that.</p>

<p>Anyway, this can also be used to create backups of any LV to another server, or to assist in moving part of a volume group to another server (rather than the entire VG). While looking for some official LVM <em>lvdump</em> tool or similar, I found many folks just use <code>dd</code> and <code>ssh</code> to move volumes around their machines.</p>

<p>This script expands upon that idea-- provide it with the path to your LV (such as <em>/dev/vg0/xr11-disk</em>), and the server you would like to migrate it to, and it will create the LV on the new server with the exact same size, then copy the contents across with <code>dd</code> and <code>ssh</code>. I also decided to use <code>gzip</code> with a fast compression setting, since most disks are quite sparse, and there's no sense in wasting bandwidth sending a bunch of zeros or other unimportant data. As you can see from the screenshots below, I was able to migrate a 30GiB volume in about 4 minutes (actual filesystem utilization is about 2GiB).</p>

<p><img src="https://ss.ycnrg.org/jotunn_20160109_021555.png" alt="LVM Volume Migration via SSH"></p>

<p>Once the transfer completes, the script also runs an <code>fsck -fp</code> against the newly-copied volume to ensure filesystem consistency (in case you decided to copy the filesystem while the DomU was running).</p>

<p>The core of the script:  </p>

<pre><code>dd if=$LVPATH bs=1M status=none | pv --size=$LVSIZE | gzip -2 | ssh $NEWHOST -- "gunzip | dd of=$LVPATH bs=1M status=none"  
</code></pre>
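<p>If you want to convince yourself that the pipeline is lossless before pointing it at a real volume, the same <code>dd | gzip | gunzip | dd</code> chain can be dry-run against an ordinary file (<code>pv</code> and <code>ssh</code> are dropped here since they don't alter the data stream):</p>

<pre><code># dry-run the copy pipeline against a throwaway file instead of an LV
set -e
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 status=none
dd if="$src" bs=1M status=none | gzip -2 | gunzip | dd of="$dst" bs=1M status=none
cmp "$src" "$dst"
echo "pipeline is lossless"
rm -f "$src" "$dst"
</code></pre>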

<p>After the migration completes, if you're moving a disk volume for a Xen guest, you can then run <code>xl migrate</code> to do a live migration of the DomU to its new home (memory and state information should be retained). The screenshot below shows me testing this out, and it worked without a problem!</p>

<blockquote>
  <p><a href="https://bitbucket.org/snippets/yellowcrescent/y8XM7">View lvmigrate.sh on Bitbucket</a> or grab the script via wget, as shown below.</p>
</blockquote>

<p><img src="https://ss.ycnrg.org/jotunn_20160109_022212.png" alt="LVM Volume Migration via SSH"></p>

<p>Also, be sure to install <code>pv</code>, as this is not included with most distros by default. The line below should install <code>pv</code> on Debian-based and RedHat-based distros, as well as Arch.</p>

<pre><code>apt-get -y install pv || yum -y install pv || pacman -S pv  
</code></pre>

<p>Grab the script:  </p>

<pre><code>wget https://ycc.io/util/lvmigrate.sh  
chmod +x lvmigrate.sh  
</code></pre>]]></content:encoded></item><item><title><![CDATA[Shrinking a Logical Volume (LVM)]]></title><description><![CDATA[Step-by-step instructions on shrinking a logical volume on Linux's Logical Volume Manager (LVM): filesystem check and resize, LV resize, and post checks.]]></description><link>https://ycnrg.org/shrinking-a-logical-volume-lvm/</link><guid isPermaLink="false">65bae04e-7458-4c19-b9f0-eaabc6fdc010</guid><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Wed, 30 Sep 2015 00:30:00 GMT</pubDate><media:content url="https://ycnrg.org/content/images/2016/08/lvm_img.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://ycnrg.org/content/images/2016/08/lvm_img.jpg" alt="Shrinking a Logical Volume (LVM)"><p>Today I will be going through the process of shrinking a logical volume to free up space on a particular volume group (<em>vg0</em>) in Linux's Logical Volume Manager (LVM). This same/similar process can also be used for expanding volumes, albeit much less dangerous.</p>

<p>We will be shrinking <code>/dev/vg0/onodera-web</code>&mdash; that is, the logical volume <em>onodera-web</em>, which is part of the volume group <em>vg0</em>. Its current size is 300.0GiB; however, I need space for other things, so it needs to be trimmed a bit.</p>

<p>Currently, we only have about ~66GiB free on <em>vg0</em>. After the resize, we should have well over 100GiB to play with:  </p>

<pre><code>root@onodera ~ # vgs  
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg0    2   1   0 wz--n- 366.63g 66.63g
</code></pre>

<h4 id="unmountandinitialcheck">Unmount and initial check</h4>

<p>First, make sure any running programs or processes that utilize data from this mountpoint have been stopped&mdash; in my case, I needed to stop <em>nginx</em> and <em>pm2</em> (Node.js). Then unmount the volume, and run an fsck against it. You can use <code>-f</code> to force a check, even if the volume is marked clean. The <code>-C0</code> option displays a progress bar (specifying file descriptor <em>0</em> is special-cased by fsck to print a completion bar to the terminal).</p>

<pre><code>umount /dev/vg0/onodera-web
fsck -C0 -f /dev/vg0/onodera-web
</code></pre>

<blockquote>
  <p>If you are running a website or other service from your LV, and would like to minimize downtime for large volumes while scanning, you can remount the filesystem in read-only mode prior to running fsck via <code>mount -o remount,ro /dev/vg0/onodera-web</code> &mdash; just be sure to unmount once the fsck finishes successfully.</p>
</blockquote>

<p>If the filesystem was modified, I would recommend re-running fsck once again. Once you've confirmed all is clean, time to move on.</p>

<p>I would also recommend saving information about the LV prior to resizing (such as the exact size of the volume), in case you should need to revert your changes, or the post-resize fsck fails:</p>

<pre><code>lvdisplay --units=b /dev/vg0/onodera-web &gt; preshrink.log
</code></pre>

<h4 id="filesystemresizeresize2fs">Filesystem resize (resize2fs)</h4>

<p>We will now perform the actual resizing of the filesystem-- in this case, the filesystem type is <em>ext4</em> (resize2fs can be used for any of the Extended filesystems: ext2/ext3/ext4).</p>

<pre><code>resize2fs -p /dev/vg0/onodera-web 250G
</code></pre>

<p>If all went well, you should see a message such as: <em>The filesystem on /dev/vg0/onodera-web is now 65536000 (4k) blocks long.</em> (Note: 65536000 blocks * 4096 = 268435456000 bytes = exactly 250.0 GiB)</p>
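<p>That block arithmetic is worth verifying yourself before running <code>lvreduce</code>, since reducing the LV below the actual filesystem size will destroy data:</p>

<pre><code># confirm the resize2fs-reported block count matches the 250 GiB target
blocks=65536000
block_size=4096
target_bytes=$((250 * 1024 * 1024 * 1024))   # 250 GiB
if [ $((blocks * block_size)) -eq "$target_bytes" ]; then
    echo "filesystem size matches target: ${target_bytes} bytes"
fi
</code></pre>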

<h4 id="logicalvolumeresizelvreduce">Logical volume resize (lvreduce)</h4>

<p>Now to resize the logical volume to match the new filesystem size:</p>

<pre><code>lvreduce -L 250G /dev/vg0/onodera-web
</code></pre>

<h4 id="finalcheckremount">Final check &amp; remount</h4>

<pre><code>fsck -C0 -f /dev/vg0/onodera-web
</code></pre>

<p>If the check returned happily, you should now be able to safely remount the volume (note that you may need to adjust the next command if this volume is not in your fstab).</p>

<pre><code>mount /dev/vg0/onodera-web
</code></pre>

<p>Now we can check our newly liberated free space:</p>

<pre><code>root@onodera ~ # vgs  
  VG   #PV #LV #SN Attr   VSize   VFree  
  vg0    2   1   0 wz--n- 366.63g 116.63g
</code></pre>

<p>And that's it! Good luck ^_~</p>]]></content:encoded></item><item><title><![CDATA[Xen 4.4: Windows HVM Networking]]></title><description><![CDATA[Taking a look at the settings and steps required to successfully set up and configure networking on a Windows HVM domain on Xen 4.4 running Debian]]></description><link>https://ycnrg.org/xen-windows-networking/</link><guid isPermaLink="false">a408d7a6-002c-42f9-a010-86eeb20445d0</guid><category><![CDATA[linux]]></category><category><![CDATA[xen]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Thu, 20 Aug 2015 19:03:06 GMT</pubDate><media:content url="https://ycnrg.org/content/images/2015/08/XenLogoBlackGreen.jpg" medium="image"/><content:encoded><![CDATA[<blockquote>
  <img src="https://ycnrg.org/content/images/2015/08/XenLogoBlackGreen.jpg" alt="Xen 4.4: Windows HVM Networking"><p>Taking a look at the settings and steps required to successfully set up and configure networking on a Windows 7 or Windows Server HVM domain on Xen 4.4 hypervisor running Debian</p>
</blockquote>

<p>Getting networking to work in Windows under Xen is not a straightforward task, it seems. Here are my notes while attempting to get a Windows 7 domain set up on a server with Hetzner.</p>

<p>The host machine has a subnet statically routed to it by Hetzner, so we can't use straight bridging, because Hetzner's switches would just drop the packets from an unknown MAC address. So here, we take a look at setting up a bridge and using NAT in iptables to bridge the traffic from an "internal" 10.0.9.101 address to the machine's "external" IP address, 176.9.22.223 -- this IP is dedicated to this Windows virtual machine. Note that this type of NAT is <em>basic</em> or <em>one-to-one</em> NAT, since one address directly translates to another-- there's no port forwarding or any other nonsense.</p>

<h4 id="preface">Preface</h4>

<p><strong>xenbr1</strong> is a virtual interface and bridge with an IP of <strong>10.0.9.1</strong>; its configuration from <code>/etc/network/interfaces</code> is shown below. We will be adding our Windows domain to this bridge later on.  </p>

<pre><code>auto xenbr1  
iface xenbr1 inet static  
  address   10.0.9.1
  broadcast 10.0.9.255
  netmask   255.255.255.0
  pre-up brctl addbr xenbr1
</code></pre>

<h4 id="setupontheclientdomain">Set up on the client domain</h4>

<p>Note that I could not get this to work <em>at all</em> with the Intel <em>e1000</em> ioemu device on Windows 7 x64, but ymmv. I needed to install the GPLPV drivers to even be able to ping the dom0 host.</p>

<p>If you need to install the GPLPV drivers on a host machine without network access, try the following steps to create a new LVM volume that's formatted NTFS.</p>

<ul>
<li>Install ntfs-3g utils if they are not already installed.</li>
</ul>

<pre><code>apt-get install ntfs-3g  
</code></pre>

<ul>
<li>Create a new 8GB logical volume 'winstrap' on group vg0. Format it as NTFS.</li>
</ul>

<pre><code>lvcreate -L8G -nwinstrap vg0  
mkfs.ntfs /dev/vg0/winstrap  
</code></pre>

<ul>
<li>Mount the volume so that we can put the drivers and things on it</li>
</ul>

<pre><code>mkdir /mnt/winstrap  
mount /dev/vg0/winstrap /mnt/winstrap  
</code></pre>

<ul>
<li>Download the .NET 4.5 Redist package (required for the Xen Shutdown driver to work) and the latest GPLPV drivers. Here we are downloading the GPLPV drivers for 64-bit Windows; adjust as needed.</li>
</ul>

<pre><code>cd /mnt/winstrap  
wget http://download.microsoft.com/download/1/6/7/167F0D79-9317-48AE-AEDB-17120579F8E2/NDP451-KB2858728-x86-x64-AllOS-ENU.exe  
wget http://apt.univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi  
</code></pre>

<blockquote>
  <p>Check <a href="http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers">this page</a> for signed up-to-date GPLPV drivers from Univention, as well as downloads for older versions of Windows and 32-bit OSes</p>
</blockquote>

<p>Now that everything is ready, unmount our little bootstrapper volume, and add it to your domain's config as an additional disk.</p>

<ul>
<li>Unmount the <em>winstrap</em> volume</li>
</ul>

<pre><code>umount /mnt/winstrap  
</code></pre>

<ul>
<li>Edit the <code>disk</code> line in the domain's XM config to include the <em>winstrap</em> volume (in addition to the primary disk, of course)</li>
</ul>

<pre><code>'phy:/dev/vg0/winstrap,xvdc,rw'  
</code></pre>

<ul>
<li>Now boot your domain; Windows should mount the raw NTFS volume as drive (D:) or some such (thankfully it doesn't care that there isn't a partition table). First install the .NET framework. Do not reboot. Then install the Xen GPLPV drivers, and reboot when prompted. After the machine reboots, it may ask to reboot <em>again</em>, so go ahead and shut down, but this time destroy the machine so that we can edit its config.</li>
<li>At this point, remove any references to ioemu devices in the domain's XM config. Your <code>vif</code> line should look similar to the following:</li>
</ul>

<pre><code>vif = [ 'mac=00:16:3E:29:7B:51, bridge=xenbr1' ]  
</code></pre>

<h4 id="windowsnetworksettings">Windows network settings</h4>

<p>Once the Xen Network Adapter is available, configure the following settings for IPv4. Substitute with your own settings where applicable.</p>

<ul>
<li>Address: <strong>10.0.9.101</strong> (should be on the same subnet as <strong>xenbr1</strong>)</li>
<li>Netmask: <strong>255.255.255.0</strong></li>
<li>Gateway: <em>Same gateway as the Dom0 host</em></li>
<li>DNS resolvers: <em>Use your ISP's DNS resolvers or Google DNS (<strong>8.8.8.8</strong>, <strong>8.8.4.4</strong>)</em></li>
</ul>

<h3 id="hostnetworksettings">Host network settings</h3>

<h4 id="bridgeinterfaces">Bridge interfaces</h4>

<p>Add the domain's interface, named <strong>vifXX.0</strong> (where <em>XX</em> is the domain ID; run <code>xl list</code> to check, or better yet <code>xl network-list DOMNAME</code>). <strong>xenbr1</strong> is a secondary interface with an IP address of <em>10.0.9.1</em> in this case. This should probably be implemented as a vif script or something, so that it doesn't need to be done or undone every time the machine is created or destroyed.  </p>

<pre><code>brctl addif xenbr1 vifXX.0  
</code></pre>

<p>Once you have destroyed the domain, its interface will be removed, and will in turn be removed from the bridge configuration. Keep this in mind when recreating the domain again.</p>

<h4 id="setupforwardingrulesiniptables">Set up forwarding rules in iptables</h4>

<p>Forwards incoming traffic on <strong>176.9.22.223</strong> to <strong>10.0.9.101</strong> on the <strong>xenbr1</strong> bridge. Outbound traffic from the Windows domain (<strong>10.0.9.101</strong>) will have its <em>source</em> address rewritten as <strong>176.9.22.223</strong> so that the remote host can actually send a response that makes its way back here. Intra-machine traffic to other domains (routed or bridged) still works as expected.  </p>

<pre><code>iptables -t nat -A PREROUTING -d 176.9.22.223 -j DNAT --to 10.0.9.101  
iptables -t nat -A POSTROUTING -s 10.0.9.101 -d 176.9.22.223 -j MASQUERADE  
iptables -t nat -A POSTROUTING -s 10.0.9.101 -j SNAT --to-source 176.9.22.223  
</code></pre>
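<p>Note that none of the NAT rules above will pass traffic unless IP forwarding is enabled in the Dom0's kernel. This is usually already set on a working Xen host, so treat the snippet below as a troubleshooting check (the check itself is read-only; enabling forwarding requires root, and adding <code>net.ipv4.ip_forward=1</code> to <code>/etc/sysctl.conf</code> makes it persistent):</p>

<pre><code># check whether IPv4 forwarding is enabled
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" = "1" ]; then
    echo "ip_forward is enabled"
else
    echo "forwarding disabled; run: sysctl -w net.ipv4.ip_forward=1"
fi
</code></pre>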

<h3 id="conclusion">Conclusion</h3>

<p>After this, you should be able to access the domain just as if it were directly connected to your main interface, <em>eth0</em> or similar, with both ingress and egress traffic being directly forwarded. There is probably a more elegant solution for this, but this seems to work, at least for my setup.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20150820_134733.png" alt="Xen 4.4: Windows HVM Networking"></p>]]></content:encoded></item><item><title><![CDATA[Xen 4.4: Installing PV OS from an ISO image]]></title><description><![CDATA[This article covers creating a DomU by installation of an OS from its native install media, first as an HVM domain during install, then conversion to PV]]></description><link>https://ycnrg.org/xen-install-os-from-iso-pv/</link><guid isPermaLink="false">ed09fa87-14a5-4e1f-8dfa-f854338bd602</guid><category><![CDATA[linux]]></category><category><![CDATA[xen]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Tue, 18 Aug 2015 17:53:00 GMT</pubDate><media:content url="https://ycnrg.org/content/images/2015/08/XenLogoBlackGreen.jpg" medium="image"/><content:encoded><![CDATA[<blockquote>
  <img src="https://ycnrg.org/content/images/2015/08/XenLogoBlackGreen.jpg" alt="Xen 4.4: Installing PV OS from an ISO image"><p>This article covers creating a DomU on Xen 4.4 by installation of an OS from its native install media (typically an ISO image), first as an HVM domain during installation, then converting to a PV domain after installation is complete. To demonstrate, we set up a paravirtualized CentOS 7 (64-bit) domain.</p>
</blockquote>

<p>For many candidate DomU distributions, installation via the <em>xen-tools</em> <code>xen-create-image</code> (e.g. with <code>debootstrap</code>, <code>rinse</code>, etc.) is not easy to accomplish. This works great for Debian, Ubuntu, and CentOS 5 &mdash; but other OSes take a bit of work, or may not be installable at all without their original media.</p>

<p>Here, we will look at installing an operating system on to a Xen domain by first creating an HVM-based domain which will run during our install of the OS. Then, once installation is completed, we can convert the domain to boot via PyGrub so that it can benefit from the increased speed that comes with paravirtualization (PV) over full hardware virtualization (HVM).</p>

<p>This article looks at installing CentOS 7 (64-bit), but the process can be used for other OSes as well.</p>

<p>Here, I am using <strong>Xen 4.4</strong> with the <strong>xl</strong> toolstack on a <strong>Debian 8</strong> Dom0, but the steps can be adjusted as needed for older versions of Xen, or if using the <strong>xm</strong> toolstack.</p>

<h3 id="creatinganhvmdomu">Creating an HVM DomU</h3>

<h4 id="createlogicalvolumefordisk">Create logical volume for disk</h4>

<ul>
<li><code>-L60G</code> create <strong>60GiB</strong> disk</li>
<li><code>-nxr1-disk</code> use volume label <strong>xr1-disk</strong></li>
<li><code>vg0</code> create volume in LVM group <strong>vg0</strong></li>
</ul>

<pre><code>lvcreate -L60G -nxr1-disk vg0  
</code></pre>

<p>Verify all is good with <code>lvdisplay /dev/vg0/xr1-disk</code></p>

<h4 id="fetchinstallationmedia">Fetch installation media</h4>

<ul>
<li>Select a nearby mirror from <a href="http://isoredirect.centos.org/centos/7/isos/x86_64/">http://isoredirect.centos.org/centos/7/isos/x86_64/</a></li>
</ul>

<pre><code>mkdir -p /opt/iso  
cd /opt/iso  
wget http://mirror.solarvps.com/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1503-01.iso  
</code></pre>

<h4 id="createxlxmconfig">Create XL/XM config</h4>

<p>Manually create config in <code>/etc/xen/config.d/xr1.cfg</code> (<em>xr1</em> is our DomU hostname)</p>

<pre><code># Use HVM instead of PV
builder = "hvm"

# Set memory and vcpus as needed
memory = 4096  
vcpus = 2

# Host/Domain name
name = "xr1"

# Setup bridged interface with Intel e1000
vif = [ 'type=ioemu, model=e1000, mac=00:16:3E:29:QQ:QQ, bridge=xenbr1' ]

# Disks - our LVM we just created &amp; the installer ISO image
disk = [  
        'phy:/dev/vg0/xr1-disk,xvda,rw',
        'file:/opt/iso/CentOS-7-x86_64-Minimal-1503-01.iso,xvdb:cdrom,r'
       ]

# Set boot order (d = CDROM, c = HDD)
boot = "dc"

# Use VESA-compliant display with more VRAM
vga = "stdvga"  
videoram = 64

# Use VNC for display
vnc = 1  
vnclisten  = "176.9.0.X"  
vncdisplay = 0  
vncpasswd  = "supersecret"
</code></pre>
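<p>Note that the <code>QQ:QQ</code> octets in the <code>mac=</code> setting above are placeholders that need to be replaced with octets unique on your network; <strong>00:16:3e</strong> is the OUI reserved for Xen guests. One quick way to generate a suitable address (a sketch, any source of unique octets works):</p>

<pre><code># generate a random MAC address in the Xen-reserved 00:16:3e prefix
od -An -N3 -tx1 /dev/urandom | awk '{printf "00:16:3e:%s:%s:%s\n", $1, $2, $3}'
</code></pre>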

<h3 id="bootthedomainandbeginosinstallation">Boot the domain and begin OS installation</h3>

<p>Start the DomU  </p>

<pre><code>xl create /etc/xen/config.d/xr1.cfg  
</code></pre>

<p>Now connect via a VNC client to the <code>vnclisten</code> IP address on port <strong>5900</strong>+<em>vncdisplay</em> (in this case, port 5900)</p>

<p><img src="https://ss.ycnrg.org/jotunn_20150818_122622.png" alt="Xen 4.4: Installing PV OS from an ISO image"></p>

<p>Now complete the installation steps. Go through the localization portions as needed, important bits outlined below:</p>

<ul>
<li>CentOS should auto-detect the installation media and display the installation source as <em>Local media</em>. If not, check to ensure the CDROM device is configured properly in the XM config file</li>
<li>Choose the <em>Installation Destination</em> icon and set up the partitions. Your disk should be displayed here; if it's not, destroy the domain and check the XM config to ensure your disk settings are correct. I created the following partitions...
<ul><li>Create first partition, mounted on <code>/boot</code> with a size of <em>512MiB</em>. Ensure partition type is set to <em>ext3</em></li>
<li>Create second partition, mounted on <code>/</code> -- leave the size field empty to consume the remaining space. Set partition type to <em>ext4</em></li>
<li>Note that no swap space was created, as I do not want to use swap for this domain. It can be added later, though, by expanding the LV this DomU is using, then using <code>fdisk</code> to create a swap partition on the extra space (I think this should be possible, but haven't actually tried it)</li></ul></li>
<li>Under <em>Network &amp; Host name</em>, set your DomU's fully-qualified hostname and network settings. Note that in my configuration, this domain will ultimately be routed using the <strong>xenbr0:1</strong> interface, but that won't work just yet. It can still be set though, so that when we reboot the DomU in paravirtualized mode, it should connect to the network without issues (and without having to use <code>xl console</code> hopefully)</li>
</ul>

<p><img src="https://ss.ycnrg.org/jotunn_20150818_122957.png" alt="Xen 4.4: Installing PV OS from an ISO image"></p>

<p>Make sure to set a root password while the installation is proceeding. Then, once the install is complete, choose the <em>Reboot</em> option. This will probably cause the DomU to hang, but that's OK. Hop back over to Dom0 and kill it:</p>

<pre><code>xl destroy xr1  
</code></pre>

<p>(<em>xr1</em> is the name of our DomU, as specified as <code>name</code> in the config file)</p>

<h3 id="bootinginhvmmode">Booting in HVM mode</h3>

<p>After the domain has been destroyed, edit the XM config file again, and comment-out or remove the installation media disk. The new <code>disk</code> and <code>boot</code> lines are shown below.</p>

<pre><code>disk = [ 'phy:/dev/vg0/xr1-disk,xvda,rw' ]  
boot = "c"  
</code></pre>

<p>Now re-create the domain, and it should boot into our newly-installed OS.</p>

<pre><code>xl create /etc/xen/config.d/xr1.cfg  
</code></pre>

<p>Once again, reconnect via VNC, and once booting is complete, you should be greeted by the login prompt! Go ahead and login as root using the password created during installation.</p>

<p>We then want to check the network settings to ensure that the domain will be properly configured when booting in PV mode, since Xen's console kinda sucks, and we won't have VNC access at that point. Also check that the kernel supports paravirtualization and detects that it is being virtualized. To do this, run <code>dmesg | grep paravirtual</code> -- this should print something like <em>Booting paravirtualized kernel on Xen HVM</em>. If so, we should be good to go. Go ahead and <code>shutdown -h now</code>, then <code>xl destroy xr1</code> from the hypervisor.</p>
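<p>If you'd rather script this check, the same information is exposed through sysfs: on Xen guests with a Xen-aware kernel, <code>/sys/hypervisor/type</code> contains the string <em>xen</em>. Here's a minimal sketch (the <code>sysfs_root</code> parameter is only there so the function can be exercised against a fake tree, not something you'd need in practice):</p>

```python
from pathlib import Path

def hypervisor_type(sysfs_root="/sys"):
    """Return the hypervisor name ("xen" on Xen guests), or None on bare metal
    or on kernels built without hypervisor support."""
    node = Path(sysfs_root) / "hypervisor" / "type"
    return node.read_text().strip() if node.exists() else None
```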

<p><img src="https://ss.ycnrg.org/jotunn_20150818_114724.png" alt="Xen 4.4: Installing PV OS from an ISO image"></p>

<h3 id="bootinginpvmode">Booting in PV mode</h3>

<p>Now, we will be converting the DomU from an HVM domain to a PV domain. This is to obtain much better I/O performance, as CentOS 7 has a paravirtualized Xen-aware kernel. First, move the old configuration out of the way:  </p>

<pre><code>mv /etc/xen/config.d/xr1.cfg /etc/xen/config.d/xr1-hvm.cfg  
</code></pre>

<p>Then create a new config for booting into PV mode:</p>

<pre><code># We will be using PyGrub as the bootloader
bootloader = "/usr/lib/xen-4.4/bin/pygrub"

# Set hostname, memory, vcpus, etc.
name = "xr1"  
memory = 4096  
vcpus = 2

# Use the same disk as used previously, with the same device name
disk = [ 'phy:/dev/vg0/xr1-disk,xvda,rw' ]

# Set up a proper routed network connection
vif = [ 'ip=176.9.XXX.XXX, mac=00:16:3E:29:QQ:QQ, gatewaydev=xenbr0:1' ]  
</code></pre>

<p>Save the new configuration as <code>/etc/xen/config.d/xr1.cfg</code>, then start up the domain:</p>

<pre><code>xl create /etc/xen/config.d/xr1.cfg  
</code></pre>

<p>If no errors were shown, it should have booted, as long as you didn't choose any exotic boot or partition type options during installation. We can now try ssh'ing to the newly created DomU with <code>ssh xr1</code> or <code>ssh 1.2.3.4</code>.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20150818_131536.png" alt="Xen 4.4: Installing PV OS from an ISO image"></p>

<h3 id="cloningprovisioning">Cloning &amp; Provisioning</h3>

<p>Now that you have set up one virtual machine, you can use this first image as a master to create others. This can be done by cloning the LVM volume (or image file), or by mounting the new DomU's disk locally (eg. to <code>/mnt/xr1</code>) so that you can create a tarball that can later be deployed with <code>xen-create-image</code>, which would also allow automatic setup of network parameters and copying of the skeleton directory (<code>/etc/xen-tools/skel</code>).</p>]]></content:encoded></item><item><title><![CDATA[WHOIS for new TLDs]]></title><description><![CDATA[In this article, we take a look at how to do a whois lookup (on Linux and Mac) for new gTLD domains, and how to generate an updated whois.conf file]]></description><link>https://ycnrg.org/whois-for-new-tlds/</link><guid isPermaLink="false">e9a710cb-4b02-448d-bb7f-92fdbe7ac2b9</guid><category><![CDATA[whois]]></category><category><![CDATA[php]]></category><category><![CDATA[linux]]></category><category><![CDATA[scripts]]></category><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 27 Jun 2015 21:23:10 GMT</pubDate><media:content url="https://ss.ycnrg.org/whois_splash.png" medium="image"/><content:encoded><![CDATA[<img src="https://ss.ycnrg.org/whois_splash.png" alt="WHOIS for new TLDs"><p><img src="https://ss.ycnrg.org/whois_splash.png" class="no-fluid" alt="WHOIS for new TLDs"></p>

<p>With the huge number of new TLDs released over the past few years, the built-in <code>whois</code> client on Linux can't seem to keep its WHOIS server database up to date. Here, we'll look at an easy way to determine the registry's WHOIS server and query it from the shell, then generate our own <code>whois.conf</code> file to make future lookups seamless.</p>

<blockquote>
  <p>If you're short on time, jump to the end of the article for a link to the updated <code>whois.conf</code> file containing up-to-date WHOIS servers for the new gTLDs</p>
</blockquote>

<h2 id="determiningthecorrectserver">Determining the correct server</h2>

<p>Likely, you've tried to query a few domains -- to troubleshoot a DNS problem, or to check their registration info -- and come across a message similar to this:</p>

<pre><code>jacob@minorin:~$ whois mori.moe  
No whois server is known for this kind of object.  
</code></pre>

<p>This is because the GNU <code>whois</code> program on most Linux distros comes with the WHOIS server list compiled into the binary. The good news is that you can create a <code>/etc/whois.conf</code> file, which contains a list of regular expressions that correspond to WHOIS servers. However, if you don't want to muck around with this, we can use the <code>whois-servers.net</code> DNS entries to determine the correct WHOIS server.</p>

<pre><code>jacob@minorin:~$ dig +short CNAME moe.whois-servers.net  
whois.nic.moe.  
</code></pre>

<p>Yay! Now that we have the WHOIS server, we can use the <code>-h</code> flag in whois to query that server directly. Also, before we move on, it's important to note that the "official" source for WHOIS server information is <code>whois.iana.org</code>. For most TLDs, IANA will return an attribute titled <em>whois</em>. Example:</p>

<pre><code>jacob@minorin:~$ whois org -h whois.iana.org | grep whois  
whois:        whois.pir.org  
</code></pre>

<p>Moving on to the query:</p>

<pre><code>jacob@minorin:~$ whois mori.moe -h whois.nic.moe  
Domain Name:                                 MORI.MOE  
Domain ID:                                   D389770-MOE  
Sponsoring Registrar:                        BR domain Inc.  
Sponsoring Registrar IANA ID:                1898  
Registrar URL (registration services):       http://www.brdomain.jp  
Domain Status:                               clientTransferProhibited  
</code></pre>

<p>You can combine the two if you're lazy and don't want to re-type or copy/paste the server name, like so:</p>

<pre><code>jacob@minorin:~$ whois mori.moe -h `dig +short CNAME moe.whois-servers.net`  
</code></pre>

<blockquote>
  <p>For 3rd level registrations, such as <code>amazon.co.uk</code>, you would query the WHOIS server for the top-level domain (such as <code>uk</code> in this example) in most cases -- however, there are certain 2nd level domains that have their own WHOIS server. If you can't find the domain in the TLD registry's database, do a WHOIS query to IANA for the 2nd level domain to determine if it has its own WHOIS server (eg. <code>whois co.il -h whois.iana.org</code>)</p>
</blockquote>
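<p>The two-step lookup above -- ask IANA for the registry's server, then query that server -- is simple enough to script yourself. WHOIS (RFC 3912) is just a TCP service on port 43: send the query followed by CRLF, then read until the server closes the connection. This is a hypothetical sketch, not how GNU whois is implemented:</p>

```python
import socket

def whois_query(server, query, timeout=10):
    """Minimal RFC 3912 client: send query + CRLF to port 43, read to EOF."""
    with socket.create_connection((server, 43), timeout=timeout) as s:
        s.sendall(query.encode() + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def referral_server(iana_response):
    """Pull the 'whois:' attribute out of an IANA response, if present."""
    for line in iana_response.splitlines():
        if line.strip().lower().startswith("whois:"):
            return line.split(":", 1)[1].strip()
    return None
```

<p>Chained together, <code>whois_query(referral_server(whois_query("whois.iana.org", "org")), "example.org")</code> reproduces the manual dance from the shell examples.</p>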

<h2 id="creatingawhoisconffile">Creating a whois.conf file</h2>

<p>For a more elegant solution, you can add the entries for each of the new TLD WHOIS servers to the <code>/etc/whois.conf</code> file. By default, this file doesn't exist, so you'll probably have to create it.</p>

<p>To automate this process, I wrote a simple PHP script which fetches the official list from IANA to enumerate all (public) TLDs, then iterates through the list to determine WHOIS server information from <code>whois-servers.net</code> and <code>whois.iana.org</code> for each of the TLDs. It then uses this information to generate a shiny new <code>whois.conf</code> file and a JSON file that can be used for other purposes.</p>
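<p>The output-formatting step of such a generator is trivial; here's a hypothetical Python equivalent of just that step (the actual lookups against IANA and <code>whois-servers.net</code>, which the linked script performs, are omitted):</p>

```python
def make_whois_conf(servers):
    """servers maps TLD -> WHOIS server; returns whois.conf-style lines,
    one anchored regular expression per TLD."""
    return "\n".join(
        r"\.%s$ %s" % (tld.lower().lstrip("."), server)
        for tld, server in sorted(servers.items())
    )
```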

<blockquote>
  <p><a href="https://ycc.io/util/yc_whoisgen">yc_whoisgen</a> (PHP source, plain text)</p>
</blockquote>

<p>As of this writing (15 Mar 2015), there are over 800 top-level domains. You can simply add new entries to your <code>whois.conf</code> as you encounter them, or use the script above to generate a new list for yourself. Linked below is an updated <code>whois.conf</code> generated by <code>yc_whoisgen</code>.</p>

<blockquote>
  <p><a href="https://ycc.io/conf/whois.conf">whois.conf</a> (plain text, 07 Mar 2017) &mdash; copy to <code>/etc/whois.conf</code> as root</p>
</blockquote>

<p>Below is an excerpt from an example <code>whois.conf</code> file, showing the regular expression to match a domain on the left and the WHOIS server where the lookup should be routed on the right:</p>

<pre><code>\.blue$ whois.afilias.net
\.moe$ whois.nic.moe
\.wtf$ whois.donuts.co
\.ninja$ whois.unitedtld.com
</code></pre>
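<p>To see how such a file gets used, here's a simplified sketch of the lookup the whois client performs against it -- the first pattern matching the domain wins. (This glosses over details of the real implementation, such as comments and special server values; it's an illustration of the format, not a reimplementation.)</p>

```python
import re

def server_for(domain, conf_text):
    """Return the WHOIS server for the first whois.conf pattern
    matching the domain, or None if nothing matches."""
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, server = line.split(None, 1)
        if re.search(pattern, domain, re.IGNORECASE):
            return server
    return None
```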

<p>To get started looking up new gTLD domains, you can go ahead and grab the updated config that I've generated above and copy it to <code>/etc/whois.conf</code>. You can also check out the <a href="https://www.iana.org/domains/root/db">IANA Root Zone Database</a>, if you're still curious about TLDs, or the <a href="https://data.iana.org/TLD/tlds-alpha-by-domain.txt">IANA Official list of TLDs</a>.</p>

<p><img src="https://ss.ycnrg.org/jotunn_20170312_053343.png" class="no-fluid" alt="WHOIS for new TLDs"> <br>
<strong>Checking out the new whois.conf in vim</strong></p>]]></content:encoded></item><item><title><![CDATA[Deluge WebUI Single IP]]></title><description><![CDATA[How to hack the Deluge WebUI to bind to a single IP address, along with a diff patch to make the changes. By default, Deluge will bind to 0.0.0.0]]></description><link>https://ycnrg.org/deluge-webui-single-ip/</link><guid isPermaLink="false">5fc38a7b-e2d3-4953-a775-cb0245c35573</guid><dc:creator><![CDATA[Jacob Hipps]]></dc:creator><pubDate>Sat, 27 Jun 2015 02:02:00 GMT</pubDate><media:content url="https://ycnrg.org/content/images/2015/06/deluge-icon.png" medium="image"/><content:encoded><![CDATA[<blockquote>
  <img src="https://ycnrg.org/content/images/2015/06/deluge-icon.png" alt="Deluge WebUI Single IP"><p><strong>DEPRECATED:</strong> This patch has been superseded by the inclusion of this feature within Deluge 1.3.14 and later (finally!). The info and patches below still apply to 1.3.x prior to revision 14, but it is best to upgrade if possible.</p>
</blockquote>

<h3 id="stingywebui">Stingy WebUI</h3>

<p>The current release of Deluge, 1.3.11, as well as the development trunk do not contain a way to force the WebUI to bind only to a single interface or IP address. <strong>deluged</strong> does include both <code>--interface</code> and <code>--ui-interface</code> options. However, these both specify the IP address of the interface to listen on for BitTorrent connections and JSON-RPC connections respectively.</p>

<p>The result is that the Deluge WebUI will instead bind to <em>0.0.0.0</em>, or all available interfaces on your machine.</p>

<pre><code>jacob@ikumi:~$ sudo netstat -tlpn | grep deluged  
tcp  0  0 0.0.0.0:8112      0.0.0.0:*     LISTEN      1144/deluged.pid  
tcp  0  0 17.9.23.38:4433   0.0.0.0:*     LISTEN      1144/deluged.pid  
tcp  0  0 17.9.23.38:58846  0.0.0.0:*     LISTEN      1144/deluged.pid  
tcp  0  0 17.9.23.38:63617  0.0.0.0:*     LISTEN      1144/deluged.pid  
</code></pre>

<p>The first line of the <strong>netstat</strong> output above shows the undesirable result of being unable to specify the WebUI interface IP address. While not a problem in most circumstances, if you have a server with many IP addresses, you generally don't want an application such as Deluge occupying the same port on an entire <em>/29</em> net block. You may have already noticed this issue if you have attempted to run multiple instances of Deluge, both using the default port <em>8112</em> for the WebUI: the second instance will fail with an error when trying to bind to the address, as it's already occupied. Of course, you could just change each instance to use its own unique port number.</p>

<h3 id="makingthechanges">Making the changes</h3>

<blockquote>
  <p>A git-diff patch to apply the necessary changes automagically to the source is probably the best/easiest solution. Patch and application instructions are provided in the next section</p>
</blockquote>

<h4 id="beforestarting">Before starting</h4>

<p>Ensure you have a copy of the Deluge source code, preferably cloned from the git repo. Full instructions on fetching, building and installing Deluge from source can be found <a href="http://dev.deluge-torrent.org/wiki/Installing/Source#DownloadSource">here</a>.</p>

<pre><code>git clone git://deluge-torrent.org/deluge.git  
</code></pre>

<p>Be sure that you have checked out the current stable branch of Deluge.  </p>

<pre><code>git checkout -b 1.3-stable origin/1.3-stable  
</code></pre>

<h4 id="diggingin">Digging in</h4>

<p>Searching through the source, we can see that the listener is bound in <code>deluge/ui/web/server.py</code>:</p>

<pre><code>self.socket = reactor.listenTCP(self.port, self.site)  
</code></pre>

<p>Since Deluge uses the <em>Twisted</em> library to handle communication, the documentation is readily available for reference. Checking the (rather terse) documentation page for <a href="http://twistedmatrix.com/documents/9.0.0/api/twisted.internet.interfaces.IReactorTCP.listenTCP.html">listenTCP</a>, we can see that there are two additional arguments that can be passed to the <em>Twisted</em> listener object. We can modify our <em>listenTCP</em> call to include the interface we want to use, instead of using the default listen-to-all-the-things mode.</p>

<pre><code>self.socket = reactor.listenTCP(self.port, self.site, 50, self.iface)  
</code></pre>

<p>The third argument is <em>backlog</em> -- we don't really care about this, so we'll use the default value of <strong>50</strong> (see the linked documentation). <code>self.iface</code> is the new attribute we'll be using for the interface. Note that the same change can be made for the SSL listener:</p>

<pre><code>self.socket = reactor.listenSSL(self.port, self.site, ServerContextFactory(), 50, self.iface)  
</code></pre>

<p><a href="http://twistedmatrix.com/documents/8.1.0/api/twisted.internet.interfaces.IReactorSSL.listenSSL.html">listenSSL</a> has an additional <em>ServerContextFactory()</em> argument which was a part of the original call.</p>
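<p>The <em>interface</em> argument behaves the same way an ordinary socket bind does. This plain-socket analogue (not Twisted, just an illustration of the semantics) shows the difference between the default bind-everything behavior and binding a single address:</p>

```python
import socket

def bind_listener(port, iface="0.0.0.0", backlog=50):
    """Plain-socket analogue of reactor.listenTCP(port, site, backlog, iface):
    "0.0.0.0" binds every local address; a specific IP binds only that one."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((iface, port))
    s.listen(backlog)
    return s
```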

<h4 id="addingournewconfigvalue">Adding our new config value</h4>

<p>We also need to initialize the <code>iface</code> attribute to the value specified in the <em>webui.conf</em> file parsed by <strong>deluged</strong>. To do so, find the <strong>__init__</strong> method, and you will notice the section where the other attributes are initialized -- add our new option to the list:</p>

<pre><code>self.iface = self.config["iface"]  
</code></pre>

<p>It's also probably a good idea to set a default value, in case one isn't specified in the config file, otherwise <em>Twisted</em> might freak out. Find the <code>CONFIG_DEFAULTS</code> object and add a default value for <code>iface</code>:</p>

<pre><code>"iface": "0.0.0.0",
</code></pre>
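<p>The reason the default matters: Deluge builds its effective configuration by starting from <code>CONFIG_DEFAULTS</code> and layering the user's <code>web.conf</code> values on top, so a missing <code>iface</code> key falls back cleanly to listening on all addresses. A toy model of that merge (this is not Deluge's actual Config class, just the behavior our new key relies on):</p>

```python
CONFIG_DEFAULTS = {
    "port": 8112,        # default WebUI port
    "iface": "0.0.0.0",  # our new key: all addresses unless overridden
}

def effective_config(user_conf):
    """Defaults first, user-supplied values layered on top."""
    conf = dict(CONFIG_DEFAULTS)
    conf.update(user_conf)
    return conf
```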

<h2 id="patching">Patching</h2>

<p>Rather than editing the source by hand (unless you're doing so for fun or out of curiosity), the easiest way is to just apply the patch.</p>

<blockquote>
  <p><strong>Patch</strong> (diff): <a href="https://ycc.io/patch/deluge_1313_webui_ip.diff">https://ycc.io/patch/deluge_1313_webui_ip.diff</a></p>
  
  <p><em>Updated for v1.3.13</em></p>
</blockquote>

<p>To apply the patch, you should have a clean, working copy/clone of the source. Change to the directory where the source is located and run the following:</p>

<pre><code>wget https://ycc.io/patch/deluge_1313_webui_ip.diff  
git apply deluge_1313_webui_ip.diff  
</code></pre>

<h2 id="buildinginstalling">Building &amp; Installing</h2>

<p>Full build and installation instructions can be found on Deluge's website, so here is the tl;dr version:</p>

<pre><code>python setup.py build  
sudo python setup.py install  
</code></pre>

<p>Building plugins:</p>

<pre><code>cd deluge/plugins  
for i in */setup.py; do python $i bdist_egg ; done ; rm -Rf *.egg-info ; rm -Rf build ; mv dist/*.egg ./  
</code></pre>

<p>You can then either symlink, or copy the plugins to <code>~/.config/deluge/plugins</code> for the user deluge will be running under.</p>

<p>If you receive errors about <strong>setuptools</strong>, then you will need to install them first, then rebuild/reinstall.</p>

<ul>
<li>For Debian/Ubuntu/Mint: <code>sudo apt-get install python-setuptools</code></li>
<li>For RedHat/CentOS/Fedora: <em>(as root)</em> <code>yum -y install python-setuptools</code></li>
</ul>

<h2 id="conclusion">Conclusion</h2>

<p>After the build and installation are complete, ensure that you stop and restart <strong>deluged</strong> completely. I would also recommend enabling debug mode initially in case any problems arise.</p>

<h4 id="wheredoichangetheip">Where do I change the IP?</h4>

<p>Note that since our new option isn't integrated into the GUI, you will have to manually edit the config file to change it. On most systems, this is located in <code>~/.config/deluge/web.conf</code>. This is a JSON file, so you can simply add a value like the following:  </p>

<pre><code>   "iface": "1.2.3.4",
</code></pre>

<p>The config parser in Deluge will pick this up and use it for the WebUI IP address! You can still retain the old functionality by using "0.0.0.0" to bind to all IP addresses, if you'd like.</p>]]></content:encoded></item></channel></rss>