<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[randu]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://randu.me/</link><image><url>https://randu.me/favicon.png</url><title>randu</title><link>https://randu.me/</link></image><generator>Ghost 4.48</generator><lastBuildDate>Thu, 24 Oct 2024 15:03:47 GMT</lastBuildDate><atom:link href="https://randu.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[CentOS 7 Life Support]]></title><description><![CDATA[<p>IBM has chosen to destroy CentOS. &#xA0;RHEL 7 is now past regular EOL and is on extended support.<br><br>As of July 5, 2024, the mirrors used for CentOS 7 package management no longer work. &#xA0;Content is still available, but only in the vault. &#xA0;Ideally, all systems would be</p>]]></description><link>https://randu.me/centos-7-life-support/</link><guid isPermaLink="false">668b2375126224ebd23281e8</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Mon, 08 Jul 2024 01:04:52 GMT</pubDate><content:encoded><![CDATA[<p>IBM has chosen to destroy CentOS. &#xA0;RHEL 7 is now past regular EOL and is on extended support.<br><br>As of July 5, 2024, the mirrors used for CentOS 7 package management no longer work. &#xA0;Content is still available, but only in the vault. &#xA0;Ideally, all systems would be migrated to a current Debian release. &#xA0;For those where short-term migration is not an option, I set up a quick mirror list generator for CentOS. &#xA0;It is available at <a href="https://centosmirrorlist.randu.me/?release=7&amp;arch=x86_64&amp;repo=os">centosmirrorlist.randu.me</a> over both http and https. 
&#xA0;Be aware that you could always adjust the base repo location to point directly to a single mirror.</p><h2 id="usage">Usage</h2><p>Adjust any repos pointing to mirrorlist.centos.org to centosmirrorlist.randu.me. &#xA0;</p><p><code>sed -i &apos;s|http://mirrorlist.centos.org|http://centosmirrorlist.randu.me|g&apos; /etc/yum.repos.d/*.repo</code></p><h2 id="architecture">Architecture</h2><p>Some of the choices here were a combination of using what I already had available and trying some tools I have not previously used.</p><p>The code backing this is a very simple Go application using no external libraries, at around 100 lines of code.</p><p>I set up a new service on <a href="https://cloud.google.com/run">Google Cloud Run</a> through the console. &#xA0;During setup, this gave an option to automatically pull from GitHub and integrate their CI/CD system. &#xA0;Any push to the <code>main</code> branch of the repo will automatically build and deploy.</p><p>In front of all this is a pull endpoint on <a href="https://bunny.net">bunny cdn</a> serving out the updated mirror list to the public. 
&#xA0;The CDN endpoint is configured to serve cached responses during updates and when the source service is offline.</p><h2 id="server-list">Server List</h2><p>At the moment, the following vault sources are returned:</p><ul><li>vault.centos.org</li><li>archive.kernel.org/centos-vault</li><li>mirror.nsc.liu.se/centos-store</li></ul><p>The application makes it easy to add or remove mirrors, including private mirrors, as a safety mechanism in case IBM decides to push an update killing the vault or someone puts up an updates mirror containing packages from extended-support RHEL 7.</p>]]></content:encoded></item><item><title><![CDATA[Bot Detection Methods]]></title><description><![CDATA[<p>Over the past few years, I&apos;ve come up with a few bot detection methods that don&apos;t seem to be in common use. &#xA0;These won&apos;t do much to stop targeted attacks, but they will block common scan tools and scripts.</p><h3 id="detect-a-bot-by-missing-name">Detect a bot by</h3>]]></description><link>https://randu.me/bot-detection-by-protocol/</link><guid isPermaLink="false">5e62fed780e03516d30f0db6</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Mon, 11 May 2020 19:23:58 GMT</pubDate><content:encoded><![CDATA[<p>Over the past few years, I&apos;ve come up with a few bot detection methods that don&apos;t seem to be in common use. &#xA0;These won&apos;t do much to stop targeted attacks, but they will block common scan tools and scripts.</p><h3 id="detect-a-bot-by-missing-name">Detect a bot by missing name</h3><p>Most of the web has gone HTTPS. &#xA0;Security certificates are usually for a single hostname or a wildcard within a domain. &#xA0;The old practice of having a fallback to a default host can be turned on its head here. &#xA0;Set up your virtualhosts as usual, but set the fallback virtualhost to a static page explaining the user has been blocked. 
&#xA0;Use your server&apos;s log formatting to make these requests mimic a bot attack that will be caught by your normal bot detection systems.</p><h3 id="detect-a-bot-by-http-protocol-version">Detect a bot by HTTP protocol version</h3><p>This one leaves open the possibility of false positives if you have things other than web browsers hitting your site.</p><p>Most HTTP libraries used by bots and various programming environments speak HTTP/1.1, while modern browsers negotiate HTTP/2 over TLS. &#xA0;Any TLS connection using HTTP/1.1 that isn&apos;t from IE is unlikely to be a real user.</p><h3 id="detect-a-bot-by-http-status-code">Detect a bot by HTTP status code</h3><p>I don&apos;t think I&apos;ve ever seen an HTTP 400 result from a legitimate request.</p>]]></content:encoded></item><item><title><![CDATA[WAV playback in Internet Explorer (IE10,11) without ActiveX]]></title><description><![CDATA[<p>One of the problems that has existed since the demise of Flash is the lack of a consistent media experience. &#xA0;Flash definitely had its problems... but it was reasonably consistent across platforms and browsers. &#xA0;Things are getting better, but that is still not the case with modern audio/</p>]]></description><link>https://randu.me/wav-playback-in-internet-explorer-ie10-11/</link><guid isPermaLink="false">5e25f77080e03516d30f0d73</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Mon, 20 Jan 2020 19:52:17 GMT</pubDate><content:encoded><![CDATA[<p>One of the problems that has existed since the demise of Flash is the lack of a consistent media experience. &#xA0;Flash definitely had its problems... but it was reasonably consistent across platforms and browsers. &#xA0;Things are getting better, but that is still not the case with modern audio/video tag based playback, MSE, etc.</p><p>One odd outlier is playback of raw PCM data or WAV files. &#xA0;Recent versions of Chrome, Edge and Firefox all play back WAV files with an audio tag without any problem. &#xA0;Internet Explorer will not. 
&#xA0;<a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaSource">MSE </a>also does not specify an option for PCM or WAV playback. &#xA0;MSE also requires IE 11. &#xA0;The Web Audio APIs / AudioBuffer allow this, but they are not available in any version of IE.</p><p>I thought about the problem a bit and remembered the <code>&lt;bgsound&gt;</code> tag that has existed since early versions of IE... and confirmed that it will indeed play WAV files. &#xA0;The only problem is, there is no way of managing controls / seeking. &#xA0;I put together a proof of concept to work around this limitation.</p><p>The WAV file format is easy to parse. &#xA0;My implementation here is taking quite a few shortcuts... so it might not work with all files. &#xA0;The solution is pretty simple... read the WAV file as an ArrayBuffer with XHR, seek by modifying the size fields in the header and cutting the output... then feed a Blob as a URL for the <code>src</code> attribute to <code>&lt;bgsound&gt;</code>. &#xA0;If you were working with larger files, you could use 2 range requests &#x2013; 1 for the header and 1 for the body. &#xA0;My method below works under the assumption that seek performance will be improved by the browser cache.</p><pre><code class="language-javascript">function playWavIE(url, seek, callback) {
	var xhr = new XMLHttpRequest();
	xhr.onload = function(e) {
		var view = new DataView(xhr.response);
		var fmt = {
			fileSize: view.getUint32(4,true),
			dataType: view.getUint16(20,true),
			channels: view.getUint16(22,true),
			sampleRate: view.getUint32(24,true),
			sampleValue: view.getUint32(28,true), // byte rate (bytes per second)
			sampleBytes: view.getUint16(32,true),
			bitsPerSample: view.getUint16(34,true),
			dataSize: view.getUint32(40,true)
		};
		fmt.duration = fmt.dataSize / fmt.sampleValue;

		if(seek &gt; 0) {
			var cutBytes =  Math.floor(fmt.sampleValue*seek);
			var header = xhr.response.slice(0, 44);
			var hview = new DataView(header);
			hview.setUint32(4, fmt.fileSize - cutBytes,true);
			hview.setUint32(40, fmt.dataSize - cutBytes,true);

			var blob = new Blob([header, xhr.response.slice(hview.byteLength + cutBytes)], {&quot;type&quot;: &quot;audio/x-wav&quot;});
			url = URL.createObjectURL(blob);
		}

		if(document.getElementsByTagName(&quot;bgsound&quot;).length) {
			var bgsound = document.getElementsByTagName(&quot;bgsound&quot;)[0];
		}
		else {
			bgsound = document.createElement(&quot;bgsound&quot;);
			document.body.appendChild(bgsound);
		}

		bgsound.src = url;
		if(callback !== undefined) {
			callback(fmt);
		}

	};
	xhr.open(&apos;GET&apos;, url, true);
	xhr.responseType = &apos;arraybuffer&apos;;
	xhr.send();
}

function playCB(fmt) {
	console.log(&quot;playCB called&quot;);
	console.log(fmt);
}
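
// Usage sketch (hypothetical URL): start playback 30 seconds into the
// file, logging the parsed header via playCB:
//   playWavIE(&apos;/media/sample.wav&apos;, 30, playCB);

// Sanity check of the seek math above: the bytes dropped from the data
// chunk are byteRate (fmt.sampleValue) * seconds, floored.
function wavCutBytes(byteRate, seekSeconds) {
	return Math.floor(byteRate * seekSeconds);
}
// 16-bit stereo 44.1kHz audio has a byte rate of 176400, so a 2 second
// seek drops wavCutBytes(176400, 2) === 352800 bytes of PCM data.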
</code></pre>]]></content:encoded></item><item><title><![CDATA[Listing top-level functions in Python]]></title><description><![CDATA[<p>I had an interesting conversation today that brought up an unusual use case for parsing out (top-level) functions in a Python file.</p><p>My first draft produced code that could have a number of problems... the biggest being security. &#xA0;As it imported the python file, code could be executed. &#xA0;</p>]]></description><link>https://randu.me/listing-top-level-functions/</link><guid isPermaLink="false">5e0ad3a980e03516d30f0d23</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Tue, 31 Dec 2019 05:00:52 GMT</pubDate><content:encoded><![CDATA[<p>I had an interesting conversation today that brought up an unusual use case for parsing out (top-level) functions in a Python file.</p><p>My first draft produced code that could have a number of problems... the biggest being security. &#xA0;As it imported the python file, code could be executed. &#xA0;Another likely problem revolves around long-running processes and how Python handles imports. &#xA0;Even if the import is handled in a function&apos;s namespace, the setup functions, etc. will only be executed once. &#xA0;Even trying to work around this with a re-import, the way the internal dictionaries are updated would likely result in unexpected behavior.</p><p>This may not be the ideal solution, but it is fairly safe (no code execution) and still uses the CPython interpreter/compiler rather than trying to parse with regex, etc. &#xA0;This is a quick bit of code and does not include proper error handling.</p><pre><code class="language-python">#!/usr/bin/env python3
# NOTE: requires python &gt;= 3.2; the parser module is deprecated in 3.9
# and removed in python 3.10

import parser

def moduleFunctions(fileName):
	d = {}

	with open(fileName, &apos;r&apos;) as myfile:
		code_str = myfile.read()

	code_obj = parser.suite(code_str).compile()
	# compiled code objects for top-level functions appear in co_consts
	for obj in code_obj.co_consts:
		if isinstance(obj, type(code_obj)):
			# map each function name to a tuple of its argument names
			d[obj.co_name] = obj.co_varnames[:obj.co_argcount]
	return d</code></pre>]]></content:encoded></item><item><title><![CDATA[iptables ASN sets]]></title><description><![CDATA[<p>I previously wrote about using iptables to restrict access based on a country. &#xA0;Please refer to that article for implementing these ipsets. &#xA0;</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://randu.me/ssh-brute-force/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">SSH &amp; iptables geo restriction</div><div class="kg-bookmark-description">SSH brute force attacks are incredibly common. I primarily use sshguard to auto-block individual IPs after a certain number of failed requests.</div></div></a></figure>]]></description><link>https://randu.me/iptables-asn-lists/</link><guid isPermaLink="false">5de91c6580e03516d30f0cfc</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Thu, 05 Dec 2019 16:58:33 GMT</pubDate><content:encoded><![CDATA[<p>I previously wrote about using iptables to restrict access based on a country. &#xA0;Please refer to that article for implementing these ipsets. &#xA0;</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://randu.me/ssh-brute-force/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">SSH &amp; iptables geo restriction</div><div class="kg-bookmark-description">SSH brute force attacks are incredibly common. I primarily use sshguard to auto-block individual IPs after a certain number of failed requests. 
This will continue, but I wanted to add another layer to reduce the number of connections making it to my SSH server processes without either changing the&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://randu.me/favicon.ico"><span class="kg-bookmark-author">randu</span><span class="kg-bookmark-publisher">Brad Murray</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://randu.me/content/images/size/w600/2019/10/main_window-1.png"></div></a></figure><p>At times, you may want to restrict (or open) access to a particular network that may have many IP ranges. &#xA0;The most common use cases would be to whitelist access from a corporate network or a specific ISP.</p><p>If you don&apos;t know the ASN you will be working with, I suggest using <a href="https://bgp.he.net/">bgp.he.net</a> to look it up.</p><p>Here is a quick bash script to create an ipset for a particular ASN.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">#!/bin/bash
if [ ! -x &quot;/usr/bin/whois&quot; ]; then
        echo &quot;whois must be installed at /usr/bin/whois&quot;
        exit 2
elif [ $# -ne 1 ]; then
        echo &quot;Usage $0 [ASN #]&quot;
        exit 1
elif ! [ &quot;$1&quot; -eq &quot;$1&quot; ] 2&gt; /dev/null; then
        echo &quot;ASN must be a number&quot;
        exit 1
fi

setname=&quot;as$1&quot;

echo &quot;create $setname hash:net family inet hashsize 1024 maxelem 2048&quot; &gt; &quot;$setname.ipset&quot;
/usr/bin/whois -h whois.radb.net -i origin AS$1 | /bin/grep -Eo &apos;([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+&apos; | sed &quot;s/^/add $setname /&quot; &gt;&gt; &quot;$setname.ipset&quot;
lines=$(wc -l &lt; &quot;$setname.ipset&quot;)

if (($lines &lt; 2)); then
        echo &quot;ASN $1 not found or no routes found&quot;
        rm -f &quot;$setname.ipset&quot;
        exit 1
elif (($lines &gt; 2049)); then
        echo &quot;More than 2048 records found, adjust maxelem for set&quot;
fi
</code></pre>
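
For clarity, the whois-to-ipset transformation the grep/sed pipeline performs can be sketched in Python. This is an illustration only: the `asn_ipset_lines` helper, the set name, and the abridged RADb-style output are hypothetical, not part of the script above.

```python
import re

def asn_ipset_lines(setname, whois_output, maxelem=2048):
    # Same transformation as the grep/sed pipeline: one "create" line,
    # then an "add" line for each announced IPv4 prefix in the output.
    lines = ["create %s hash:net family inet hashsize 1024 maxelem %d"
             % (setname, maxelem)]
    for cidr in re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}/\d{1,2}\b", whois_output):
        lines.append("add %s %s" % (setname, cidr))
    return lines

# Abridged, made-up RADb-style response for AS64496 (a documentation ASN):
sample = "route: 192.0.2.0/24\norigin: AS64496\nroute: 198.51.100.0/24\n"
print("\n".join(asn_ipset_lines("as64496", sample)))
```

The printed lines are in the same format the real script feeds to `ipset restore`.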
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Terraform Revisited]]></title><description><![CDATA[<p>It&apos;s now been several months since our <a href="https://www.terraform.io/">terraform </a>based scaling system has needed any manual intervention.</p><h2 id="a-recap">A Recap</h2><p>Among other things, we use terraform to scale CDN edge capacity, as well as custom compute instances for live video transcoding.</p><p>We use multiple cloud providers with overlapping locations and</p>]]></description><link>https://randu.me/terraform-revisited/</link><guid isPermaLink="false">5db8449a29b700044f53e252</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Tue, 29 Oct 2019 15:24:26 GMT</pubDate><content:encoded><![CDATA[<p>It&apos;s now been several months since our <a href="https://www.terraform.io/">terraform </a>based scaling system has needed any manual intervention.</p><h2 id="a-recap">A Recap</h2><p>Among other things, we use terraform to scale CDN edge capacity, as well as custom compute instances for live video transcoding.</p><p>We use multiple cloud providers with overlapping locations and preferred data centers for each location. &#xA0;The preference is based on a combination of factors including transit providers at the location, long term reliability and cost. &#xA0;In most cases we use <a href="https://www.linode.com/">Linode</a>, <a href="https://www.vultr.com/">Vultr </a>or <a href="https://www.hetzner.com/cloud">Hetzner</a>, although we are configured to work with additional providers.</p><p>Linode is generally our top choice, primarily because our data transfer is high and they pool all allocated transfer. &#xA0;Linode&apos;s host instances are on 40G links, all VMs include a fair amount of data transfer (per hour/month) and the per-VM port speeds are also generous.</p><p>We do have issues with Linode, as CPU steal is sometimes high. &#xA0;This is mostly an issue with their standard plans. 
&#xA0;In the past, I&apos;ve also been disappointed with the N Cal data center, due to both power problems and transit (at the time, mostly HE).</p><p>We have seen times when steal bumps up a bit, even on their dedicated CPU instances. &#xA0;I suspect these are sold by the thread, not by the core, so on rare occasions you may end up on a host that is heavily loaded.</p><p>Commands to start and stop instances can be triggered by our clients, by our backend systems or through scheduling. &#xA0;All of this is implemented through our own scripts.</p><h2 id="troubleshooting-and-improvements">Troubleshooting and Improvements</h2><p>When we first started offering services that included live transcoding, we occasionally got complaints of video breakup, dropped frames, etc. &#xA0;It didn&apos;t take long to confirm that this was caused by steal (oversold CPU). &#xA0;Buying a larger plan doesn&apos;t work, as a single low-performance core breaks things. &#xA0;For most applications, slight inconsistencies in performance aren&apos;t a huge problem. &#xA0;In the case of video transcoding, it makes a huge difference.</p><p>When we first started doing this, Linode didn&apos;t offer &apos;Dedicated CPU&apos; plans, either.</p><p>The first step was a simple warning. &#xA0;I added a method in the provisioning scripts to run <a href="https://en.wikipedia.org/wiki/Mpstat">mpstat </a>and send an alert to my devices through <a href="https://pushover.net/">pushover </a>if the steal rate at time of provisioning was too high (&gt;1%). &#xA0;That 1% steal has virtually no effect on a web server, but these aren&apos;t web servers.</p><p>The second step was returning an error code from the provisioning script if the steal was too high. &#xA0;To prevent infinite loops (and large bills) I added a safeguard in our provisioning script that keeps a count of retries (that resets after a full success). The script also parses the output to determine which VM is failing. 
&#xA0;In our case, there are often multiple VMs provisioning at any time. &#xA0;After too many retries, it sends a high-priority alert via pushover and prevents future attempts at running until a manual reset. &#xA0;This made a huge difference in the amount of manual effort involved, but there were still occasions where, after a few retries, it could not get a host with a clean CPU.</p><h2 id="failover">Failover</h2><p>The final solution builds on the above, but adds a failover between plans and providers. &#xA0;Our generated .tf files now add a comment line with JSON-encoded data containing important metadata such as the provider used, datacenter, CPU/plan type and retry count. &#xA0;Our wrapper parses the output from <code>terraform apply</code> and keeps track of which VMs failed due to CPU steal. &#xA0;After terraform completes its operation, this information is used to run a failover function. &#xA0;</p><p>We have an internal flow used to determine a failover plan. &#xA0;Here&apos;s an example...</p><p>Initial configuration was for a standard 6-core Linode instance. &#xA0;On the first try, it spots CPU steal. &#xA0;Regenerate the configuration, but with an 8-core dedicated instance (there is no 6-core dedicated). &#xA0;If there is still a failure (this is very rare), regenerate the configuration at Vultr. &#xA0;If it fails again (this hasn&apos;t happened in the real world, only in our simulated environment), try a different Vultr data center.</p>
&#xA0;You&apos;ll miss out on many opportunities by being too cautious.</p>]]></description><link>https://randu.me/untitled/</link><guid isPermaLink="false">5da74830fdbedb04401a6ca5</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Wed, 16 Oct 2019 16:41:54 GMT</pubDate><content:encoded><![CDATA[<p>Here are some things I would tell my younger self... and things I plan to teach my children. &#xA0;Some of these seem obvious, but they are lessons I&apos;ve had to learn.</p><p>Go for it. &#xA0;You&apos;ll miss out on many opportunities by being too cautious. &#xA0;Fear and failure are part of life. &#xA0;Learn from the failures and you&apos;ll learn to tolerate the fear.</p><p>You&apos;re right that being different can be a very good thing... but learning to relate is important.</p><p>Get an outside perspective. &#xA0;Learn to respect the opinion of others even when they are misinformed or under-informed. &#xA0;Get perspective from other cultures. &#xA0;Even if you reject most of their ideas, you gain understanding and the ability to think flexibly. &#xA0;You might even change your mind on important matters.</p><p>Yes, substance is more important than style -- but in a marketplace of ideas, style often wins -- and as a result has more influence. &#xA0;Always take this into consideration, and you won&apos;t be so surprised at the way things go.</p><p>Most people have no problem with initiating violence to accomplish their goals or allay their fears. &#xA0;This force is usually carried out through government, but not always. &#xA0;Most of these people don&apos;t even recognize their actions are violent and will continue to deny it even when it&apos;s made clear.</p><p>There&apos;s nothing wrong with being wrong. &#xA0;It&apos;s only foolish when you choose to stay that way after being confronted with the facts.</p><p>Don&apos;t fault people for valuing feelings over facts. 
&#xA0;Although the heart is often wrong, validated facts can do just as much damage when applied without feeling. &#xA0;(for an extreme example, think German bioethics).</p>]]></content:encoded></item><item><title><![CDATA[SSH & iptables geo restriction]]></title><description><![CDATA[<p>SSH brute force attacks are incredibly common. &#xA0;I primarily use sshguard to auto-block individual IPs after a certain number of failed requests. &#xA0;This will continue, but I wanted to add another layer to reduce the number of connections making it to my SSH server processes without either changing</p>]]></description><link>https://randu.me/ssh-brute-force/</link><guid isPermaLink="false">5d9e1cb3fdbedb04401a6b7d</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Thu, 10 Oct 2019 03:06:50 GMT</pubDate><content:encoded><![CDATA[<p>SSH brute force attacks are incredibly common. &#xA0;I primarily use sshguard to auto-block individual IPs after a certain number of failed requests. &#xA0;This will continue, but I wanted to add another layer to reduce the number of connections making it to my SSH server processes without either changing the port or restricting to a specific network.</p><h3 id="geographic-solutions">Geographic Solutions</h3><p>I found a number of people were using <a href="https://dev.maxmind.com/geoip/geoip2/geolite2/">maxmind&apos;s geolite country</a> database with <a href="https://www.axllent.org/docs/view/ssh-geoip/">tcp wrappers / hosts.allow and a shell script</a>. &#xA0;I really don&apos;t want a shell process spawned with every request; I want them blocked by the firewall. &#xA0;I looked at the plain text dump of this database, and was surprised to see a number of /32 addresses in a country-level database. &#xA0;I didn&apos;t want a crazy number of entries added to iptables.</p><p>I then looked at using the simple <a href="https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml">IANA assignments of /8 addresses</a>. 
&#xA0;This would probably work for me... as to my knowledge there aren&apos;t any /8&apos;s assigned to outside regions that contain smaller assignments back to ARIN.</p><p>After a little more digging, I found a site/service that offers regularly updated aggregated <a href="http://www.ipdeny.com/ipblocks/">IP blocks by country</a>. &#xA0;The sizes here are reasonable, with around 18,000 blocks for the US.</p><h3 id="netfilter-sets">netfilter sets</h3><p>After a little digging and performance tests, it looks like creating whitelists will work fine. &#xA0;I&apos;m not a fan of firewalld, so I will use iptables/ipset for this.</p><p><em>Note: All sets must be created before the iptables rules are generated. &#xA0;It&apos;s OK if they are empty, with the notable exception of the whitelist, as you don&apos;t want to risk getting locked out of your server.</em></p><p>To prepare the sets, I created a quick bash script. &#xA0;I found something similar written by someone else, but it was very slow for large numbers of networks, as it spawned a process for every row being added. </p><!--kg-card-begin: markdown--><pre><code class="language-bash">#!/bin/bash
if [ $# -ne 1 ]; then
	echo &quot;Usage $0 [2 char country code]&quot;
	exit 1
fi

url=&quot;http://www.ipdeny.com/ipblocks/data/aggregated/$1-aggregated.zone&quot;
setname=&quot;geo_$1&quot;

echo &quot;create $setname hash:net family inet hashsize 8192 maxelem 65536&quot; &gt; &quot;$setname.ipset&quot;
curl -s &quot;$url&quot; | sed &quot;s/^/add $setname /&quot; &gt;&gt; &quot;$setname.ipset&quot;

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>This will create a file that can be imported using: <code>cat $filename | ipset restore</code></p>
<!--kg-card-end: markdown--><p><strong>iptables rules</strong></p><p>We use sshguard to handle temporary blocking for those doing brute-force attacks.</p><p>For SSH:</p><ul><li>Allow IPs in the whitelist</li><li>Block anything in the sshguard blocklist</li><li>Allow anything else in the US (or your country/countries of choice)</li><li>Block anything in the rest of the world</li></ul><!--kg-card-begin: markdown--><pre><code class="language-iptables">*filter

# Allow all loopback (lo0) traffic and reject traffic
# to localhost that does not originate from lo0.
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT

# Allow ping.
-A INPUT -p icmp -m state --state NEW --icmp-type 8 -j ACCEPT

# Allow mosh
-A INPUT -p udp -m multiport --dports 60000:61000 -j ACCEPT

# Allow SSH connections from whitelist
-A INPUT -p tcp --dport 22 -m state --state NEW -m set --match-set whitelist src -j ACCEPT

# Block SSH from sshguard blocklist
-A INPUT -p tcp --dport 22 -m state --state NEW -m set --match-set sshguard4 src -j REJECT

# Allow SSH connections from usa
-A INPUT -p tcp --dport 22 -m state --state NEW -m set --match-set geo_us src -j ACCEPT

# Allow HTTP and HTTPS connections from anywhere
# Allow RTMP from anywhere
-A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -p tcp --dport 1935 -m state --state NEW -j ACCEPT

# Allow inbound traffic from established connections.
# This includes ICMP error returns.
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Reject all other inbound.
-A INPUT -j REJECT

# Reject all traffic forwarding.
-A FORWARD -j REJECT

COMMIT
</code></pre>
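
These SSH rules are evaluated first-match, which is why the whitelist must precede the sshguard blocklist and the geo allow. A small Python simulation illustrates the ordering; the set contents here are hypothetical, and real sets hold CIDR blocks rather than single IPs.

```python
# Hypothetical set membership; in production the kernel matches real ipsets.
sets = {
    "whitelist": {"203.0.113.10"},
    "sshguard4": {"203.0.113.10", "198.51.100.7"},
    "geo_us":    {"203.0.113.10", "192.0.2.5"},
}

# SSH rules in ruleset order: the first matching rule decides.
rules = [("whitelist", "ACCEPT"), ("sshguard4", "REJECT"), ("geo_us", "ACCEPT")]

def ssh_verdict(src_ip):
    for setname, action in rules:
        if src_ip in sets[setname]:
            return action
    return "REJECT"  # the final catch-all INPUT rule

print(ssh_verdict("203.0.113.10"))  # ACCEPT: whitelisted, even though sshguard lists it
print(ssh_verdict("198.51.100.7"))  # REJECT: sshguard blocklist
print(ssh_verdict("192.0.2.5"))     # ACCEPT: US geo set
print(ssh_verdict("203.0.113.99"))  # REJECT: rest of world
```

Swapping the first two rules would lock out a whitelisted admin the moment sshguard temporarily lists their IP.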
<!--kg-card-end: markdown--><h3 id="making-it-work-on-centos-7-8-">Making it work on CentOS (7/8)</h3><p>CentOS 7+ by default uses firewalld. &#xA0;It must be disabled, and the iptables/ipset tools and systemd unit files must be installed.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">yum -y install iptables iptables-services ipset ipset-service
systemctl disable firewalld
systemctl stop firewalld
systemctl mask firewalld

systemctl enable iptables
systemctl enable ipset
</code></pre>
<p>After you have finished creating your rules and sets, you&apos;ll want to save them.</p>
<pre><code class="language-bash">/usr/libexec/ipset/ipset.start-stop save whitelist
/usr/libexec/ipset/ipset.start-stop save sshguard4
/usr/libexec/ipset/ipset.start-stop save geo_us
/usr/libexec/iptables/iptables.init save
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Multi-Site DVR - Open Source]]></title><description><![CDATA[<p><em>Note: this article is a work in progress</em></p><p>Code available at <a href="https://github.com/eutychus/multisite-dvp">https://github.com/eutychus/multisite-dvp</a> . &#xA0;Hosted service available from https://www.lightcastmedia.com/</p><p>Many of our clients are churches and groups running large live or partially delayed events across multiple locations. &#xA0;In some circumstances, a normal</p>]]></description><link>https://randu.me/multi-site-dvr/</link><guid isPermaLink="false">5d9b5dbffdbedb04401a6aff</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Mon, 07 Oct 2019 16:12:11 GMT</pubDate><media:content url="https://randu.me/content/images/2019/10/main_window-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://randu.me/content/images/2019/10/main_window-1.png" alt="Multi-Site DVR - Open Source"><p><em>Note: this article is a work in progress</em></p><p>Code available at <a href="https://github.com/eutychus/multisite-dvp">https://github.com/eutychus/multisite-dvp</a> . &#xA0;Hosted service available from https://www.lightcastmedia.com/</p><p>Many of our clients are churches and groups running large live or partially delayed events across multiple locations. &#xA0;In some circumstances, a normal remote video stream works great... but it&apos;s not ideal.</p><h3 id="a-common-scenario">A common scenario</h3><p>A multi-site church runs multiple services at each location. &#xA0;The music and announcements are local to each campus, but the sermon is broadcast from a central location. &#xA0;</p><p>Internet connections aren&apos;t always consistent, and on some occasions, remote sites are connected over unreliable links (e.g. rural wifi based internet, cellular). &#xA0;</p><p>Our solution allows local downloading of video segments, scheduling and cueing playback. 
&#xA0;It also allows playback when the destination location&apos;s internet is slow or unreliable.</p><p>This works regardless of whether the client site was online when the broadcast began (server-side DVR) and would even allow playback at a site without internet (sneakernet &#x2013; all necessary segments downloaded while live and moved to the destination location).</p><h3 id="interface">Interface</h3><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2019/10/main_window.png" class="kg-image" alt="Multi-Site DVR - Open Source" loading="lazy"></figure><p>In the top left is an indicator showing whether there is a current live broadcast. &#xA0;It will also indicate if the net connection or connection to the DVR server has failed.</p><p>Video controls are available on hover. &#xA0;Below the video are timecodes and seek options. &#xA0;This allows you to seek to a specific clock time, rather than just an offset in the video.</p><p>Storage gives you an overview of information about video available on the server. &#xA0;White blocks are on the server, but not on the client; green blocks are downloaded segments. &#xA0;Purple is the current playback position. &#xA0;Black blocks indicate discontinuity / transitions between multiple recordings. &#xA0;The amount of local storage is configurable and the system automatically purges oldest first when storage gets low.</p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2019/10/schedule.png" class="kg-image" alt="Multi-Site DVR - Open Source" loading="lazy"></figure><p>A simple scheduling tool allows you to automatically download sections of video. 
&#xA0;In our church service scenario, you might want to skip the first 20-30 minutes (announcements, music) but download the next 45min.</p><p>The system is designed for multi-monitor use (one display for command and control, another output sent to a projector or video distribution system) and can be controlled using only the numeric keypad.</p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2019/10/status_window.png" class="kg-image" alt="Multi-Site DVR - Open Source" loading="lazy"></figure><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Purpose</th>
<th>Shortcut</th>
</tr>
</thead>
<tbody>
<tr>
<td>Toggle full screen</td>
<td>Enter, F</td>
</tr>
<tr>
<td>Toggle play/pause</td>
<td>Space, Keypad 5</td>
</tr>
<tr>
<td>Begin recording</td>
<td>R, *</td>
</tr>
<tr>
<td>Seek +10 minutes</td>
<td>+, Up Arrow</td>
</tr>
<tr>
<td>Seek -10 minutes</td>
<td>-, Down Arrow</td>
</tr>
<tr>
<td>Seek +1 minute</td>
<td>Right Arrow</td>
</tr>
<tr>
<td>Seek -1 minute</td>
<td>Left Arrow</td>
</tr>
<tr>
<td>Set focus to seek input</td>
<td>Keypad dot</td>
</tr>
</tbody>
</table>
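The keypad-only control described above amounts to a small dispatch table. A hypothetical sketch of such a keymap (Python for illustration only; the actual multisite-dvp client may map keys and name actions differently):

```python
# Hypothetical key-to-action map based on the shortcut table above.
# Illustrative only; the real multisite-dvp client may differ.
SEEK_SECONDS = {
    "+": 600, "ArrowUp": 600,      # seek +10 minutes
    "-": -600, "ArrowDown": -600,  # seek -10 minutes
    "ArrowRight": 60,              # seek +1 minute
    "ArrowLeft": -60,              # seek -1 minute
}

def action_for(key):
    """Translate a pressed key into a player action string."""
    if key in ("Enter", "f"):
        return "toggle_fullscreen"
    if key in (" ", "5"):
        return "toggle_play"
    if key in ("r", "*"):
        return "begin_recording"
    if key == ".":
        return "focus_seek_input"
    if key in SEEK_SECONDS:
        return "seek:%+d" % SEEK_SECONDS[key]
    return None

print(action_for("5"))          # toggle_play
print(action_for("ArrowLeft"))  # seek:-60
```

A flat table like this is what makes numeric-keypad-only operation practical: every action is reachable without modifier keys.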
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Browser upload to Backblaze b2]]></title><description><![CDATA[<p>Today, I released another piece of software under the <a href="https://en.wikipedia.org/wiki/MIT_License">MIT license</a>. &#xA0;This time, it allows web users to securely upload files directly to a <a href="https://www.backblaze.com/b2/cloud-storage.html">b2</a> bucket.</p><p>The <a href="https://www.backblaze.com/b2/docs/b2_authorize_account.html">b2 api</a> is reasonably nice, although it has one major problem &#x2013; tokens for small file uploads are not secure</p>]]></description><link>https://randu.me/browser-upload-to-backblaze-b2/</link><guid isPermaLink="false">5d8e0868fdbedb04401a6a85</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Fri, 27 Sep 2019 13:24:35 GMT</pubDate><content:encoded><![CDATA[<p>Today, I released another piece of software under the <a href="https://en.wikipedia.org/wiki/MIT_License">MIT license</a>. &#xA0;This time, it allows web users to securely upload files directly to a <a href="https://www.backblaze.com/b2/cloud-storage.html">b2</a> bucket.</p><p>The <a href="https://www.backblaze.com/b2/docs/b2_authorize_account.html">b2 api</a> is reasonably nice, although it has one major problem &#x2013; tokens for small file uploads are not secure in a way that would allow them to be passed back to the client.</p><p>I started with <a href="https://github.com/eutychus/resumable.js">resumable.js</a>. &#xA0;We&apos;ve been using a modified version of this to handle large media uploads for a few years now. &#xA0;It&apos;s a simple, relatively clean bit of code without any major external dependencies. &#xA0;We had to make a <a href="https://github.com/eutychus/resumable.js/commit/af4ea0314bb1c292b7d9bc60e4490cf4182ac471">few changes</a> to resumable.js to allow our extension to work.
</p><p>The frontend is a set of callbacks for resumable.js to perform sha1 hashing, fetch tokens, alter upload addresses, etc.</p><p>The backend (written in PHP 7) handles authenticating end-user upload requests and creating Backblaze tokens. &#xA0;It also handles receiving small (up to 5MB) files from the end user and uploading them to b2. &#xA0;The only thing it uses that may not be in everyone&apos;s PHP stack is Redis, which is optional but highly recommended for performance.</p><p>A work in progress...</p><ul><li>More testing</li><li>Example page with upload progress reporting</li><li>Better handling of error conditions / reporting back to resumable.js</li><li>Multi-threaded uploads of large files (we do this on our previous custom version of resumable.js)</li></ul>]]></content:encoded></item><item><title><![CDATA[Object Storage]]></title><description><![CDATA[<p><em>Updated 9/22/20</em></p><p>I&apos;ve been reviewing options for Object Storage. &#xA0;For many years, we&apos;ve maintained our own systems for this purpose. &#xA0;Right now, most of our data is stored on a set of colocated storage servers (in different data centers in the same</p>]]></description><link>https://randu.me/object-storage/</link><guid isPermaLink="false">5d83c2d6fdbedb04401a6946</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Thu, 19 Sep 2019 20:49:48 GMT</pubDate><content:encoded><![CDATA[<p><em>Updated 9/22/20</em></p><p>I&apos;ve been reviewing options for Object Storage. &#xA0;For many years, we&apos;ve maintained our own systems for this purpose. &#xA0;Right now, most of our data is stored on a set of colocated storage servers (in different data centers in the same city) running zfs in a raidz2 configuration.
&#xA0;Data is mirrored between servers using <a href="https://axkibe.github.io/lsyncd/">lsyncd</a>.</p><p>There are many factors to consider, although most stick with the option given by their primary cloud or datacenter provider.</p><!--kg-card-begin: markdown--><h2 id="featuresandpricingoverview">Features and Pricing Overview</h2>
<table>
<thead>
<tr>
<th></th>
<th>BackBlaze b2</th>
<th>Wasabi</th>
<th>Linode</th>
<th>Vultr</th>
<th>Amazon S3</th>
<th>Google</th>
</tr>
</thead>
<tbody>
<tr>
<td>Storage $/TB</td>
<td>$5.00</td>
<td>$5.99</td>
<td>$20.00</td>
<td>$20.00</td>
<td>$23.00</td>
<td>$20.00</td>
</tr>
<tr>
<td>Transfer $/TB</td>
<td>$10.00</td>
<td>*</td>
<td>$10.00*</td>
<td>$10.00</td>
<td>$90.00*</td>
<td>$80.00*</td>
</tr>
<tr>
<td>Protocol</td>
<td>b2 &amp; S3</td>
<td>S3</td>
<td>S3</td>
<td>S3</td>
<td>S3</td>
<td>GCS</td>
</tr>
<tr>
<td>Locations</td>
<td>2/5</td>
<td>3/5</td>
<td>3/5</td>
<td>1/5</td>
<td>5/5</td>
<td>5/5</td>
</tr>
</tbody>
</table>
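For a concrete workload, the per-TB prices above reduce to simple arithmetic. A minimal sketch (Python for illustration; prices taken straight from the table, ignoring API-call fees, minimum-retention billing and free allowances):

```python
# Rough monthly cost from the table above, prices in $/TB/month.
# Ignores API-call fees, minimum retention, and free allowances.
PRICES = {
    "b2":     {"storage": 5.00,  "transfer": 10.00},
    "wasabi": {"storage": 5.99,  "transfer": 0.00},  # free egress while transfer <= storage
    "linode": {"storage": 20.00, "transfer": 10.00},
    "s3":     {"storage": 23.00, "transfer": 90.00},
}

def monthly_cost(provider, storage_tb, transfer_tb):
    p = PRICES[provider]
    return storage_tb * p["storage"] + transfer_tb * p["transfer"]

# Example workload: 10 TB stored, 5 TB/month egress.
print(monthly_cost("b2", 10, 5))  # 100.0
print(monthly_cost("s3", 10, 5))  # 680.0
```

At that shape of workload the spread between providers is large; it narrows considerably for storage-heavy archives with little egress.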
<!--kg-card-end: markdown--><h3 id="backblaze-b2">BackBlaze B2</h3><p>Backblaze has been offering cloud backup since 2007 and b2 object storage since 2015. &#xA0;They have a long history of providing a quality product at a low price.</p><p>Their strength lies in their solid track record and low, straightforward pricing. &#xA0;The biggest weakness is the datacenter options. &#xA0;In the US, your data will be stored in either Sacramento CA or Phoenix AZ. &#xA0;You do not have a choice of data centers. &#xA0;In Europe, your data will be stored in Amsterdam.</p><p>The b2 protocol is fairly easy to work with. &#xA0;One gotcha &#x2013; there is no way to pass a token to allow upload of a specific file smaller than 5MB. &#xA0;This isn&apos;t a problem for backups or transfers from servers you control. &#xA0;It is a problem if you want to allow your end users to upload directly to B2.</p><p>The recent addition of the S3 protocol for new buckets makes it much easier to integrate into existing code bases without vendor lock-in. &#xA0;IAM-like features are much more limited than in AWS or Wasabi.</p><h3 id="wasabi">Wasabi</h3><p>Wasabi is a newer offering (launched May 2017) started by the co-founders of Carbonite (online backup, launched in 2006). &#xA0;It&apos;s well funded and likely to be a strong contender for many years.</p><p>Since Wasabi uses the S3 protocol and supports many of the same IAM features as Amazon&apos;s offering, tool/code options are good and migration should be reasonably simple for those currently using S3.</p><p>They offer storage in US-East (Ashburn VA and Manassas VA), US-West (Hillsboro OR), Europe (Amsterdam NL) and Asia (Tokyo JP) and allow you to select the data center for each bucket.</p><p>One thing that makes Wasabi unique is the elimination of transfer and API call fees. &#xA0;This model only works if your average monthly <a href="https://wasabi.com/pricing/pricing-faqs/">transfer is less than your storage</a>.
&#xA0;It also only works if you have a limited amount of transient data, as you will be billed for a minimum of 90 days of storage for every file. &#xA0;It is my understanding that if you exceed the transfer limits, you will be asked to move to their older pricing model ($3.90/TB storage + $40/TB transfer).</p><h3 id="linode">Linode</h3><p>Linode&apos;s Object Storage is now available in Newark, Frankfurt and Singapore. &#xA0;The minimum price is $5/mo and includes 250GB storage and 1TB transfer. &#xA0;Additional storage is $0.02/GB and outbound transfer is $0.01/GB. &#xA0;I suspect it will mostly be attractive to those who have large deployments using Linode&apos;s computing systems. &#xA0;The biggest benefits will likely be fast internal network traffic to your Linode instances and using your existing transfer pool for data transfer, reducing costs.</p><h3 id="vultr">Vultr</h3><p>Vultr&apos;s Object Storage is a new offering. &#xA0;Availability is currently limited to Newark / US-East.</p><h3 id="amazon-s3">Amazon S3</h3><p>Amazon&apos;s S3 service is the big contender, although the pricing model is <a href="https://aws.amazon.com/s3/pricing/">very complex</a> and can get very expensive. &#xA0;The biggest advantage of S3 is the tight integration with Amazon&apos;s other services. &#xA0;When configured properly, data transfer from S3 to other AWS services is free... although API call fees can be significant.</p><h3 id="google-object-storage">Google Object Storage</h3><p><a href="https://cloud.google.com/storage/pricing">Google object storage</a>, similar to S3, would mostly be useful to services inside GCP.</p>]]></content:encoded></item><item><title><![CDATA[Roku BIF files]]></title><description><![CDATA[<p>I just did a little code cleanup and released one of our internal tools under the MIT license.</p><p>For on-demand video, Roku devices use a custom file format that stores a number of images (video frames) in a single file.
&#xA0;This file is used for trick-play where you can</p>]]></description><link>https://randu.me/roku-bif-files/</link><guid isPermaLink="false">5d7f960afdbedb04401a690a</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Mon, 16 Sep 2019 14:11:19 GMT</pubDate><content:encoded><![CDATA[<p>I just did a little code cleanup and released one of our internal tools under the MIT license.</p><p>For on-demand video, Roku devices use a custom file format that stores a number of images (video frames) in a single file. &#xA0;This file is used for trick-play where you can see a preview of what the video looks like when you fast-forward and rewind.</p><p>The format is a very simple archive with an index and a number of JPGs embedded. &#xA0;We re-use the Roku BIF files with a custom clappr player plugin (we will likely release this as well) along with a mouseover effect on our video lists page.</p><p><a href="https://github.com/eutychus/bifjs">github page with source code</a></p><p><a href="https://eutychus.github.io/bifjs/example/index.html">Example using our library</a></p>]]></content:encoded></item><item><title><![CDATA[Customizing Armbian]]></title><description><![CDATA[<p>I&apos;ve been working on a hardware project that includes a small single board computer (SBC) running Linux... in this case, <a href="https://www.armbian.com/">Armbian</a>. &#xA0;I didn&apos;t find any good instructions or scripts for customizing the image without fully rebuilding everything from source, so I did a little investigating.</p>]]></description><link>https://randu.me/customizing-armbian/</link><guid isPermaLink="false">5cb15b5a2b280c0c69dc4ae8</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Sat, 13 Apr 2019 03:57:54 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;ve been working on a hardware project that includes a small single board computer (SBC) running Linux... in this case, <a href="https://www.armbian.com/">Armbian</a>.
&#xA0;I didn&apos;t find any good instructions or scripts for customizing the image without fully rebuilding everything from source, so I did a little investigating. &#xA0;Here are some of my notes and what I came up with.</p><p>The armbian images are shipped as 7z-compressed disk images. &#xA0;This may be board-specific, but the image I used contained an MBR boot record as well as a single ext4 partition. &#xA0;This partition is resized on first boot.</p><p>To modify the image, I have a Debian development environment running under VirtualBox.</p><p><em>losetup -P /dev/loop0 filename.img</em> (this sets up the loopback device and forces a partition scan)</p><p>All my configuration is done from a script called from rc.local. &#xA0;It changes permissions to 000 on rc.local when it is complete. &#xA0;Among other things, this installs additional packages I include in the distribution.</p><p>From another identical SBC with a clean install of Armbian, I run the following to fetch the necessary packages:</p><p><em>apt-get clean &amp;&amp; apt-get install --download-only package1 package2 package3</em></p><p>where package1, etc. are all the packages I want to add. &#xA0;The deb files and necessary dependencies will be at /var/cache/apt/archives. &#xA0;These can be copied onto the image and installation done by the first-boot script.</p><p>After unmounting the loopback device, the image can be written back to an SD card using dd on Linux or balenaEtcher, etc. on Windows.</p><p><em>dd if=armbianimage.img of=/dev/sdc bs=1M oflag=direct status=progress</em></p>]]></content:encoded></item><item><title><![CDATA[Cloud Computing Management]]></title><description><![CDATA[<p>I previously wrote on our use of terraform to manage cloud/vps instances.
&#xA0;Here are more updates and notes on what I&apos;ve run into.</p><h2 id="consistency-and-availability">Consistency and Availability</h2><p><em>There is no cloud, it&apos;s just someone else&apos;s computer</em></p><p>One of the things I&apos;ve</p>]]></description><link>https://randu.me/cloud-management/</link><guid isPermaLink="false">5bc9e3d32b280c0c69dc4ace</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Sun, 06 Jan 2019 22:47:11 GMT</pubDate><content:encoded><![CDATA[<p>I previously wrote on our use of terraform to manage cloud/vps instances. &#xA0;Here are more updates and notes on what I&apos;ve run into.</p><h2 id="consistency-and-availability">Consistency and Availability</h2><p><em>There is no cloud, it&apos;s just someone else&apos;s computer</em></p><p>One of the things I&apos;ve spent quite a bit of time dealing with is maintaining consistent performance. &#xA0;How important this is depends on your usage scenario, but in our case (streaming media) it&apos;s vital. &#xA0;Be aware that these problems are the exception rather than the rule. &#xA0;Outside of hyper-budget providers, most VPS/cloud providers manage resources well... but when problems do happen, they can ruin your day.</p><p>We take a hybrid approach to deployments &#x2013; a combination of colo, dedicated servers, vps/cloud computing and SaaS using a variety of providers. &#xA0;Our colo systems are mirrored to multiple providers, and we try to keep anything on a VPS/CCI set up for automated deployment... but not all instances are created equal &#x2013; even when on the same plan with the same provider.</p><h3 id="cpu-steal">CPU Steal</h3><p>CPU steal can be caused by a number of factors... but it&apos;s effectively a noisy neighbor problem (Linode) or throttling (AWS t2/lightsail). &#xA0;Since multiple clients are using the same computer/host node, your processes may slow down significantly due to what other clients are doing.
&#xA0;In my experience, both networking and encoding performance are significantly affected once the CPU steal value rises above 2%. &#xA0;Our provisioning scripts check the kernel steal value and exit / report a failed installation if the value is higher than 1%.</p><h3 id="availability">Availability</h3><p>On rare occasions, cloud provisioning systems become unavailable. &#xA0;This is why we aren&apos;t dependent on a single provider.</p><p>We can easily switch a task to a different provider or data center in about 3 minutes.</p><p>We have also seen times where larger VMs are temporarily unavailable at a particular data center (this mostly happens with Vultr). &#xA0;For this case, we run a preflight check confirming availability before generating provisioning commands for terraform. &#xA0;If the provisioning still fails, we switch the VM to a different data center.</p><h2 id="some-provider-details-">Some provider details...</h2><p>This covers only some of the cloud providers we use. &#xA0;Later, I may cover my experiences with dedicated and colo providers.</p><h3 id="linode">Linode</h3><p>Performance here is usually good, but we have to monitor things closely. Linode host nodes are all connected at 40gbps, and plans offer generous bandwidth caps, which is fantastic for our purposes, and we seldom see any real network issues. &#xA0;Linode also pools transfer quotas across instances, which is also great for our relatively high-volume, high-burst usage scenario. &#xA0;We have, however, seen problems with CPU congestion/overselling, though we have never run into throttling. &#xA0;Here are some of the processors we have encountered with Linode. &#xA0;Note that I count threads here rather than cores, as all providers I have encountered have HT enabled and assign CPU based on threads rather than running without HT and locking CPU affinity to specific cores.</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>CPU</th>
<th>Clock</th>
<th>Threads</th>
<th>Max per server</th>
</tr>
</thead>
<tbody>
<tr>
<td>E5-2680v2</td>
<td>2.8</td>
<td>20</td>
<td>2</td>
</tr>
<tr>
<td>E5-2680v3</td>
<td>2.5</td>
<td>24</td>
<td>2</td>
</tr>
<tr>
<td>E5-2697v4</td>
<td>2.3</td>
<td>36</td>
<td>2</td>
</tr>
<tr>
<td>Xeon Gold 6148</td>
<td>2.4</td>
<td>40</td>
<td>4</td>
</tr>
<tr>
<td>AMD Epyc 7451</td>
<td>2.3</td>
<td>48</td>
<td>2</td>
</tr>
<tr>
<td>AMD Epyc 7501</td>
<td>2.0</td>
<td>64</td>
<td>2</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><h3 id="vultr">Vultr</h3><p>CPU performance has been more consistent with Vultr. &#xA0;I believe each host is connected at 10gbps. &#xA0;To the best of my knowledge, Vultr does not publish available port speeds/caps... although 1gbps is a pretty safe assumption. &#xA0;VPS nodes support AVX-512, and since I haven&apos;t had a problem with CPU steal, they have worked well so far for transcoder instances.</p><p>Vultr also offers &apos;bare metal&apos; instances, which are great all around at the current price point. &#xA0;They connect at 10gbps and are dedicated, so there are no concerns about noisy neighbors. &#xA0;Bare metal instances take a little longer to provision... typically around 5 minutes (while VPS provisioning at most providers is under 2 minutes).</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Type</th>
<th>CPU</th>
<th>Clock</th>
<th>Threads</th>
<th>Max per server</th>
</tr>
</thead>
<tbody>
<tr>
<td>VPS</td>
<td>Xeon Gold 5120</td>
<td>2.6</td>
<td>28</td>
<td>4</td>
</tr>
<tr>
<td>Bare Metal</td>
<td>Xeon E3-1270v6</td>
<td>3.8</td>
<td>8</td>
<td>1</td>
</tr>
</tbody>
</table>
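The availability preflight and data-center fallback described earlier reduce to a simple selection over current plan availability. A hypothetical sketch (Python; region and plan names are illustrative, and in practice the availability map would be fetched from the provider's API just before generating terraform commands):

```python
# Hypothetical region-fallback helper for the preflight check described
# earlier. `availability` maps region -> set of plan ids currently
# provisionable there; names below are made up for illustration.
def pick_region(plan, preferred, fallbacks, availability):
    """Return the first region where `plan` is currently available."""
    for region in [preferred, *fallbacks]:
        if plan in availability.get(region, set()):
            return region
    return None  # caller falls back to another provider entirely

availability = {
    "ewr": {"small-plan"},                # large plan sold out in ewr
    "ord": {"small-plan", "large-plan"},
}
print(pick_region("large-plan", "ewr", ["ord", "lax"], availability))  # ord
```

Keeping this as a pure function makes the fallback order easy to test; only the availability fetch touches the network.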
<!--kg-card-end: markdown--><h3 id="amazon">Amazon</h3><p>We mostly use Amazon for AI services. &#xA0;We seldom use their cloud computing products. &#xA0;The price points for EC2 are not competitive with other options, especially for high bandwidth consumption. &#xA0;Lightsail is priced appropriately but has poor performance (I suspect these are burstable T2 EC2 instances under the hood). &#xA0;Both CPU and network are heavily throttled after about 20-30 minutes. &#xA0;This can be detected both with benchmark software and kernel CPU steal information.</p><h3 id="digital-ocean">Digital Ocean</h3><p>We seldom use Digital Ocean, mostly due to network performance. &#xA0;The listed port speed is 1gbps, but performance is inconsistent, likely due to other clients using the same host node. &#xA0;We keep them configured as another backup provider for transcoding.</p><h3 id="hetzner-cloud">Hetzner Cloud</h3><p>Hetzner&apos;s offering is very nice and the price is fantastic, although data centers are only in Germany and Finland. &#xA0;We mostly use Hetzner for on-demand video encoding overflow. &#xA0;I believe each host is connected at 10gbps. &#xA0;CPU is Xeon Scalable (Skylake-SP) @ 2.1GHz. &#xA0;I am unsure of the specific model. &#xA0;Performance is great for encoding, since Skylake supports AVX-512.</p>]]></content:encoded></item><item><title><![CDATA[Sewell HD Link HDMI over IP]]></title><description><![CDATA[<h3 id="summary">Summary</h3><p><a href="https://www.amazon.com/gp/product/B00UKEO32M/">Devices </a>link at 100mbit, with a data rate of around 60-80mbps</p><p>Video quality is good, but compression artifacts are noticeable on text with dark backgrounds at close viewing distances</p><p>Latency is pretty good for a device like this in this price range (~$110)...
~65ms = ~2 frames @ 30fps.</p><p>Port isolation</p>]]></description><link>https://randu.me/sewell-hd-link-hdmi-over-ip/</link><guid isPermaLink="false">5bbd6da22b280c0c69dc4ac4</guid><dc:creator><![CDATA[Brad Murray]]></dc:creator><pubDate>Wed, 10 Oct 2018 03:16:07 GMT</pubDate><media:content url="https://randu.me/content/images/2018/10/box2-1.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="summary">Summary</h3><img src="https://randu.me/content/images/2018/10/box2-1.jpg" alt="Sewell HD Link HDMI over IP"><p><a href="https://www.amazon.com/gp/product/B00UKEO32M/">Devices </a>link at 100mbit, with a data rate of around 60-80mbps</p><p>Video quality is good, but compression artifacts are noticeable on text with dark backgrounds at close viewing distances</p><p>Latency is pretty good for a device like this in this price range (~$110)... ~65ms = ~2 frames @ 30fps.</p><p>Port isolation / filtering is necessary if other devices are on the same switch</p><p>Sewell claims they have tested with 8 receivers without issue</p><h3 id="box-contents">Box / Contents</h3><p>One nice touch -- they include both an IR receiver and transmitter so you can control IR equipment if the transmitter is in a cabinet.</p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/box1.jpg" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/box2.jpg" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure><h3 id="latency">Latency</h3><p>Tested by connecting the HDMI out of a laptop to the transmitter, passing through to the receiver with a crossover cable.
&#xA0;I didn&apos;t test it after running through the switch, but it shouldn&apos;t have any significant effect.</p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/image.png" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure><h3 id="video-quality"><br>Video Quality</h3><p>Compression artifacts were visible on dark backgrounds when viewed at close range. &#xA0;Bear in mind these were taken close up on a 42in 1080p TV... close enough that you can see the RGB dots and grid for each pixel. &#xA0;This won&apos;t be a problem at normal viewing distances, but is not ideal.<br></p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/IMG_20180805_113408.jpg" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/IMG_20180805_113438.jpg" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure><h3 id="network"><br><br>Network</h3><p>I initially tried to test with the transmitter connected directly to my main switch (Mikrotik CSS326) and the receiver connected to my cheap 5x1G TRENDnet switch in my office. &#xA0;The heavy broadcast traffic killed the CPU on my Netgear R7000 running DD-WRT. &#xA0;</p><p>After connecting both the transmitter and receiver to the Mikrotik and isolating the ports, everything went back to normal. &#xA0;These devices will need to be isolated either by physical switch or VLAN to prevent problems with other network devices.</p><p>If you wanted to switch multiple transmitters, the best bet would probably be to use a managed switch and VLANs.<br></p><figure class="kg-card kg-image-card"><img src="https://randu.me/content/images/2018/10/solarwinds.png" class="kg-image" alt="Sewell HD Link HDMI over IP" loading="lazy"></figure>]]></content:encoded></item></channel></rss>