<p><em>David Wengier, wengier.com</em></p>
<h1 id="debugging-iis-rewrite-rules">Debugging IIS Rewrite Rules</h1>
<p><em>2018-05-21</em></p>
<p>Where I work we have a number of ASP.NET web applications that run different parts of our site, so that we get some segregation of code and containment of scope without one enormous monolithic project that holds everything intermingled together. It’s nothing too exciting technically, but the marketing department also needs to be able to present the entire site as a whole to visitors, and to the Google bot, for that sweet sweet SEO juice (and easier navigation and other less cynical reasons, I’m sure). The way we achieve that is with prodigious use of the IIS URL Rewrite engine, which allows us to create a set of rules that take incoming HTTP requests and either route them through to different web applications, or different virtual paths, or stop some in their tracks entirely.</p>
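<p>For a flavour of what these rules look like, here is a sketch of a rule that routes part of the public URL space through to a separate application. The rule name, paths, and application alias are hypothetical, but the schema is the standard IIS URL Rewrite one:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><rewrite>
  <rules>
    <!-- Hypothetical rule: serve /blog/* from a separate "BlogApp" application -->
    <rule name="BlogToBlogApp" stopProcessing="true">
      <match url="^blog/(.*)$" />
      <action type="Rewrite" url="/BlogApp/{R:1}" />
    </rule>
  </rules>
</rewrite>
</code></pre></div></div>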
<p>There are lots of documentation and examples on the web about setting these up, and I certainly don’t claim to be an expert in the full range of their capabilities. One thing I do know, though, is that whilst they are fantastic when they are working, sitting there happily doing their job without complaint, when adding new ones it can be a bit of a mystery as to whether they are working at all. Additionally, because we use them to consolidate a lot of different applications and URLs into one coherent set of public URLs, running them locally quickly ends up with requests to local environments being redirected to live environments, with no real way of knowing whether that’s because the rules are working perfectly, or simply because the request fell through to some catch-all at the end.</p>
<p>Fortunately there is a way to debug the rules, or at least get logging out of the engine, albeit a little hidden.</p>
<h2 id="not-all-failures-are-failures">Not all failures are failures</h2>
<p>The answer lies in the IIS Failed Request Tracing feature and the fact that it can be configured to trace successful requests just as easily as failed ones. The feature can be accessed through the IIS Manager, or the configuration can be specified in the <code class="language-plaintext highlighter-rouge">web.config</code> file of your application.</p>
<p><img src="https://wengier.com/images/posts/FRT.png" alt="Failed Request Tracing" /></p>
<p>The module itself has quite a nice wizard to guide you through setting up a new rule; however, for debugging rewrites in the way that I want to, it’s a little unintuitive, so I’ll detail exactly what I did.</p>
<p>The first step is straightforward enough: you select which filenames you want to trace. In the modern era of MVC and WebApi this feels a little antiquated, since file names are a bit naff, so it’s probably best, and certainly easiest, to just leave this selection on “All content”.</p>
<p>The second screen is where the real magic happens:</p>
<p><img src="https://wengier.com/images/posts/frt-step-2.png" alt="Failed Request Tracing Step 2" /></p>
<p>The first input option here is to specify which HTTP status codes should be traced, and this is where we flip the “failure” title on its head. By specifying a successful code here (i.e. 2xx or 3xx) we get tracing for successful requests and not just failed ones.</p>
<p>Depending on how much logging you want, you could narrow this down to just the statuses you want to track, for example specifying just 301 to trace permanent redirects, or you could widen it. I think starting as wide as you can and specifying <code class="language-plaintext highlighter-rouge">200-399</code> for this value is best: that way, even if something is wrong with the new permanent redirect rule you’re adding and the request falls through to a different rewrite rule, you’ll still get the logs.</p>
<p>If the requests you’re trying to trace are getting through to your site and resulting in errors or bad URLs, you might also want to add <code class="language-plaintext highlighter-rouge">404</code> and <code class="language-plaintext highlighter-rouge">500</code> to the list.</p>
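<p>In the <code class="language-plaintext highlighter-rouge">web.config</code> form of this configuration, the status code list is just a comma-separated set of codes and ranges, so a widened definition might look something like this (a sketch; adjust the codes to taste):</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><!-- Trace successful requests and redirects, plus 404s and 500s -->
<failureDefinitions timeTaken="00:00:00" statusCodes="200-399,404,500" />
</code></pre></div></div>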
<p><img src="https://wengier.com/images/posts/frt-step-3.png" alt="Failed Request Tracing Step 3" /></p>
<p>The third screen allows for the selection of which IIS modules you want to trace, so in order to keep some of the noise out of the log it’s best to untick everything except <code class="language-plaintext highlighter-rouge">Rewrite</code> and <code class="language-plaintext highlighter-rouge">RequestRouting</code>. Leave the verbosity at Verbose, mainly because it’s fun to say “verbose verbosity”.</p>
<p>And that’s it, you’re all configured. You can also configure the equivalent of all of this in the <code class="language-plaintext highlighter-rouge">web.config</code> file with the following config:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><tracing></span>
  <span class="nt"><traceFailedRequests></span>
    <span class="nt"><add</span> <span class="na">path=</span><span class="s">"*"</span><span class="nt">></span>
      <span class="nt"><traceAreas></span>
        <span class="nt"><add</span> <span class="na">provider=</span><span class="s">"WWW Server"</span> <span class="na">areas=</span><span class="s">"Rewrite,RequestRouting"</span> <span class="na">verbosity=</span><span class="s">"Verbose"</span> <span class="nt">/></span>
      <span class="nt"></traceAreas></span>
      <span class="nt"><failureDefinitions</span> <span class="na">timeTaken=</span><span class="s">"00:00:00"</span> <span class="na">statusCodes=</span><span class="s">"200-399"</span> <span class="nt">/></span>
    <span class="nt"></add></span>
  <span class="nt"></traceFailedRequests></span>
<span class="nt"></tracing></span>
</code></pre></div></div>
<p>Finally, make sure the feature itself is enabled by clicking “Edit Site Tracing…” in the right-hand panel and ticking the Enabled checkbox. If it’s already enabled then great, but the screen is still useful for grabbing the path to the log files, which by default is <code class="language-plaintext highlighter-rouge">%SystemDrive%\inetpub\logs\FailedReqLogFiles</code>.</p>
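<p>If you prefer config files to clicking, the same site-level switch lives in <code class="language-plaintext highlighter-rouge">applicationHost.config</code>. This is a sketch with a hypothetical site name; double check the attribute names against your IIS version:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><sites>
  <!-- "MySite" is hypothetical; this enables the tracing feature for the site -->
  <site name="MySite" id="1">
    <traceFailedRequestsLogging enabled="true" directory="%SystemDrive%\inetpub\logs\FailedReqLogFiles" />
  </site>
</sites>
</code></pre></div></div>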
<h2 id="looking-at-the-log-files">Looking at the log files</h2>
<p>The Failed Request Tracing module logs to XML files, which I definitely don’t recommend looking at in raw form. Fortunately IIS also generates an XSL file for you which nicely formats the logs into something vaguely readable. I’ve found the easiest way to view the logs is simply to open the XML file in Internet Explorer (yes, I know), as it will automatically find and apply the XSL file, whereas Chrome did not.</p>
<p>The logs themselves are quite verbose, as we requested:</p>
<p><img src="https://wengier.com/images/posts/frt-output.png" alt="Failed Request Tracing Sample Log" /></p>
<p>You’ll see the output for each rule you have in your rewrite configuration, including the input values and the patterns matched against. You can see whether each one succeeded, though it’s worth noting that you need to apply the <code class="language-plaintext highlighter-rouge">negate</code> value yourself, so a negated rule might say “Succeeded: false” and you have to remember that this means the rule as you wrote it did in fact match.</p>
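<p>For example, consider this sketch of a classic non-www redirect (domain hypothetical). The trace reports the raw pattern match, so when the host <em>doesn’t</em> start with “www.” the log shows “Succeeded: false” for the condition, even though the negated condition, and therefore the rule, actually fired:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><!-- Redirect any request whose host does NOT start with "www." -->
<rule name="AddWww" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^www\." negate="true" />
  </conditions>
  <action type="Redirect" url="https://www.example.com/{R:1}" redirectType="Permanent" />
</rule>
</code></pre></div></div>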
<p>Hopefully after trawling through the file you can work out what is going on, though personally I found it easier to search the file for the rule I thought was probably at fault rather than scanning through them all; then again, I’m coming from a codebase with a <em>lot</em> of rules.</p>
<h1 id="promoting-binaries-and-hotfixable-deployments">Promoting Binaries and Hotfixable Deployments</h1>
<p><em>2018-05-07</em></p>
<p>There are two different schools of thought when it comes to deploying to production environments. Well okay, we’re developers, so there are probably 100 different schools of thought, but bear with me. One option is to promote the same binaries from testing, through staging, and all the way to production; the other is to maintain a branch in your source repository for the current state of production, and deploy from that. The general thinking is that with the former you get safety in knowing that your production deployment is <em>exactly</em> what has been through your testing cycles, and with the latter you’re always in a position to hotfix and correct a production issue regardless of what state your testing branch might be in.</p>
<p>Fortunately it’s an argument that can be avoided; instead you can set up an environment where you get the best of both worlds: predictable results from promoting binaries to production, and an insurance policy in case you need to hotfix. I am using <a href="https://www.jetbrains.com/teamcity/">TeamCity</a> and <a href="https://octopus.com/">Octopus Deploy</a> to do this but the ideas are the same no matter what technology you use.</p>
<h2 id="commit-hashes-are-important">Commit hashes are important</h2>
<p>One of the best pieces of advice I have for anyone setting up any kind of CI/CD, automation, or devops workflow is to get your commit hashes into your binaries and packages as early and as often as possible. Having a known identifier that can track binaries and directly correlate them to source code is invaluable for all sorts of things, but in this case it’s especially important so that the build server and deployment packages know what each other is talking about.</p>
<p>To get commit hashes into your build output in TeamCity is as straightforward as configuring a setting on the build in question. The “Build number format” setting dictates how TeamCity should format build numbers in its output, and also the format of the <code class="language-plaintext highlighter-rouge">%build.number%</code> variable that you can use in, or pass in to, scripts and build steps. The normal approach for a build number would be something like <code class="language-plaintext highlighter-rouge">1.0.%build.counter%</code>, where the major and minor versions are hardcoded to 1.0, and the build counter increments automatically with every build. Personally I’m a fan of using something like GitVersion to allow the number of commits to be used instead of the build counter, as it gives resiliency across build server reinstalls, but that’s for another discussion.</p>
<p>Tagging the commit hash on the end is done by adding a hyphen after the build counter, and then inserting the commit hash. In TeamCity this is the <code class="language-plaintext highlighter-rouge">%build.vcs.number%</code> variable, so our full build number format is as follows.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>1.0.%build.counter%-%build.vcs.number%
</code></pre></div></div>
<p>This will give a build number something like <code class="language-plaintext highlighter-rouge">1.0.134-770ac6d169006ce42b5bbc022a6a166135bbe8a7</code>. Success, in that we have the commit hash in the build number, but it’s a bit ugly and unnecessarily long. You only need around 7 or 8 characters to be unique for most repos (the Linux kernel is starting to need 12, but it has hundreds of thousands of commits) so I like to shorten the hash down a bit. Doing this in TeamCity is a little unintuitive, as there are no operations that can be performed in the simple macro language you use to specify the build number format. To change the build number you need to use a build step and the TeamCity feature called <a href="https://confluence.jetbrains.com/display/TCD10/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-servMsgsServiceMessages">service messages</a>: a standard pre-defined structure of output, written to standard output, that TeamCity will pick up and process. I’ve done this with a short PowerShell script as the first step in each build I define.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Old build number was: %build.number%"</span><span class="w">
</span><span class="nv">$buildNumber</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"1.0.%build.counter%"</span><span class="w">
</span><span class="nv">$shortHash</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"%build.vcs.number%"</span><span class="w">
</span><span class="nv">$shortHash</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$shortHash</span><span class="o">.</span><span class="nf">substring</span><span class="p">(</span><span class="nx">0</span><span class="p">,</span><span class="w"> </span><span class="nx">10</span><span class="p">)</span><span class="w">
</span><span class="nv">$buildNumber</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="nv">$buildNumber</span><span class="s2">-</span><span class="nv">$shortHash</span><span class="s2">"</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"New build number is: </span><span class="nv">$buildNumber</span><span class="s2">"</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"##teamcity[buildNumber '</span><span class="nv">$buildNumber</span><span class="s2">']"</span><span class="w">
</span></code></pre></div></div>
<p>My script is overly long because of the debugging output but I find build logs verbose enough already so keeping a couple of lines out of it isn’t worth worrying about. Strictly speaking the whole script could be a one-liner.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"##teamcity[buildNumber '1.0.%build.counter%-</span><span class="si">$(</span><span class="s2">"%build.vcs.number%"</span><span class="o">.</span><span class="nf">substring</span><span class="p">(</span><span class="nx">0</span><span class="p">,</span><span class="w"> </span><span class="nx">10</span><span class="p">)</span><span class="si">)</span><span class="s2">']"</span><span class="w">
</span></code></pre></div></div>
<p>I’m keeping the short hash at 10 characters for no good reason; you could easily change that to whatever you desire. It’s worth noting that with this as the first step of the build plan, the “Build number format” setting is rendered effectively useless for all but the first few seconds of the build, until this script runs. With the script in place the build number will now be <code class="language-plaintext highlighter-rouge">1.0.134-770ac6d169</code>.</p>
<h2 id="pass-hashes-through-to-octopus">Pass hashes through to Octopus</h2>
<p>Now that we have our short build number its important to use that in the version number for any package pushed to Octopus, and the release made from those packages. This gives full traceability from git commit, to build, through to deployment. If you also use something like <a href="https://github.com/AArnott/Nerdbank.GitVersioning">NerdBank.GitVersioning</a> you can tag your DLLs with the same commit hash, which means you can also include it in your application logs or audit tracking.</p>
<p>With the version number in the package being deployed in Octopus, we can now create a PowerShell script and put it in the process for a production deployment. That script fast-forwards the master branch to the specific commit that has been deployed, guaranteeing that the master branch will point at exactly where the develop branch was when that package was built.</p>
<p><img src="../images/posts/fast-forward-to-master-step.png" alt="Fast Forward to Master Step" /></p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Set-Location</span><span class="w"> </span><span class="nt">-Path</span><span class="w"> </span><span class="s2">"<path to source repository>"</span><span class="w">
</span><span class="nv">$vers</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="si">$(</span><span class="nv">$OctopusParameters</span><span class="p">[</span><span class="s2">"Octopus.Release.Number"</span><span class="p">]</span><span class="si">)</span><span class="s2">"</span><span class="o">.</span><span class="nf">Split</span><span class="p">(</span><span class="s2">"-"</span><span class="p">)[</span><span class="mi">1</span><span class="p">]</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Version is: </span><span class="nv">$vers</span><span class="s2">"</span><span class="w">
</span><span class="nv">$commitHash</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$vers</span><span class="o">.</span><span class="nf">Substring</span><span class="p">(</span><span class="nv">$vers</span><span class="o">.</span><span class="nf">IndexOf</span><span class="p">(</span><span class="s2">"-"</span><span class="p">)</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">1</span><span class="p">)</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"This release is from commit hash: </span><span class="nv">$commitHash</span><span class="s2">"</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Fetching latest origin just to be sure"</span><span class="w">
</span><span class="n">git</span><span class="w"> </span><span class="nx">fetch</span><span class="w"> </span><span class="nx">origin</span><span class="w"> </span><span class="nt">--prune</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Resetting to current master"</span><span class="w">
</span><span class="n">git</span><span class="w"> </span><span class="nx">reset</span><span class="w"> </span><span class="nx">origin/master</span><span class="w"> </span><span class="nt">--hard</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Fast forwarding to </span><span class="nv">$commitHash</span><span class="s2">"</span><span class="w">
</span><span class="n">git</span><span class="w"> </span><span class="nx">merge</span><span class="w"> </span><span class="nv">$commitHash</span><span class="w"> </span><span class="nt">--ff-only</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Pushing back to origin"</span><span class="w">
</span><span class="n">git</span><span class="w"> </span><span class="nx">push</span><span class="w"> </span><span class="nx">origin</span><span class="w">
</span><span class="n">Write-Host</span><span class="w"> </span><span class="s2">"Finished"</span><span class="w">
</span></code></pre></div></div>
<p>The script needs to be run in a git working copy and assumes master is checked out, though that could be added easily enough. I could have reset to the specific commit and just pushed that, but I like the extra protection that <code class="language-plaintext highlighter-rouge">--ff-only</code> provides. It ensures that if anything goes wrong with the working copy, or the script gets run at an incorrect time, at least no commits will be lost that would require navigating the reflog to recover. There might be a better way to achieve this, or perhaps the worry is for nothing, but I don’t profess to be a git expert.</p>
<h2 id="hotfixes-are-now-just-another-build">Hotfixes are now just another build</h2>
<p>Now that master is at the point of the deployed production build, hotfix branches can be created from, and merged back into, the master branch, which can then be built and deployed with the normal build and deployment process, safe in the knowledge that any changes made to the develop branch will not be included. In an ideal world develop remains deployable and this process isn’t needed, but an insurance policy is a good idea and, in this case, cheap to have. I’ve set up a separate build in TeamCity for the master branch that is not automatically triggered, which is best considering each production deploy will change the master branch.</p>
<p><img src="../images/posts/hotfix-lifecycle.png" alt="HotFix Lifecycle" /></p>
<p>The hotfix build releases on a hotfix channel in Octopus so that it can deploy direct to staging, avoiding test. This way test still maps to the develop branch so that process isn’t interrupted. Specifying the channel to use is a matter of setting the right parameters in the TeamCity build step that does your Octopus release creation.</p>
<p><img src="../images/posts/push-to-octopus.png" alt="Specify Channel in Octopus" /></p>
<p>The only issue that I ran into with this is that because I’m not using a “smart” build number, but instead just a numerically increasing build counter, the first hotfix build didn’t actually get deployed by Octopus. Looking at the TeamCity and Octopus logs it was clear that while the build and release versions were correct, when it came time to pick which packages went into a release Octopus saw the hotfix build as being older than the last develop build, simply because of the build counter.</p>
<p>To solve this I configured the Octopus release creator to force a package version to use. Since we have commit hashes at every step of the way the actual version numbers all become rather irrelevant so this feels like a perfectly safe thing to do. In theory if two releases point to the same commit hash, it doesn’t matter if one is v2.0.1 and the other is v3.56.231, they have the same code and therefore will function the same way.</p>
<p><img src="../images/posts/push-to-octopus-advanced-options.png" alt="Advanced Octopus Options" /></p>
<p>You might need to click “Show Advanced Options” in TeamCity to get this item to appear.</p>
<h2 id="hope-for-the-best-plan-for-the-worst">Hope for the best, plan for the worst</h2>
<p>Now we have a situation where the develop branch is built and deployed automatically, as often as we like. We know the commit hash at every step of the way, so we can map everything back to the raw source commit, and we have our insurance policy in place if things go wrong, via the moving master branch and a hotfix build available for manual triggering.</p>
<h1 id="multi-targeting-builds">Targeting builds for multiple frameworks and machines</h1>
<p><em>2018-04-30</em></p>
<p>I’ve recently started working on a new project in my spare time, <a href="https://github.com/davidwengier/dbupgrader">DbUpgrader</a>, and I’m trying to work on it for at least a few minutes every night. I variously use a MacBook Pro or a Windows machine, and sometimes I use Visual Studio 2017 but sometimes I’m just using Visual Studio Code and mucking around on the console. I’d also like to try out Visual Studio for Mac sometime soon. All of these different environments have their advantages and features, but I mostly want to make sure that I can work in all of them, on the same project, without issue.</p>
<p>Enter the <a href="https://github.com/dotnet/project-system">new project system</a> in Visual Studio which allows for minimal .csproj files that remain easily editable MSBuild targets without having to compromise and have separate build scripts for each scenario. The challenge I set myself was to see if I could create a single solution with projects that fulfilled the following needs:</p>
<ul>
<li>Opens in Visual Studio on Windows without error</li>
<li>Builds in Visual Studio without issue</li>
<li>Tests appear in the Test Explorer in Visual Studio and tests run as expected</li>
<li>Works with <code class="language-plaintext highlighter-rouge">dotnet build</code> on Mac and Windows</li>
<li>Works with <code class="language-plaintext highlighter-rouge">dotnet test</code> on Mac and Windows</li>
</ul>
<p>This may seem easy, but it’s slightly complicated by the fact that I want to support not only the full .NET Framework 4.6 on Windows, but also .NET Core on Mac and Windows, without the .NET 4.6 support being an issue on Mac. To support .NET 4.6 the shared libraries need to be .NET Standard 1.3 or lower, but I also have some functionality and tests that use <code class="language-plaintext highlighter-rouge">Microsoft.Data.Sqlite</code>, which is .NET Standard 2.0 and therefore incompatible with .NET 4.6. So on Windows I want a build for .NET 4.6 without Sqlite support and a build for .NET Core with it, and on Mac a build for .NET Core with Sqlite support and no errors relating to missing .NET Framework support.</p>
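<p>Concretely, that means the shared library project files target .NET Standard 1.3 and stay minimal. This is a sketch of the shape of such a project file, with the actual contents elided:</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><!-- Shared library: netstandard1.3 so it is consumable from both net46 and netcoreapp2.0 -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.3</TargetFramework>
  </PropertyGroup>
</Project>
</code></pre></div></div>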
<h2 id="multi-targeting-means-multi-builds">Multi-targeting means multi-builds</h2>
<p>The easiest way to think about multi-targeting in the new project system is to remember this simple fact: each target framework acts like it’s a duplicate of the whole project. Consider a .csproj file with the following declaration.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><TargetFrameworks></span>net46;netcoreapp2.0<span class="nt"></TargetFrameworks></span>
</code></pre></div></div>
<p>When building this project MSBuild will run the build twice, once for .NET Framework 4.6 (net46) and once for .NET Core (netcoreapp2.0). Knowing this helps explain the logic of how the project file should be laid out in order to change what is built for each target.</p>
<p>In my case I want the Sqlite code to only be built for netcoreapp2.0 because it needs to target .NET Standard 2.0, and net46 is not quite at that level. The full table of versions and what they support is <a href="https://github.com/dotnet/standard/blob/master/docs/versions.md">on GitHub</a> but suffice to say that net46 maps to .NET Standard 1.3.</p>
<p>Armed with this information we know that we need to exclude the Sqlite dependencies and files when building for net46 and this is done with a <code class="language-plaintext highlighter-rouge">Condition</code> attribute on the relevant spots in the project file.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><ItemGroup</span> <span class="na">Condition=</span><span class="s">"'$(TargetFramework)' == 'netcoreapp2.0'"</span><span class="nt">></span>
<span class="nt"><ProjectReference</span> <span class="na">Include=</span><span class="s">"..\..\src\DbUpgrader.Sqlite\DbUpgrader.Sqlite.csproj"</span> <span class="nt">/></span>
<span class="nt"></ItemGroup></span>
</code></pre></div></div>
<p>Here I am instructing the project system to only reference the Sqlite project if the target framework of the build is netcoreapp2.0. This is where thinking about the targets as separate builds makes sense. When it’s passing through this file building for net46, MSBuild will see that the condition is not met and simply skip over this part of the file; no reference will be added. When building for netcoreapp2.0 the reference will be added.</p>
<h2 id="excluding-files">Excluding files</h2>
<p>That’s all well and good for the reference, but obviously if the reference is there then there must be files that use it. Because the new project system doesn’t need specific file inclusions, it’s unlikely that you would have a node to which a condition can be added, so we need to be a bit creative.</p>
<p>You can use an <code class="language-plaintext highlighter-rouge">Exclude</code> attribute on a <code class="language-plaintext highlighter-rouge"><Compile></code> element alongside the normal <code class="language-plaintext highlighter-rouge">Include</code>, but I found the usage of that a bit ugly, and since by default there aren’t any <code class="language-plaintext highlighter-rouge"><Compile></code> elements needed in Sdk projects it seemed a bit clunky to add one back in. The solution I settled on was to simply update the <code class="language-plaintext highlighter-rouge">DefaultItemExcludes</code> property that already exists and is already used by the default project. The glob support in the new system makes this a breeze too, needing only a single addition to exclude multiple files and folders/subfolders.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><PropertyGroup</span> <span class="na">Condition=</span><span class="s">"'$(TargetFramework)' == 'net46'"</span><span class="nt">></span>
<span class="nt"><DefaultItemExcludes></span>$(DefaultItemExcludes);Integration\Sqlite\**\*<span class="nt"></DefaultItemExcludes></span>
<span class="nt"></PropertyGroup></span>
</code></pre></div></div>
<p>Since we’re now telling MSBuild to <em>exclude</em> items we don’t want, we flip the condition so it’s based on net46. These two things combined mean the project includes everything we want when building for .NET Core, and doesn’t include the wrong things when building for .NET Framework.</p>
<h2 id="targeting-the-targets">Targeting the targets</h2>
<p>If the conditions so far have been based on the frameworks being targeted, then how do you make the targets conditional? To do that you need something at a higher level and fortunately the operating system fills this role perfectly. We can tell MSBuild to build .NET Core and .NET Framework on Windows, just .NET Core on a Mac, and everything will flow correctly from there based on whichever target is being built at the time. The conditions look very similar too.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><TargetFrameworks></span>netcoreapp2.0<span class="nt"></TargetFrameworks></span>
<span class="nt"><TargetFrameworks</span> <span class="na">Condition=</span><span class="s">"'$(OS)' != 'Unix'"</span><span class="nt">></span>net46;netcoreapp2.0<span class="nt"></TargetFrameworks></span>
</code></pre></div></div>
<p>Two things to note here: the first is that the OS for Mac is “Unix”. This surprised me, but is not a big deal. I originally guessed that it would be “Mac”, and when that didn’t work I simply added a build task to my project file and observed what the output was. The task is as follows, and it’s run by specifying <code class="language-plaintext highlighter-rouge">InitialTargets="LogDebugInfo"</code> in the <code class="language-plaintext highlighter-rouge"><Project></code> node. It’s a good reminder that these csproj files are also simply MSBuild scripts and can be treated as such, though double check Visual Studio is still happy afterwards.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><Target</span> <span class="na">Name=</span><span class="s">"LogDebugInfo"</span><span class="nt">></span>
<span class="nt"><Message</span> <span class="na">Text=</span><span class="s">"Building for $(TargetFramework) on $(OS)"</span> <span class="na">Importance=</span><span class="s">"High"</span> <span class="nt">/></span>
<span class="nt"></Target></span>
</code></pre></div></div>
<p>Secondly you’ll notice that there is only a condition on one of the elements. This was not what I tried first, as I assumed that there would be problems having duplicated elements without conditions to differentiate them. Indeed, whilst having conditions on both worked fine in the <code class="language-plaintext highlighter-rouge">dotnet build</code> world (on Mac and Windows), Visual Studio itself got very confused. I posted about it on Twitter and the very helpful <a href="https://twitter.com/davkean">David Kean</a> who works for Microsoft on the new project system <a href="https://twitter.com/davkean/status/987820416579223552">pointed</a> me to <a href="https://github.com/dotnet/project-system/issues/1829">this GitHub issue</a> explaining that I’d hit a bug. It wasn’t a big deal to remove one condition; I just had to make sure the order was right. Having two <code class="language-plaintext highlighter-rouge"><TargetFrameworks></code> elements means the second one overrides the first, so in order for Windows to still get net46 support it had to come last.</p>
<p>It looks like as long as the project file has one element without a condition, Visual Studio (at least v15.6.7, which I’m trying this on) is happy, though I suspect the IDE thinks I’m developing for .NET Core only. When building from Visual Studio however, since it just runs MSBuild, there is no issue. In theory this could mean that the IDE could mark something as correct and have the build subsequently fail, or vice versa, but that’s a minor price to pay for the flexibility and I presume it would only be a temporary problem until the build is fixed.</p>
<h2 id="i-like-your-new-stuff-better-than-your-old-stuff">I like your new stuff better than your old stuff</h2>
<p>In general the new project system is great, and I love being able to edit the project file while it’s open in Visual Studio and see the changes take effect immediately. More broadly, getting people to think about project files and build files is a good thing as it encourages the “devops mindset” which I’m personally a fan of, and think every developer should try to attain.</p>
<p>But that’s commentary for another time.</p>I’ve recently started working on a new project in my spare time, DbUpgrader, and I’m trying to work on it for at least a few minutes every night. I variously use a MacBook Pro or Windows machine, and sometimes I use Visual Studio 2017 but sometimes I’m just using Visual Studio Code and mucking around on the console. I’d like to also try out Visual Studio for Mac sometime soon. All of these different environments have their advantages and features, but I mostly want to make sure that I can work in all of them, on the same project, without issue.Codify your coding standards with .editorconfig2018-04-23T00:00:00+00:002018-04-23T00:00:00+00:00https://www.wengier.com/codify-your-coding-standards<p>Every dev team has coding standards. Sometimes they’re established through convention, tradition or example, and maybe sometimes there is even a formal document outlining them (hopefully in a living format that can be updated!). No matter how it’s done though, nobody wants to be the bad guy in code reviews or pull requests and pull people up for what are usually minor infractions; at the same time, however, nobody wants to see a codebase be neglected and let inconsistency creep in, or readability wane.</p>
<p>Visual Studio has many excellent rules and formatting options to enable it to be fully configured to match your coding standards and conventions, but in a team environment it can be a pain to keep everything in sync. There are “team settings file” options which work most of the time, but it’s not perfect and it still requires everyone to configure Visual Studio to use that shared file any time they join a team, or reinstall their machine.</p>
<p>Fortunately there is a way to enforce some coding standards at the tooling level without these concerns: Visual Studio 2017 now honours the configuration in a .editorconfig file, which overrides an individual developer’s settings and tells the IDE how to behave on a per-repository basis. The .editorconfig file is simply committed to the root of the repository and from then on it dictates things like indentation, formatting, style and naming rules. Not all IDEs will support all of the same features but the list on <a href="http://editorconfig.org/#download">the official site</a> is certainly impressive.</p>
<p>In this post I’ll be talking about how to codify some specific .NET related rules for Visual Studio. For more detailed information the <a href="https://docs.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options">official documentation</a> is great, though I might be biased since it’s where I submitted my first ever PR to the docs project.</p>
<h2 id="naming-rules">Naming Rules</h2>
<p>Naming rules allow you to codify the standards around naming and casing of fields, properties, constants etc. in your codebase. Each naming rule needs a name, which is specified in lower case with underscores, a severity, and a style to apply. For example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet_naming_rule.public_members_must_be_pascal.severity = error
dotnet_naming_rule.public_members_must_be_pascal.symbols = public_symbols
dotnet_naming_rule.public_members_must_be_pascal.style = pascal_style
</code></pre></div></div>
<p>In this example <code class="language-plaintext highlighter-rouge">dotnet_naming_rule</code> denotes that we’re defining part of a rule, <code class="language-plaintext highlighter-rouge">public_members_must_be_pascal</code> is the name of our rule, and we’re going to apply it to symbols that match the <code class="language-plaintext highlighter-rouge">public_symbols</code> naming symbols which we’ll define later. We want this rule to be enforced at all times so the <code class="language-plaintext highlighter-rouge">severity</code> is <code class="language-plaintext highlighter-rouge">error</code>, which means Visual Studio will treat violations the same as it treats compiler errors. Lastly we’ve said that things that match this rule should use the style defined in <code class="language-plaintext highlighter-rouge">pascal_style</code> which is the name we will give to our style.</p>
<h2 id="naming-styles">Naming Styles</h2>
<p>Naming styles define how a developer should format symbols that match any applied rules. Like naming rules they have a name, and they can then specify prefixes, suffixes, word separators and capitalization rules. In this case we simply need to define the capitalization rule of <code class="language-plaintext highlighter-rouge">pascal_case</code> like so:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet_naming_style.pascal_style.capitalization = pascal_case
</code></pre></div></div>
<p>Again <code class="language-plaintext highlighter-rouge">dotnet_naming_style</code> means we’re defining a style and <code class="language-plaintext highlighter-rouge">pascal_style</code> is the name of the style which we used in the rule.</p>
<h2 id="naming-symbols">Naming Symbols</h2>
<p>The final piece of the puzzle tells Visual Studio which symbols the rule should apply to. For our <code class="language-plaintext highlighter-rouge">public_symbols</code> we need to specify the accessibility to be public, and that we want the rule to apply to properties, methods, fields, events and delegates. We could probably also add in classes, structs and enums to this.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dotnet_naming_symbols.public_symbols.applicable_kinds = property,method,field,event,delegate
dotnet_naming_symbols.public_symbols.applicable_accessibilities = public
</code></pre></div></div>
<p>Naming symbols also allow you to specify <code class="language-plaintext highlighter-rouge">required_modifiers</code> so that you can target static, readonly, async or const symbols differently.</p>
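<p>As a quick sketch of how that might look - note that the rule, symbol and style names here are my own inventions, not built-in values - a hypothetical group targeting private constants could be defined like this:</p>

```ini
# Hypothetical example: require private constants to be PascalCase.
# "private_constants", "private_constants_must_be_pascal" and
# "pascal_style" are names I made up; only the option keys are real.
dotnet_naming_symbols.private_constants.applicable_kinds = field
dotnet_naming_symbols.private_constants.applicable_accessibilities = private
dotnet_naming_symbols.private_constants.required_modifiers = const

dotnet_naming_rule.private_constants_must_be_pascal.severity = warning
dotnet_naming_rule.private_constants_must_be_pascal.symbols = private_constants
dotnet_naming_rule.private_constants_must_be_pascal.style = pascal_style
```

<p>The <code class="language-plaintext highlighter-rouge">required_modifiers</code> line is what narrows the symbol group from all private fields down to just constants.</p>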
<h2 id="putting-it-all-together">Putting it all together</h2>
<p>Those three elements combined are what makes a rule fully codified and means Visual Studio can be the bad guy when it comes to enforcing coding standards. No more need to have arguments about whether constants are SHOUTING_AT_YOU or are ABitMoreSubtle, and you can end the age-old battle between <code class="language-plaintext highlighter-rouge">_fields</code> and <code class="language-plaintext highlighter-rouge">m_fields</code> etc.</p>
<p>Additionally naming symbols and styles can be used by multiple naming rules so you only need to define something like <code class="language-plaintext highlighter-rouge">pascal_style</code> once to apply a pascal case capitalization convention to a few different things.</p>
<p>Be warned however, if you’re introducing this to a legacy code base you need to tread carefully and probably just take the hit and fix all of the issues it raises in the same commit. Even if you set the severity to <code class="language-plaintext highlighter-rouge">warning</code> or <code class="language-plaintext highlighter-rouge">suggestion</code> at the very least you’ll be potentially filling up the error window with issues and it’s never a good idea to give anyone a reason to ignore things in the error window.</p>
<p>The .editorconfig file can also be used to specify indentation styles, brace usage and style, <code class="language-plaintext highlighter-rouge">var</code> usage and even whether <code class="language-plaintext highlighter-rouge">this.</code> is required to be used, or where System using statements should go. If you can spend the time to fill out all of the possibilities it makes life much easier in a team, as your codebase is immune to the quirks of individual dev machine configurations, and in open source projects it ensures contributors always match the style of the project they’re contributing to.</p>
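<p>For illustration, a few of those options might look like this - the values are just example preferences, not recommendations:</p>

```ini
# Example style and formatting options - values are illustrative only
[*.cs]
indent_style = space
indent_size = 4

# prefer var when the type is apparent from the right hand side
csharp_style_var_when_type_is_apparent = true:suggestion

# don't require this. qualification on field access
dotnet_style_qualification_for_field = false:suggestion

# put System using directives before all others
dotnet_sort_system_directives_first = true
```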
<p>A full example of the .editorconfig file I’m currently using for my personal projects can be found in the DbUpgrader project <a href="https://github.com/davidwengier/dbupgrader/blob/master/.editorconfig">here</a>.</p>
<h3 id="gotchas">Gotchas</h3>
<p>Some gotchas with setting up editor config files that I’ve found so far:</p>
<ul>
<li>If you specify constants should be pascal case then VS won’t error when a constant is all caps, since that’s still valid pascal case.</li>
<li>Ordering of rules in files seems to be inconsistent, so rules around private fields and constants sometimes overlap for private constants, and VS will think you’re doing the wrong thing.</li>
</ul>
<p>I will update the post if I find others.</p>Reviewable Stored Procedures and Views with DbUp2018-04-16T00:00:00+00:002018-04-16T00:00:00+00:00https://www.wengier.com/reviewable-sprocs<p>We use <a href="https://dbup.github.io/">DbUp</a> at work to manage database changes and migrations and for the most part it works fine as long as you have a known schema that you’re coming from. The downside of the current implementation is that changes to stored procedure definitions are not easily reviewable in source control. Fortunately enabling this workflow with DbUp is relatively straightforward.</p>
<h2 id="project-format">Project format</h2>
<p>Our DbUp project looks fairly standard:</p>
<p><img src="../images/posts/dbup-project.png" alt="DbUp Project" /></p>
<p>DbUp takes care of running the scripts and making sure none are run more than once via its built-in journaling system, a record of which is also stored in the database. The problem is that those “Alter Procedure” scripts all simply have a full copy of the stored procedure in them, even if those half-dozen files are all changing the same stored procedure.</p>
<p>The first step in enabling reviewable stored procs and views is to create a new folder for scripts that will be unjournaled, so they are always run whenever DbUp is run. I’m going to just call this StoredProcs for now as that’s the first thing I’ll be moving across.</p>
<p>The basic idea is that you use that folder for SQL scripts that contain a simple DROP and CREATE script for each stored procedure. DbUp runs these scripts every time, essentially making sure the database definition is always correct, and allowing developers to make changes to the existing scripts in the source repository, rather than having to create new ones all the time, as with normal migration scripts.</p>
<p><img src="../images/posts/drop-and-create-script.png" alt="DROP and CREATE Script" /></p>
<h2 id="the-dbup-script-runner">The DbUp script runner</h2>
<p>The existing DbUp script runner looks fairly basic, like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">upgrader</span> <span class="p">=</span> <span class="n">DeployChanges</span><span class="p">.</span><span class="n">To</span>
<span class="p">.</span><span class="nf">SqlDatabase</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithScriptsEmbeddedInAssembly</span><span class="p">(</span><span class="n">Assembly</span><span class="p">.</span><span class="nf">GetExecutingAssembly</span><span class="p">())</span>
<span class="p">.</span><span class="nf">LogToConsole</span><span class="p">()</span>
<span class="p">.</span><span class="nf">JournalToSqlTable</span><span class="p">(</span><span class="s">"dbo"</span><span class="p">,</span> <span class="s">"SchemaVersions"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithTransaction</span><span class="p">()</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
</code></pre></div></div>
<p>We need to add a new upgrader to this script and instead of storing the journal in a table we will use the <code class="language-plaintext highlighter-rouge">NullJournal</code> that is built into DbUp:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">storedProcUpgrader</span> <span class="p">=</span> <span class="n">DeployChanges</span><span class="p">.</span><span class="n">To</span>
<span class="p">.</span><span class="nf">SqlDatabase</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithScriptsEmbeddedInAssembly</span><span class="p">(</span><span class="n">Assembly</span><span class="p">.</span><span class="nf">GetExecutingAssembly</span><span class="p">())</span>
<span class="p">.</span><span class="nf">LogToConsole</span><span class="p">()</span>
<span class="p">.</span><span class="nf">JournalTo</span><span class="p">(</span><span class="k">new</span> <span class="nf">NullJournal</span><span class="p">())</span>
<span class="p">.</span><span class="nf">WithTransaction</span><span class="p">()</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
</code></pre></div></div>
<p>The last piece of the puzzle is to put a filter onto each upgrader so each one only loads the scripts we want. The final code looks like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">static</span> <span class="kt">int</span> <span class="nf">Main</span><span class="p">()</span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">connectionString</span> <span class="p">=</span> <span class="n">ConfigurationManager</span><span class="p">.</span><span class="n">ConnectionStrings</span><span class="p">[</span><span class="s">"ConnectionString"</span><span class="p">].</span><span class="n">ConnectionString</span><span class="p">;</span>
<span class="kt">var</span> <span class="n">upgrader</span> <span class="p">=</span> <span class="n">DeployChanges</span><span class="p">.</span><span class="n">To</span>
<span class="p">.</span><span class="nf">SqlDatabase</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithScriptsEmbeddedInAssembly</span><span class="p">(</span><span class="n">Assembly</span><span class="p">.</span><span class="nf">GetExecutingAssembly</span><span class="p">(),</span> <span class="n">s</span> <span class="p">=></span> <span class="p">!</span><span class="nf">IsStoredProc</span><span class="p">(</span><span class="n">s</span><span class="p">))</span>
<span class="p">.</span><span class="nf">LogToConsole</span><span class="p">()</span>
<span class="p">.</span><span class="nf">JournalToSqlTable</span><span class="p">(</span><span class="s">"dbo"</span><span class="p">,</span> <span class="s">"SchemaVersions"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithTransaction</span><span class="p">()</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
<span class="kt">var</span> <span class="n">storedProcUpgrader</span> <span class="p">=</span> <span class="n">DeployChanges</span><span class="p">.</span><span class="n">To</span>
<span class="p">.</span><span class="nf">SqlDatabase</span><span class="p">(</span><span class="n">connectionString</span><span class="p">)</span>
<span class="p">.</span><span class="nf">WithScriptsEmbeddedInAssembly</span><span class="p">(</span><span class="n">Assembly</span><span class="p">.</span><span class="nf">GetExecutingAssembly</span><span class="p">(),</span> <span class="n">s</span> <span class="p">=></span> <span class="nf">IsStoredProc</span><span class="p">(</span><span class="n">s</span><span class="p">))</span>
<span class="p">.</span><span class="nf">LogToConsole</span><span class="p">()</span>
<span class="p">.</span><span class="nf">JournalTo</span><span class="p">(</span><span class="k">new</span> <span class="nf">NullJournal</span><span class="p">())</span>
<span class="p">.</span><span class="nf">WithTransaction</span><span class="p">()</span>
<span class="p">.</span><span class="nf">Build</span><span class="p">();</span>
<span class="c1">// migrate the database data, and table schema changes first</span>
<span class="k">if</span> <span class="p">(!</span><span class="nf">UpgradeAndLog</span><span class="p">(</span><span class="n">upgrader</span><span class="p">))</span>
<span class="p">{</span>
<span class="k">return</span> <span class="m">1</span><span class="p">;</span>
<span class="p">}</span>
<span class="c1">// now we can change stored procs that rely on the adjusted schema</span>
<span class="k">if</span> <span class="p">(!</span><span class="nf">UpgradeAndLog</span><span class="p">(</span><span class="n">storedProcUpgrader</span><span class="p">))</span>
<span class="p">{</span>
<span class="k">return</span> <span class="m">1</span><span class="p">;</span>
<span class="p">}</span>
<span class="n">Console</span><span class="p">.</span><span class="n">ForegroundColor</span> <span class="p">=</span> <span class="n">ConsoleColor</span><span class="p">.</span><span class="n">Green</span><span class="p">;</span>
<span class="n">Console</span><span class="p">.</span><span class="nf">WriteLine</span><span class="p">(</span><span class="s">"Success!"</span><span class="p">);</span>
<span class="n">Console</span><span class="p">.</span><span class="nf">ResetColor</span><span class="p">();</span>
<span class="k">return</span> <span class="m">0</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">private</span> <span class="k">static</span> <span class="kt">bool</span> <span class="nf">UpgradeAndLog</span><span class="p">(</span><span class="n">DbUp</span><span class="p">.</span><span class="n">Engine</span><span class="p">.</span><span class="n">UpgradeEngine</span> <span class="n">upgrader</span><span class="p">)</span>
<span class="p">{</span>
<span class="kt">var</span> <span class="n">result</span> <span class="p">=</span> <span class="n">upgrader</span><span class="p">.</span><span class="nf">PerformUpgrade</span><span class="p">();</span>
<span class="k">if</span> <span class="p">(!</span><span class="n">result</span><span class="p">.</span><span class="n">Successful</span><span class="p">)</span>
<span class="p">{</span>
<span class="n">Console</span><span class="p">.</span><span class="n">ForegroundColor</span> <span class="p">=</span> <span class="n">ConsoleColor</span><span class="p">.</span><span class="n">Red</span><span class="p">;</span>
<span class="n">Console</span><span class="p">.</span><span class="nf">WriteLine</span><span class="p">(</span><span class="n">result</span><span class="p">.</span><span class="n">Error</span><span class="p">);</span>
<span class="n">Console</span><span class="p">.</span><span class="nf">ResetColor</span><span class="p">();</span>
<span class="k">return</span> <span class="k">false</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">return</span> <span class="k">true</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">private</span> <span class="kt">bool</span> <span class="nf">IsStoredProc</span><span class="p">(</span><span class="kt">string</span> <span class="n">scriptName</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">return</span> <span class="p">(</span><span class="n">scriptName</span><span class="p">.</span><span class="nf">StartsWith</span><span class="p">(</span><span class="s">"My.NameSpace.StoredProcs."</span><span class="p">,</span> <span class="n">StringComparison</span><span class="p">.</span><span class="n">OrdinalIgnoreCase</span><span class="p">));</span>
<span class="p">}</span>
</code></pre></div></div>
<h2 id="get-reviewing">Get reviewing</h2>
<p>Every change to the stored proc or view definition script will be just that - a change - so whatever source repository diff process you use will show only what has been done. Additionally you always have the current up-to-date definitions of your scripts in your source repository, so you’re one step closer to not having to worry about having a known good starting point for your database, at least from the schema point of view.</p>
<p>So far we’re rolling this out on a change-by-change basis, but there is no reason all of the relevant parts of the database couldn’t be scripted to seed this effort, giving you a known baseline.</p>
<p>This same theory applies to Views or Functions, or anything else where a migration script would need to contain the entire definition, and dropping the object is not a destructive operation.</p>I don’t want to remote into production2018-04-09T00:00:00+00:002018-04-09T00:00:00+00:00https://www.wengier.com/no-production<p>A friend of mine tweeted this article, an excellent summary, today about a recent production outage at Travis CI:</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">A read-only prod is a slightly safer prod, if you don't have access to truncate all tables, then it is less likely to happen :P<a href="https://t.co/7urWybtZA8">https://t.co/7urWybtZA8</a></p>— NullOps (@NullOpsio) <a href="https://twitter.com/NullOpsio/status/983237339634810880?ref_src=twsrc%5Etfw">April 9, 2018</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>I personally feel even stronger about this: I don’t want access to production, read only or otherwise. I don’t even want to install the database management software, be it SQL Server Management Studio or MySQL client, if I can help it.</p>
<p>I started a new job today and during the onboarding it was mentioned that production server access was locked down by IP, so if needed, it would have to be done via a VPN if I wasn’t in the office. Not to speak ill of my new employer, let me be clear: this was only mentioned as a “just in case” option, as part of an answer to someone else’s direct question, and not part of regular onboarding information that people need to know. In fact everyone there knows that it’s not a good idea, and something that should be worked on to remove in future, and helping to do that is a large part of my role, but I digress.</p>
<p>I said flat out: I don’t want access.</p>
<h2 id="i-dont-trust-myself-and-you-should-too">I don’t trust myself, and you should too</h2>
<p>Eventually everyone has that moment where they do the wrong thing. Perhaps they run an UPDATE statement without a WHERE clause, or they’re connected to the wrong environment when they tweak some configuration value. Most people, you hope, only make these sorts of mistakes once, and I’ve made mine (quite a few years ago, don’t worry!) and I don’t want to do it again. The easiest way I can think of to guarantee that is to simply make it impossible.</p>
<p>Yes, I can double check everything I do.
Yes, I can get people to check over my shoulder, or review.
Yes, I can work through checklists with well documented steps.</p>
<p>Or I can just make incorrect actions impossible. If I can’t remote to a server, I can’t be remoted to the wrong server. If I can’t connect to a database, I can’t forget part of a script or statement.</p>
<h2 id="if-you-want-something-fixed-make-it-a-problem">If you want something fixed, make it a problem</h2>
<p>The ideal situation for production environments (or indeed most other environments) is that their setup and configuration is completely automated and needs no manual work. By making manual work impossible you force people and teams to do the necessary work to create tooling to enable that. There can be no shortcuts taken and temptation is removed by virtue of a firm wall between developers and where their code is deployed. If I need to make a change to a database schema I want the only option to be to create a change script, or similar. I would apply that script to my dev environment and in time to testing and staging environments.</p>
<p>If the only possible path is automated tooling then by the time a deployment to production is done not only are you guaranteed that the tooling is in place, you’ve also ideally tested its execution a few times in various environments.</p>
<p>It’s always tempting for developers to leave this sort of effort to the end, but that sort of thinking is what leads to manual processes lasting years as workarounds, because other work is higher priority. On the other hand, if a developer has no option to solve their annoyance other than to do the work right the first time, rest assured they will do that.</p>
<h2 id="data-is-cheating-too">Data is cheating too</h2>
<p>If you want to take this to the logical extreme, which I do, I also don’t want to modify any data in the database directly. If I’m building out a new feature that requires data in a specific shape, either to run or simply for me to manually test with, then I would rather build the seeding scripts, or ideally the data management front end, first. The seeding scripts can help with functional/integration tests, or the data management work presumably solves some future need in the product (and if that’s not the case then don’t do it! But also consider why that data is needed).</p>
<h2 id="codified-knowledge-is-shared-knowledge">Codified knowledge is shared knowledge</h2>
<p>The other advantage of creating tooling, scripts or otherwise automating things is that anything that is codified and committed to a source repository is something that other people can reason about, read and hopefully understand. There is nothing that improves your <a href="https://en.wikipedia.org/wiki/Bus_factor">Bus Factor</a> like having a series of scripts and tools that anyone can pick up.</p>
<p>Essentially, avoid making manual work a necessity as much as is humanly possible. Of course be pragmatic about things; in particular there is nothing wrong with doing manual work once to get a feel for it before automating, but I’m loath to do something twice or three times. Additionally some manual work can be fun to opt in to, and I specifically avoid using tools like Ninite or Chocolatey for this reason, as I simply enjoy the process of building a new machine.</p>
<p>But I don’t want to touch the production environment.</p>UX Papercut :scissors:: Lights!2018-04-02T00:00:00+00:002018-04-02T00:00:00+00:00https://www.wengier.com/ux-papercut-light-stalk<p>The headlight stalk on the 2017 Honda CR-V annoys me more than something so simple has any right to annoy anyone. On the surface it follows the well-worn path of headlight stalks, and indeed wiper stalks, across the car industry that has become standard for years. You turn the end of the stalk to select what level you want the headlights to be at, you move the stalk up and down to indicate, and forward and back to use the highbeams. So far so good, all is fine.</p>
<p>The issue I have is with the shape of the stalk which has a very pleasing rounded triangle cross section. Upon first viewing this design is very nice, it allows for slightly easier reading of the words and symbols on the stalk, and it feels nice in the hand to use. I’m sure the person who designed it has a picture of it on their wall and I’m sure they’re happy with their work.</p>
<p>I suspect also, that this person has never actually owned a 2017 Honda CR-V.</p>
<h2 id="designing-for-the-common-state">Designing for the common state</h2>
<p>The problem arises when you look at one of the words printed on that slightly-more-room-than-if-it-were-round stalk. It says “AUTO” and it activates the car’s light-sensing automatic headlights. The feature is great: it’s not too aggressive like others I’ve seen on the road, turning the lights on when going under even the smallest of bridges, and it’s not too lax in its duty. If it were either of these things there is a simple sensitivity adjustment that can be made to tweak the system.</p>
<p>The awesome thing about a system like this, when it works well, is that it’s a very simple matter of turning the switch to Auto seconds after leaving the dealership and then never adjusting it ever again. In fact you can probably even forget that headlights are a thing that needs manual control, though I wouldn’t recommend it if you are in charge of a different, less capable vehicle with any sense of regularity.</p>
<p>When the headlight stalk is set to Auto, the last two inches of the rounded triangle cross section are rotated at an angle that produces just the right offset to have one point of the triangle protrude, generally forming a lumpy-looking stalk.</p>
<h2 id="strong-opinions-weakly-held-about-trivial-things">Strong opinions, weakly held, about trivial things</h2>
<p>Yes, this is a silly thing to complain about. No, this is not really that ugly, nor should it put anyone off buying the car. No, nobody else has probably ever noticed it, and certainly not the regular driver of the car where I most often observe the abomination.</p>
<p>The real problem is simply knowing that the designer(s) in question fell into that classic trap, at least classic in my realm of software design, of thinking about only the desired state and not the states or actions that real users will be in, or perform.</p>
<p>I’ve observed many a peer review of a new feature where a developer will use their own software and simply gloss over, or not even see, these types of issues: forms that only work if filled out in the right order, non-existent user help because the developer already knows what to do, extra unnecessary clicks or moves that the developer has long since internalised over hours of testing. The list could go on.</p>
<p>To truly create a good user experience you have to put yourself in the user’s shoes, and all that that entails. I don’t think you can design a piece of a car if you don’t drive the car, in all conditions. Likewise I don’t think you can design a part of a piece of software unless you use the software in question, including navigating through to the specific thing you want to test, with realistic test data and all that that entails.</p>
<h2 id="clear-your-mind-of-assumptions-and-biases">Clear your mind of assumptions and biases</h2>
<p>Removing assumptions and biases from your thinking is extremely difficult, but as they say, admitting you have a problem is the first step. Don’t assume the user knows what to do. Don’t assume they’ll navigate in a logical order. Don’t assume they know what you mean when you say “click to continue” if your button isn’t labelled “continue” etc. And don’t assume that all drivers will manually control their lights.</p>
<p>The first step of design, be it of software or cars or anything else, is to understand your users and inform yourself appropriately so that you can adopt their world view. Only then can design be more effective, and you’ll avoid ugly misalignments in your headlight stalks.</p>

<h1 id="developer-vs-coder">Developer vs Coder</h1>
<p><em>2018-03-26 · <a href="https://www.wengier.com/developer-titles">https://www.wengier.com/developer-titles</a></em></p>
<p>There are many ways to describe the difference between two types of developers: Junior vs Senior Developers, New vs Experienced Developers, Good vs Bad Developers. I’ve never been comfortable with any of these because it’s always too easy to find exceptions that prove the rule. I’ve worked with junior developers who really understand what it takes to be a successful, well-rounded programmer, and with senior developers who have years of experience yet lack a few important skills that would really allow them to be successful.</p>
<h2 id="naming-is-hard">Naming is hard</h2>
<p>A further problem is trying to talk about and label developers without it seeming like a value judgement. There are plenty of fantastic technical people who could code the most amazing algorithm quickly and efficiently, but are simply not capable of being effective mentors to their teammates. Likewise there are people who excel at constructive, actionable code review feedback, nurture those around them and inspire them to do better, but hit their technical ceiling early when things start to get complicated. Both of these types of people can be awesome developers and team members, and when paired with a great manager who knows how to play to their strengths, can really round out a team.</p>
<h2 id="developer">Developer</h2>
<p>So how then do we talk about developer proficiency? I would like to see us move away from adjectives in front of “Developer” and instead try to redefine the term. I think to truly <em>develop</em> an application someone should be able to function across different areas of the lifecycle of that application. They should be able to reason about the broader motivations at play, and know when to come to a compromise on technical purity when a business need demands it, and vice versa. A developer learns that great results come from teams of people working together effectively, and that team harmony always trumps individual concerns. Let terms like “junior” and “senior” refer only to responsibility and time served, not to knowledge or ability. In a team of developers, authority can come organically, based on nothing more than the reputation and trust formed over a series of interactions.</p>
<h2 id="coder">Coder</h2>
<p>So what then about the others? The people who strive only for purity of code and are happiest when someone else is making the decisions? I’ve always liked the term “Coder” as it sends a clear message about the priorities of the person. Is this meeting about the code? If not, then they’re probably not going to be engaged. If that is telegraphed up front by someone’s position in the team, then harmony actually increases. The coders are building the Lego pieces, and even putting them together to form the final outcome, but they’re happiest following the instruction booklet. They aren’t interested in questioning whether the outcome will fit the need, or even be useful, pretty, or solve a problem. They simply apply the objective thinking that is inherent in the job, and leave the rest to others.</p>
<h2 id="pick-your-poison">Pick your poison</h2>
<p>So which one am I? Which one are you? I think one of the benefits of using different terms is that I can be both. I’m a Senior Developer in terms of approach and thinking, and where I am now I’m also a Senior Coder, because I know the codebase very well and can get things done very quickly and efficiently. I’m changing roles soon though, and I’d probably identify as a Junior Coder there for a while. I’ll still be applying the Senior Developer thinking where appropriate, but I’ll be learning a new codebase and perhaps even making some “rookie” mistakes like coding standards violations.</p>
<p>On every project I think everyone could identify themselves as a different spot on the two axes, and I think that position would change over time. Additionally, it’s fun to spend a bit of time as one or the other; writing a small console application in the most productive yet ill-designed way can be a lot of fun and a bit of a break sometimes. It doesn’t make you a bad developer. In fact, one could argue it’s one of the qualities of a good developer to be aware that sometimes that is an option, and the right path to take.</p>

<h1 id="ddd-by-night-march-2018">DDD By Night March 2018</h1>
<p><em>2018-03-19 · <a href="https://www.wengier.com/ddd-by-night">https://www.wengier.com/ddd-by-night</a></em></p>
<p>I was lucky enough to have my submission accepted to speak at <a href="https://www.meetup.com/DDD-Melbourne-By-Night/events/247272367/">DDD By Night</a> on Thursday the 15th of March, 2018. DDD By Night is put on by the organisers of <a href="https://www.dddmelbourne.com">DDD Melbourne</a> and usually happens twice a year, consisting of 8 lightning talks of 10 minutes each.</p>
<h2 id="10-minutes-is-not-long">10 Minutes is not long</h2>
<p>This was the first lightning talk I’d ever given, and quite frankly I wasn’t sure if I could do it. I’m not known for my brevity, and a previous talk that I delivered, expecting it to last around 15 minutes, ended up with me talking for over half an hour. The topic I submitted was writing an Amazon Echo skill, because I figured that in 10 minutes I could at least explain how skills work so that people had an idea of how easy they are, at least in my opinion, to write.</p>
<p>I whipped up a quick PPT consisting of around 11 slides that covered how the Alexa system works, what an invocation looks like, how to configure the speech analysis model and finally what the JSON requests and responses look like.</p>
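<p>For a sense of what those last slides were getting at, the Alexa payloads really are just plain JSON. As a rough, hand-written sketch (not taken from the talk slides, and trimmed down from the full Alexa Skills Kit response format), a skill’s reply to an intent can be as small as this:</p>

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Writing an Alexa skill is easier than you think."
    },
    "shouldEndSession": true
  }
}
```

<p>The incoming request is similarly simple: a JSON object whose <code>request.type</code> is something like <code>IntentRequest</code>, carrying the intent name and any slot values the speech model extracted, so a skill endpoint is essentially mapping one small JSON document to another.</p>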
<h2 id="10-minutes-is-really-short">10 Minutes is really short</h2>
<p>After my first run-through I found it was way too long. Whilst I did get through all of the material in 10 minutes, I had no time for a summary, it was crammed with so much information that it became confusing, and there was no point to the talk. I don’t think anybody goes to a lightning talk to learn the details of something, so there is no point trying to teach anything.</p>
<p>The thing I actually wanted to communicate, that writing a skill is easy, didn’t need all of that specific detailed information, and if you think like a salesman for a minute, detail actually makes the pitch worse. By cutting a couple of slides about exactly how to configure the AI model, people had to take my word for it that it was easy, rather than me hoping they found the screenshots easy to digest.</p>
<h2 id="10-minutes-is-really-not-very-long-at-all">10 Minutes is really not very long at all</h2>
<p>In the end my talk contained 6 minutes of information about how skills work, and how simple the JSON requests and responses are. I didn’t need anyone to remember any of the detail; they only needed to come away remembering how easy it seemed, so I focused on sending that message. The last 2 minutes of my talk basically consisted of me yelling at them to go and write some skills, because it’s really easy. I left 2 minutes spare at the end because DDD By Night rewards short talks with chocolate.</p>
<p>I rehearsed about 6 times so I was sure of my timing, and the talk went well, was well received, and multiple people told me they might have a go at writing an Alexa skill. Can’t ask for more than that really.</p>
<p><strong><em>So go and write an Alexa skill!</em></strong></p>
<h2 id="update-10-minutes-might-just-be-long-enough">Update: 10 Minutes might just be long enough</h2>
<p>Maybe, just maybe, a 10 minute talk can actually convey enough information <a href="https://twitter.com/centur/status/976368531309699072">to be useful</a>. Slides from the talk are available here: <a href="https://www.slideshare.net/DavidWengier/introduction-to-amazon-echo-skills">https://www.slideshare.net/DavidWengier/introduction-to-amazon-echo-skills</a></p>

<h1 id="powerpoint-karaoke">PowerPoint Karaoke :microphone:</h1>
<p><em>2018-03-13 · <a href="https://www.wengier.com/powerpoint-karaoke">https://www.wengier.com/powerpoint-karaoke</a></em></p>
<p>I’ve done PowerPoint Karaoke (PPTK) twice now, and seen it done a bunch of times, and while I certainly enjoy the challenge, a lot of people struggle with it and find it unbearable. In this post I thought I’d have a go at writing down what I think should be done to deliver a good PPTK talk. I must admit that at my last PPTK talk I naturally failed to implement all of these ideas, because nerves, but nobody can expect to be perfect the first time. Or second. Or third.</p>
<p><em>Disclaimer:</em> This blog is just my thoughts and opinions, I don’t claim to be good at PowerPoint Karaoke, or indeed anything else I blog about, or do in life.</p>
<h2 id="preparation">Preparation</h2>
<p>I like to think that everyone knows preparation is the key to a successful presentation, but people assume it’s impossible to prepare for a talk about an unknown topic, so they don’t bother. Whilst it is true you can’t prepare for the content or topic, you can at least mentally prepare yourself for the fact that it will be unknown, and hopefully prevent some of that “deer in headlights” effect.</p>
<p>The job of most presentations is not to lecture but to entertain, and this is even more apparent in PPTK talks, so prepare yourself mentally for that. Treating it more like an open-mic stand-up comedy routine than a technical talk, and making your only aim to entertain a few people for a few minutes, will also put you in a better frame of mind.</p>
<h2 id="take-your-time">Take your time</h2>
<p>Most PPTK talks are expected to go for about 5 minutes, and given the natural tendency for people to talk faster in front of crowds, and the desire to rush through uncomfortable information, you can end up with really short talks that last as little as 2 minutes. Given this, it’s totally fine to slow down a little at the start, and take a minute to think about your topic once you find out what it is. Linger on that first slide a bit and think about 3 or 4 key things about the topic. This will give you some ideas to help once you start launching into the slides. If your topic is “raising children”, for example, you might quickly think to yourself “children are loud, messy, and the prospect of having them is scary”. Those 3 things are enough of a framework to allow you to proceed and not have every slide be a complete curve ball.</p>
<p>Usually at PPTK everyone in the audience knows that you’re doing PPTK, so they will give you some leeway and allow you to take this time. You don’t want to leave a lot of dead air of course, but even just saying out loud “oh, topic x, that’s interesting, hmmm” will buy you the time you need to do some thinking, because the audience knows you’re as unprepared as they are.</p>
<h2 id="analogies-are-your-friends">Analogies are your friends</h2>
<p>The best presentations have slide content that is not a literal representation of what is being talked about. With PPTK you can pretty much guarantee that will be the case, so embrace it and use analogies to map your topic to the slide. For example if the slide shows a picture of a starfield then say “Having kids can sometimes feel like being adrift in space”. The audience will appreciate that you talked directly to the slide, but about your topic, and will be impressed. Don’t forget: They know you’re doing PPTK so the expectations and rules are different.</p>
<p>You can also leverage their knowledge of PPTK and actually directly address the slide as a character. “There are hundreds of unique stars in this picture, and there are hundreds of theories about raising kids”. Again you’ve achieved one of the ultimate goals of a PPTK talk, that of being able to adapt your topic to the content, and the audience will appreciate it.</p>
<p>Once you’ve tied the topic and slide together you’re free to just keep talking about the topic, which turns the PPTK from “oh dear, how can I connect these two things” into “I just need to waffle on a bit”. Having your plan of 3 or 4 ideas really helps here, as it gives you boundaries on what to talk about, and a natural trigger to switch to the next slide once you’re done with a single idea.</p>
<h2 id="it-is-still-a-talk-about-a-topic">It is still a talk about a topic</h2>
<p>A good PPTK talk is a series of random slides and an adept speaker who can link each one to their topic painlessly. A great PPTK talk is where the speaker does that linking but then continues talking, and demonstrates not only that they can deliver a few jokes, or talk to random content, but can also deliver a proper talk on the topic that was given.</p>
<p>Don’t forget however that the primary goal is entertainment, not education, so you don’t need to worry if your talk doesn’t have a beginning, middle and end. Nobody will remember anything you said anyway :)</p>