Mark Oliver's World

All Blog Posts

Newest posts are first.

Contents

  1. This Service Descriptor Is Keyed Your Service Provider May Not Support Keyed Services
  2. Make A Wireshark Capture File More Focussed
  3. Forcing Tests To Run In A Specific Order Using MSTest
  4. Upgrading From IdentityServer Version 2 To IdentityServer Version 4
  5. Docker Compose Scaling Gotcha Part 2
  6. Using ObjectDumper Breathed New Life Into A Complex System
  7. How To Prepare For The First Day Of A New Starter
  8. My Thoughts On Using GitHub CoPilot
  9. 50 Ways To Retain An Employee
  10. Recording Video From A Nintendo Switch
  11. Caching The Contents Of This Site
  12. Adding More Disk Space To Azure DevOps Agents
  13. Four Years Have Gone
  14. Querying Console Output In Your Tests
  15. Filtering A Pcap VoIP Recording To A Specific Call In Wireshark
  16. Playing A Captured VoIP Recording Through Wireshark
  17. Replaying Network Captures
  18. Explicitly Including A DLL In A Project
  19. Hosting A Large Number Of Participants In Microsoft Teams
  20. Dynamically Changing Docker Container Limits
  21. Docker Compose Parallel Gotcha
  22. Intermittent Playwright Exception
  23. Its Time To Talk About Lego
  24. Docker Compose Scaling Gotcha
  25. Three Years Have Gone
  26. Testing In Dotnet
  27. Why Byte Arrays Are A Bad Idea When Dealing With Large Data
  28. Azure Static Web App Invalid SSL Certificate
  29. Reading UDP Packets With SharpPcap
  30. How To Confirm 2 Audio Files Are Identical
  31. Microsoft Teams Compliance Recording _ Part 2
  32. Microsoft Teams Compliance Recording
  33. February 2022 Lego Builds
  34. Meeting Timer For Microsoft Teams Zoom Google Meet
  35. Working With Ffmpeg
  36. Two Years Have Gone
  37. Summing Up 2021
  38. Sorting Lego
  39. Remote Meetings
  40. Remote Pair Programming
  41. Atlassian Products
  42. First Day At New Job Fail
  43. Different Staging Environments In Azure Static Web Apps
  44. Speeding Up The Initial Load
  45. Setting Up A Sitemap
  46. Installing A Garden Office _ Part 7
  47. Installing A Garden Office _ Part 6
  48. Installing A Garden Office _ Part 5
  49. Lego Saturday
  50. Installing A Garden Office _ Part 4
  51. Installing A Garden Office _ Part 3
  52. Getting Into Blogging
  53. Installing A Garden Office _ Part 2
  54. Adding Category Pages
  55. Job Search 2021 _ The End
  56. Job Search 2021 _ Working With Recruiters
  57. Job Search 2021 _ A Live Programming Test
  58. Installing A Garden Office _ Part 1
  59. Tagging Posts
  60. Highlighting Search Terms
  61. Job Search 2021 _ Another Programming Test _ Feedback
  62. Adding A Site Search
  63. Job Search 2021 _ Another Programming Test
  64. Job Search 2021 _ Choosing The Right Role
  65. My Mentoring Journey
  66. Making This Site More Accessible
  67. Customising The Blazor Loading Page
  68. Testing This Site
  69. Job Search 2021 _ Technical Tests
  70. Secrets Within Blazor WASM
  71. Why Does It Feel Weird That People Are Reading My Blog
  72. Getting Twitter To Work
  73. Being A 5 Year Old
  74. Its Been A Year
  75. How I Built This Blog Part 3
  76. Error When Incorrect Post Uri Used
  77. Job Search 2021 _ Using Your Friends
  78. Job Search 2021 _ Where To Look
  79. Job Search 2021 _ Initial Steps
  80. How I Built This Blog Part 2
  81. Being The Executor Of Someones Will
  82. Why Can I Not Jump Direct To A Blog Post
  83. Setting Up My Custom Domain
  84. My OBS Configuration
  85. How I Built This Blog

This Service Descriptor Is Keyed Your Service Provider May Not Support Keyed Services

I just came across this error without any code changes.

            System.InvalidOperationException
              HResult=0x80131509
              Message=This service descriptor is keyed. Your service provider may not support keyed services.
              Source=Microsoft.Extensions.DependencyInjection.Abstractions
              StackTrace:
                at Microsoft.Extensions.DependencyInjection.ServiceDescriptor.ThrowKeyedDescriptor()
                at Microsoft.Extensions.DependencyInjection.ServiceDescriptor.get_ImplementationInstance()
                at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory.Populate()
                at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteFactory..ctor(ICollection`1 descriptors)
                at Microsoft.Extensions.DependencyInjection.ServiceProvider..ctor(ICollection`1 serviceDescriptors, ServiceProviderOptions options)
                at Microsoft.Extensions.DependencyInjection.ServiceCollectionContainerBuilderExtensions.BuildServiceProvider(IServiceCollection services, ServiceProviderOptions options)
                at Microsoft.Extensions.DependencyInjection.ServiceCollectionContainerBuilderExtensions.BuildServiceProvider(IServiceCollection services)
                at MediaServer.Startup.SetupRateLimitWithRetryStrategyPipeline(IServiceCollection services) in Startup.cs:line 129
                at MediaServer.Startup.ConfigureServices(IServiceCollection services) in Startup.cs:line 86
                at System.RuntimeMethodHandle.InvokeMethod(Object target, Span`1& arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
                at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
                at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.InvokeCore(Object instance, IServiceCollection services)
                at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.<>c__DisplayClass9_0.<Invoke>g__Startup|0(IServiceCollection serviceCollection)
                at Microsoft.AspNetCore.Hosting.StartupLoader.ConfigureServicesDelegateBuilder`1.<>c__DisplayClass15_0.<BuildStartupServicesFilterPipeline>g__RunPipeline|0(IServiceCollection services)
                at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.Invoke(Object instance, IServiceCollection services)
                at Microsoft.AspNetCore.Hosting.ConfigureServicesBuilder.<>c__DisplayClass8_0.<Build>b__0(IServiceCollection services)
                at Microsoft.AspNetCore.Hosting.StartupLoader.ConfigureServicesDelegateBuilder`1.<>c__DisplayClass14_0.<ConfigureServices>g__ConfigureServicesWithContainerConfiguration|0(IServiceCollection services)
                at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.ConfigureServices(IServiceCollection services)
            
          

😨

Turns out a minor nuget package upgrade caused it (in this case Polly v8.0.0 to v8.3.1).

I found this issue on github which is exactly what I am seeing:
https://github.com/dotnet/runtime/issues/95789

It basically says that something has changed in the dotnet DI registrations, and now anything that uses a "Keyed" registration is potentially going to break by throwing this exception.

This is because of a change in v8 of Microsoft.Extensions.DependencyInjection.
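To make it more concrete, here is a minimal sketch (not our code, and the IMyService types are just placeholders) of the kind of registration that triggers it: a keyed descriptor in the collection, built by a service provider implementation that pre-dates keyed services.

            using Microsoft.Extensions.DependencyInjection;

            public interface IMyService { }
            public class MyService : IMyService { }

            public static class KeyedRepro
            {
                public static void Main()
                {
                    var services = new ServiceCollection();

                    // Any keyed registration in the collection (newer Polly versions add
                    // some internally) produces a keyed ServiceDescriptor...
                    services.AddKeyedSingleton<IMyService, MyService>("some-key");

                    // ...and building the provider with an older Microsoft.Extensions.DependencyInjection
                    // that does not understand keyed descriptors throws the InvalidOperationException above.
                    var provider = services.BuildServiceProvider();
                }
            }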

The only way I could solve it was to downgrade Polly back to version 8.2.1.

This was in a dotnet 6 app.

I don't know what else to do to work around this error, as we are not registering anything in a "Keyed" way, nor are we directly referencing v8 of Microsoft.Extensions.DependencyInjection. So I am concerned that we are going to be "stuck" at certain versions of packages, and eventually be unable to upgrade anything. 😱

Someone please Tweet me if I am wrong!


Make A Wireshark Capture File More Focussed

Wireshark capture files can be massive. So you may want to remove all the unnecessary data from the file before further analysis.

This can be done via the UI, but it is easier to do it via the command line.

Run the following tshark command, from your Wireshark installation directory:

            tshark.exe -r <filename of original capture file> -R <display filter> -w <filename of smaller capture file>
            
          
  • <filename of original capture file> is the full path to the capture file.
  • <filename of smaller capture file> is the full path to the new capture file.
  • <display filter> is the filter criteria you wish to use to select the capture packets you want to keep. (These are exactly as you would enter them in the Wireshark UI.)

Examples:

  • "sip or sdp or rtp" - Will filter for only the traffic that is used in VoIP calls
  • "ip.addr == 192.168.1.10" - Will filter for only the packets that involved the IP address 192.168.1.10
  • "tcp.port eq 25 or icmp" - Will filter to only SMTP (port 25) and ICMP traffic:

This will apply the filters to the file, and only the packets that match the filter will be in the output file.


Forcing Tests To Run In A Specific Order Using MSTest

By default, MSTest will run all tests in alphabetical order based on their name.

This means you should be able to know when a test will run in relation to another.

For example, if you prefix your tests with TXXX, where XXX is the test number:

            T001_test_something
            T002_test_something_else
            T003_third_test_to_be_executed
            
          

Then these tests will (by default) run T001 first, followed by T002, T003 etc...

Watch out though: the ordering is based on string comparison, so without zero padding a test named T10 will run before T2!

But you can run them in your own order without changing the names, and the way to do that is to use the Priority attribute.

Add this to the top of each TestMethod you wish to define an order for, e.g.

            [TestMethod, Priority(1)]
            public async Task ATestThatWillRunFirst()
            {
                // Arrange
                // Act
                // Assert
            }

            [TestMethod, Priority(2)]
            public async Task AnotherTestThatWillRunSecond()
            {
                // Arrange
                // Act
                // Assert
            }
            
          

I don't recommend this approach, as any test should be atomic and therefore should not care what order it is run in.

In fact this is vitally important if you are running tests in parallel!

BUT there may be the odd occasion when you want them to run in a specific order, such as complex Integration Tests that require significant setup, and then this can be useful.


Upgrading From IdentityServer Version 2 To IdentityServer Version 4

I have recently needed to upgrade from IdentityServer v2 to IdentityServer v4.

This was relatively painless (upgrade via nuget), but I had these 3 issues:

1 - PublicOrigin is no longer exposed, so you need to add it manually:

            
            //Add in PublicOrigin as it has been removed in v4 of IdentityServer: https://github.com/IdentityServer/IdentityServer4/issues/4535
            app.Use(async (ctx, next) =>
            {
                ctx.SetIdentityServerOrigin("YourRequiredPublicOrigin");

                await next();
            });
            
          

2 - Add in ApiScopes:

            .AddInMemoryApiScopes(ApiScopes.GetApiScopes())
            
          
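The ApiScopes.GetApiScopes() call above is just a helper in our own code that returns the scopes to register. A minimal sketch of that kind of helper (the scope names here are made up) might look like:

            using System.Collections.Generic;
            using IdentityServer4.Models;

            public static class ApiScopes
            {
                // Minimal sketch: return the API scopes that clients are allowed to request.
                public static IEnumerable<ApiScope> GetApiScopes() =>
                    new List<ApiScope>
                    {
                        new ApiScope("api1", "My API"),
                    };
            }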

3 - Everything worked, except my Postman tests. Weird!

All my C# code was fine, which confused me.

After a lot of searching, I came across this page (English version) by Gabriel Bareclos.

It said:

I changed the content-type of the request from form-data to x-www-form-urlencoded.

So in my postman, I changed:

            
              "mode": "formdata",                     "formdata": [
            
          

to this:

            
              "mode": "urlencoded",                     "urlencoded": [
            
          

That was it. That fixed my postman tests!

Note - I should have looked at the docs again, although it is something that might pass you by:

            POST /connect/token
            CONTENT-TYPE application/x-www-form-urlencoded

                client_id=client1&
                client_secret=secret&
                grant_type=authorization_code&
                code=hdh922&
                redirect_uri=https://myapp.com/callback
            
          

Docker Compose Scaling Gotcha Part 2

Check out my previous gotchas!

Today I ran this command to scale service1 from the 5 copies already running to 10 copies:

            docker compose -f config.env up -d --scale service1=10 service1
            
          

What do you think happens?

Well, again, not what I was expecting!

I was expecting service1 to have 10 scaled versions (which it did), but what I was NOT expecting was that it restarted the 5 versions that were already running!

So what is going on?

When you run a docker compose command it looks at the config you have given it with the new command, and compares it to the previous config setup you gave it.

IF the config is different, its default is to restart the existing services to bring them in line with the new configuration.

When you are scaling, that is not necessarily what you want, so you need to be more explicit about what you want it to do.

Fortunately there is a --no-recreate option we can provide to compose that tells it we just want to scale the services, and ignore any changes to the config for the existing ones.

This makes the command look like this:

            docker compose -f config.env up -d --scale service1=10 --no-recreate service1
            
          

Source

I wish Docker would tell you that services are going to restart before doing it, as on a shared instance, you have no way to identify if the config has changed since starting a service.

Oh well another reset of the number of days since I took out PRODUCTION!


Using ObjectDumper Breathed New Life Into A Complex System

For a previous article on Testing in Dotnet, I wrote about the package ObjectDumper.Net

I use this for lots of things, but it is great to add to a test to really see what an object looks like.

So it was obvious for me that I needed to write more about that!

But when I started writing this article (last summer), ChatGPT was everywhere (not that this has changed), so I thought, let's see what it can tell me about ObjectDumper.Net as a start to the article.

Overview From ChatGPT

Prompt: Tell me why we should use ObjectDumper.Net

ChatGPT's response was:

ObjectDumper.Net is a useful tool for developers working with .NET applications, especially during debugging and data exploration processes. Here are some reasons why you might consider using ObjectDumper.Net:

  1. Easy Object Visualization: ObjectDumper.Net allows you to visualize the contents of an object in a human-readable format. It can recursively traverse object graphs and present the properties and values of each object in a structured manner. This makes it easier to understand complex object hierarchies and quickly identify the data you're interested in.
  2. Debugging Aid: When debugging an application, it's often necessary to inspect the state of various objects at runtime. ObjectDumper.Net simplifies this process by providing a convenient way to dump object data to the console or a text file. You can quickly examine the properties and values of objects without having to write custom code to traverse and display them.
  3. Rapid Prototyping: During the early stages of development, when you're experimenting with different code structures and data models, ObjectDumper.Net can be a handy tool. It allows you to visualize objects and their relationships, making it easier to validate your assumptions and verify that your code is behaving as expected.
  4. Exploration and Documentation: When working with unfamiliar libraries or APIs, ObjectDumper.Net can assist in exploring the available objects and their properties. By dumping objects to the console or a file, you can study their structure and understand the data they hold. This knowledge can then be used to document the behavior of third-party components or to aid in writing code that interacts with those objects correctly.
  5. Customization and Extension: ObjectDumper.Net provides various customization options to tailor the output to your specific needs. You can control the depth of object traversal, exclude specific properties, or define custom formatting for property values. This flexibility allows you to adapt ObjectDumper.Net to your preferred output style or integrate it seamlessly into your existing debugging workflows.
  6. Open Source and Community Support: ObjectDumper.Net is an open-source project, which means it benefits from community contributions and support. If you encounter any issues or have suggestions for improvements, you can engage with the community, report bugs, or even contribute your own enhancements to the project.

Overall, ObjectDumper.Net is a valuable tool for .NET developers that simplifies the process of visualizing and exploring object data. Whether you're debugging, prototyping, or documenting your code, ObjectDumper.Net can save you time and effort by providing an easy-to-use mechanism for examining the contents of objects in a human-readable format.

I was very happy with the response. I found it helpful, and it covered everything I would have said about it.

So let me tell you a bit about how ObjectDumper.Net has helped me to view the internals of a complex application.

Usage Example

I have inherited a software system that integrates with a very complex third party system. It is used to record calls that are happening on a Phone Exchange (PBX).

This system is very complex and must maintain a lot of state when receiving data from the third party application. It requires that state to follow a phone call through the changes within the PBX, and allows the recording software to show useful meta data about the call. e.g.

  • Who made the call
  • Who received the call
  • What call queue it went through
  • How long was the call
  • What other participants were involved in the call
  • Were any keypad buttons pressed during the call
  • etc...

Think of it like this: The call recording software is primarily used in call centres...

"This call is being recorded for safety and for training purposes"

... so thousands of calls go through a PBX in a day. They come in. They go out. They get transferred between departments, people & groups of people. They get put on hold. They get sent to voicemail etc...

All of these changes need to be retained to allow the recording to be identified for those "training purposes".

It should be clear now, that the journey of a call can be very complex!

Debugging this system that had no documentation on how it worked, nothing on how it was architected/designed and no automated tests was incredibly hard!

Quickly in using and debugging the system, it became clear that the state of the "recording" was the most important part.

This state data was dynamic, and had to morph for different recording PBX types.

The complexity and fragility of this system and the fact that it was used in 100s of production environments meant that we needed to tread lightly in the code for fear of breaking things.

We needed a way to log out the contents of the state as it changed, which would allow analysis of a call's journey through the system and its state changes, with only a few small code changes.

What we needed was to output the state without knowing what it was made of!

In enters ObjectDumper.Net!

If we add a small log message that has the state "dumped" out, we could add a huge amount of context to what was going on:

            Logger.Debug($"State of the call at {timestamp}: {call.State.Dump()}");
            
          

This was a game changer for this system, and the overhead of the "Dump" was minimal for the benefits it gave us (although we could compile it out if we wanted to).

I highly recommend ObjectDumper.Net as a quick way to view the contents of an object without retrospectively adding ToString() methods everywhere (though I highly recommend those too!)
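If you have not used it before, here is a minimal, self-contained sketch of a Dump() call. The Person type is just an illustration, and it assumes the ObjectDumper.NET NuGet package is referenced (add a using for its namespace if your setup needs one):

            using System;

            public class Person
            {
                public string Name { get; set; } = "Ada";
                public int Age { get; set; } = 36;
            }

            public static class Demo
            {
                public static void Main()
                {
                    var person = new Person();

                    // Dump() (from the ObjectDumper.NET package) walks the object graph and
                    // returns a readable string, so you can log an object's contents without
                    // writing a ToString() override for every type.
                    Console.WriteLine(person.Dump());
                }
            }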

I wrote more about why this was useful for tests in this post Querying Console Output In Your Tests.

I will leave you with a final comment - ObjectDumper.Net will write out the whole contents of an object, and it can be full of unhelpful data, so use it wisely.

Look at the output it is giving, and ideally replace it with more structured logs if you can!


How To Prepare For The First Day Of A New Starter

When someone is joining the company for the first time, there is a lot of preparation required to bring them onboard and make them feel a part of the company and the team.

I have joined companies in the past where I have turned up and had to:

  • Clear out the previous occupants desk
  • Find a computer and a chair
  • Request licenses for the basics e.g. Visual Studio

Also while working remotely, I have also turned up to:

  • No access to the company network/website.
  • No access to my manager for several days and no way to contact anyone else.

Therefore when you are bringing a new starter into the team, this is the list I use to help me onboard them:

Physical things

  • PC
  • Monitor(s)
  • Headset/Microphone/Speakers
  • Webcam
  • Notebook & Stationery
  • Chair
  • Desk

Virtual things

  • Software Logins & Licenses e.g. Visual Studio, GitHub, Jira, HR System, MS Office, Email
  • An Organisation diagram
  • Links to all the relevant documentation - Specifically a team specific "New Starter Guide"

Human things

  • Someone to greet them on their first day
  • Someone to show them around the premises on their first day
  • Someone to introduce them to the team, department, key stakeholders, HR and Site Services
  • A consistent point of contact for all questions
  • A definitive plan for the first day. One person should be given the job (& time) to ensure that occurs.
  • Ensure that their line manager contacts them as soon as possible. Even remotely or in a different time zone, this is a must do.

Onboarding does not stop on Day 1

Then after they turn up on the first day, these additional things will be needed, but not necessarily on that day:

  • Communicate frequently with them.
  • A Training Plan - To start with, this will be company specific training e.g. How to use the software systems. Long term training plans come later.

Bringing a new person onto your team well is so important for them to start their career with you. If, as a new joiner, you don't feel like part of the team quickly, you will likely never achieve your full potential, and at worst will leave soon, wasting all the effort the company has put in.


My Thoughts On Using GitHub CoPilot

Recently I enabled GitHub Co-Pilot in Visual Studio.

This was following a talk about AI with Silicon South.

I have heard loads about it, but had not had the urge to use it.

To give this post context, I work on a large existing code base (about 1 million lines of code) with a unique and bespoke platform, so I didn't feel it would help too much, hence the lack of fervour to try it.

Today I changed that, so here are my random thoughts when using it:

Day 1

Moving code around

Working on fixing some broken tests, it offered nothing, which was no surprise as I was moving code around.

Comments

The first time it fired, it wrote a comment for me that was spot on!

I typed:

            //We have received a new snapsh
            
          

it said:

            //We have received a new snapshot, so we need to update our internal list of users
            
          

Its comments are very generic, but they are a good place to start.

Existing Code versus New Code

It likes to create new content, but does not like to add to existing things.
e.g. it will create a unit test for you, but won't fill in a blank method.

So this encourages a different way of writing than I am used to. I would normally plan by building out skeleton code and then gradually fill it in.

CoPilot wants to build it immediately!!

To make it start making suggestions within existing code, you need to "prompt" it as if it was chatGPT.

The best way of doing that is to start writing comments or pseudo code, and it then gets a much better idea.

Like a Newbie

It often gives incorrect code, which can be very confusing, as at first glance it looks correct.

It is like working with a person who understands how to code and the classes of the system, but not what it is supposed to do, or how it works. There is no Domain Knowledge.

Day 1 Conclusion

After using it for a day, it saved me a small amount of work.

I look forward to using it on newer things to see its power, but right now, it won't be cost effective for legacy code maintenance.

Day 2

I spent the day adding new tests to existing complex code, that has no current tests.

I found that it is good at guessing, but it gets it wrong a LOT.
This actually slowed my flow down, as I kept having to look at what it was suggesting, confirm whether it was correct, and actively discard it before writing what I needed.
Again, it felt like someone was really trying to help, but kept butting in with suggestions that had not been thought through.

Just to be clear - if I was pair/mob programming with a real person, this is the kind of thing I would want to see, as it gives me an opportunity to help them learn.

BUT Copilot is a tool, and it is here to help ME, and not for ME to teach IT. So it comes off as annoying!!

The best thing it did, though, was save me typing when generating test data constructs.

Final Conclusion

My biggest problem is it suggests things that are clearly (well to me) wrong.

This was mostly when suggesting parameters that are completely different types.

This is very annoying as it breaks concentration for me, because I am having to solve a problem that would not be introduced normally.

I liken this to a small child asking you questions while you are typing (Why is the sky blue? What is your favourite Carrot? What’s faster? Fire or dust?), sometimes you just end up typing what they are saying and have to redo it all again.

I am still open minded about Co-Pilot, ChatGPT and all the other AI tooling being developed, but for my current company project, it is not worth the cost to me, both financially and with my time.


50 Ways To Retain An Employee

My friend Hira suggested I write a post detailing 50 ways to retain an employee. This is all I have right now:

  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday
  • Give them expensive lego sets every month
  • Free Krispy Kreme Donuts on Friday

Thanks Hira. This is my favourite post 😃


Recording Video From A Nintendo Switch

My son has decided he would like to have a YouTube channel showing him playing games on his Nintendo Switch.

He sees lots of YouTubers playing games on YouTube, so assumes it is something easily done.

So this is my exploration ( as an awesome parent 😃 ) into how to make that happen.

I have some experience with OBS, FFMPEG and recording Microsoft Teams, so in the words of Jeremy Clarkson "How hard can it be?"

My first stop is Google, which turns up the built-in recording feature in some Switch games, but that won't work for what we want.

We want to be able to record a whole session of gameplay and then turn it into a YouTube video; the built-in feature is really only for short replays.

My first thought was a Video Capture device that would link to my PC, which as it turns out is the 2nd option touted in this video: https://www.youtube.com/watch?v=zmdeOuUW5_8

They also suggest using a camera to directly record the consoles screen, which is not good enough for us.

Everything I looked at always suggested a Video Capture device, and based on several YouTube videos, suggest that for the most part even the cheapest ones are ok.

So with that in mind, I decided to go a step above the cheapest and get a brand I had heard of before. I went with the Mirabox USB 3.0 4K Pass-Through Capture Card for £30 https://www.amazon.co.uk/dp/B091YCHYP3?psc=1&ref=ppx_yo2ov_dt_b_product_details

It is just a small box with a USB cable and 2 HDMI Ports. You plug the Nintendo Switch base station into the Input, you connect a HDMI cable from the Output back to the TV, and you plug the USB into a PC to record the video on.

Simples!

It works like a charm, my son can play on his Switch on the TV as normal, and we can record his gameplay on a computer in the corner.

This Box will work for anything with a HDMI cable output that you would normally plug into the TV, so opens a world of possibilities!


Caching The Contents Of This Site

This site is all stored statically (to make it completely free to host in Azure Static Web Apps), but it is generated dynamically using Blazor WebAssembly!

Find out more how I built this blog here

So because of that, it needs to be fast to run, and more importantly fast to download (WebAssembly means the dotnet runtime has to be downloaded, as well as the code for the site).

Once the Blazor code (dotnet runtime + site code) is downloaded, this site is all run in memory, with no further server access (ignoring assets).

So how does that work for the blog posts?

The posts themselves are markdown text files. This makes them very small, and easy to keep cached in memory.

That means, whenever we access a blog post, its full contents are kept in RAM.

However we don't want to read all the posts if we don't have to. Therefore we fetch each post's markdown only ONCE, no matter how many times the user accesses it (assuming they don't navigate away from the blog).
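This is not the site's actual code, but a minimal sketch of that fetch-once idea (the posts/{name}.md path is made up for the example):

            using System.Collections.Concurrent;
            using System.Net.Http;
            using System.Threading.Tasks;

            // Not the site's real implementation - just a sketch of caching the markdown in memory.
            public class PostCache
            {
                private readonly HttpClient _http;
                private readonly ConcurrentDictionary<string, Task<string>> _posts = new();

                public PostCache(HttpClient http) => _http = http;

                // The markdown for each post is downloaded at most once per session;
                // every later request is served straight from the in-memory dictionary.
                public Task<string> GetMarkdownAsync(string postName) =>
                    _posts.GetOrAdd(postName, name => _http.GetStringAsync($"posts/{name}.md"));
            }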

However there are times we need to have all the posts. This is the case when we do a search or view all blog posts.

For example, when we search, we search the title and the contents of the posts. Which means we need them all in memory to do that.

This may not be sustainable in the long term as the number of posts grow, but for a few hundred it should be fine (famous last words?).

Some quick calculations show that with the current 73 posts, they fit in 270KB (I don't keep the rendered view, just the raw text), which is smaller than a lot of images floating round the web.


Adding More Disk Space To Azure DevOps Agents

Out of the box, Azure DevOps Agents have very limited disk space in which to manage your source code.

All of these machines have at least 10 GB of free disk space available for your pipelines to run. This free space is consumed when your pipeline checks out source code, downloads packages, pulls docker images, or generates intermediate files.

https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#hardware

For a very large Visual Studio solution (over 450 projects), this meant we ran out of disk space!

Based on our analysis, we are building over 7GB of data during solution build alone, so with only 10GB to play with, running tests, building installers etc means we do not have enough space.

Unfortunately Azure DevOps does not provide a way of running an MS Hosted Agent with a custom disk size, which left us with only one option: a self-hosted setup.

This means setting up your own machine (build agent) to do the Solution Build etc on.

Microsoft give us the instructions to do this, and suggest using Virtual Machine (VM) Scale Sets to build multiple streams in parallel.

The process we need to follow is:

  • Create a VM
  • Install the build software on that VM
  • Generate an Image of that VM
  • Setup a VM Scale Set using that Image
  • Tell DevOps to use the VM Scale Set

So this is how we did that:

Create VM

On your machine:

Find the right subscription to create the VM scale set etc on:

az account list -o table

az account set -s <your subscription ID>

Then start the VM image creation

az group create --location <azure region required e.g. westus> --name <group name>

List of available regions here: https://azure.microsoft.com/en-gb/explore/global-infrastructure/products-by-region/?products=virtual-machine-scale-sets

az vm create --name <VM Name> --resource-group <group name> --image <Image To Use> --admin-username <user name> --public-ip-sku Standard

Note "Image To Use" should be one of these: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage#default-images

Install the build software on that VM

Now Connect (RDP) into the VM created.

(via the Azure Portal "Connect" option next to the VM)
Showing the location of the Connect option in Azure portal

Install the software you need to build your system, such as Visual Studio Build tools, Azure Powershell etc.

Note - The Azure build agent software will install automatically later, but you still need the build tools you rely on. Some of this can be done via your Azure DevOps pipeline, but things like Visual Studio need to be pre-installed.

e.g. All MS Hosted build agents have this software pre-installed, so you may need to install some of it yourself if you are not explicitly installing it in your pipelines: https://github.com/actions/runner-images

Reboot the VM to ensure all installations are completed.

Generate an Image of that VM

Connect back into the VM, and from an administrator Console Window / Command Prompt, run:

C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Wait for the VM to finish generalization and shutdown (in Azure you need it to say “STOPPED”). Do not proceed until the VM has stopped.
Showing the status of the VM in azure portal as stopped

Then on your machine, run the following:

Deallocate the VM in Azure (stops you being charged for it)

az vm deallocate --resource-group <group name> --name <VM Name>

Tell Azure the VM has been generalised ready for imaging:

az vm generalize --resource-group <group name> --name <VM Name>

Create the VM Image from the generalised VM:

az image create --resource-group <group name> --name <VM Name Image> --source <VM Name>

Setup a VM Scale Set using that Image

And finally create the scale set using the image created:

az vmss create --name <group name>pool --resource-group <group name> --image <VM Name Image> --vm-sku Standard_D2_v4 --storage-sku StandardSSD_LRS --instance-count 2 --disable-overprovision --upgrade-policy-mode manual --single-placement-group false --platform-fault-domain-count 1 --load-balancer '""' --orchestration-mode Uniform --admin-username <user name> --public-ip-per-vm

Note: this VM has a public IP so you can RDP into it. You may also want to tweak some of the hardware characteristics dependent on your needs.

Tell DevOps to use the VM Scale Set

Setup the Agent Pool as per these instructions: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops#create-the-scale-set-agent-pool

Then adjust your pipeline to use it. For a yaml file, you would add it in the pool section of your stage:
A picture of the section of the pipeline file that shows a scale set in use

You are done

I found that to get Azure Devops to recognise and use the new scale set, you need to WAIT. It takes AGES to recognise and use the new set (in excess of an hour for me).

You will also need to start a pipeline run that uses the scale set to make it do things too.

Making changes to the VM Image

Run the steps in "Cleanup" to remove the existing scale set and the generalised VM.

Then create a new VM using your custom image:

az vm create --name <VM Name> --resource-group <group name> --image <VM Name Image> --admin-username <user name> --public-ip-sku Standard

Install the additional software, and make any other VM changes you need.

Then follow the steps for "Generate an Image of that VM" and "Setup a VM Scale Set using that Image"

Cleanup

To Cleanup if you need to run it again, run these:

Delete the scale set

az vmss delete --name <group name>pool --resource-group <group name>

Delete the VM

az vm delete --name <VM Name> --resource-group <group name>


Four Years Have Gone

D,

The last year I have tried to move on, to see more people, to do more things, to reconnect with myself, to reconnect with our family, to live my life.

It has meant I needed to let go of some of that crippling grief.

I still miss you every day, but I feel you all the time... in the small things: Seeing an odd pair of socks, a crazy pair of shoes, a stupid crude joke, the music you listened to and the movies we loved and most of all in the joy in our kids faces.

I never thought I would have to go through life without you, I expected you to be there for it all.

4 years have gone by, and I have felt every day of that time, the grief has often swallowed me whole, but I have pushed through it, and I am here.

Love you always, M.


Querying Console Output In Your Tests

Hello, and welcome to my blog post for December 16th of the C# Advent Calendar 2023.

Organised by Matthew Groves.

I won't tell you about me, but if you are interested, check out my About me page.


Last year in one of the C# advent calendars, I wrote about Testing in Dotnet.

I find that Unit Testing is a topic that is not discussed as often as needed, and considering that article has had over 2.1k views, I decided that it made sense to write about testing once again.

This time I am going to talk about how I tested a black box system that despite having the code available, could not be refactored to be easier to test with unit tests.

The system in question was a telecomms messaging parser, and was a large and complicated beast. It had no documentation, and none of the implementing software engineers were still with the business.

So inevitably, it had no unit tests either 😱! However, it was not completely bereft of tests; it did have some higher-level integration-style tests.

These tests took a network capture (see Wireshark), ran it through the parser, and confirmed that the emissions of the parser were what was expected (think of it like snapshot testing).

This in theory is a very sound testing strategy for this kind of setup, however the emissions of the parser were very generic, and did not indicate if certain conditions had been met. 😢

What this meant is that over time, the system changed internally and certain code paths were not being executed any more, but due to the state of the final emission it was assumed everything was correct.

Due to the nature of the input, and the system, the existence of the final emission was not enough to prove that the system functioned correctly, and many edge cases were not working.

Therefore, without a major rewrite, how do we confirm that the system is following the correct path?

Well, one thing it did do well was log its internals constantly, which made it much easier for the service desk team to analyse issues and help customers.

But as a software engineer I thought, "Can we use those logs as part of the validation of the system in the automated tests?"

Turns out the answer is Yes! Dotnet is a very flexible framework, and you can pretty much override anything if you know the right magical incantation.

The first thing was to determine how the system is doing its extensive logging.

This turned out to be the log4net logging platform with a bespoke format, and some bespoke handlers added.

Now we know that, we "could" have our tests read the log files directly and parse them for the required items!

This was a great idea, but did not give the feedback we wanted in the unit tests. We would get a test failure, and then have to find the logs and determine why.

So instead, can we redirect those logs to the MSTest Console Output?

That way when we get a failure, we have all the information we need to diagnose the problem further (vital when running external devops agents!)

Turns out this is easy to do with log4net, we can use a ConsoleAppender.

So adding this to our log4net config, specifically in the unit tests, gives us the internals of the system redirected to the Console:

            <appender name="Console" type="log4net.Appender.ConsoleAppender">
                <layout type="log4net.Layout.PatternLayout">
                    <conversionPattern value="%date %-5level: %message%newline" />
                </layout>
            </appender>
            
          

That is useful for sure, we now have all the info in the same place in a failure of the test.

However can we make the tests more useful?

Can they analyse the output in the Console and identify the problems themselves?

Again, yes! Dotnet to the rescue!

We can override the Console output destination to something of our choosing. For us that was a StringBuilder:

            public static (TextWriter originalConsoleOut, StringBuilder consoleOutput) CaptureConsoleOut()
            {
                //Capture all the Console.Write output to a StringBuilder
                StringBuilder consoleOutput = new StringBuilder();
                TextWriter originalConsoleOut = Console.Out;
                Console.SetOut(new StringWriter(consoleOutput));
                consoleOutput.Clear();
                return (originalConsoleOut, consoleOutput);
            }
            
          

If we add this code to our TestInitialize method, then all tests can get access to the console output.
We do that by defining the TestContext property, which MSTest will automatically handle and populate.
It is then available in all parts of the test lifecycle. So in the initialise, we store the StringBuilder that will contain the Console output, together with the original TextWriter used as the Console output (so we can reinstate it later), as a tuple within the TestContext Properties dictionary.

            public TestContext TestContext { get; set; }

            [TestInitialize]
            public void TestInitialize()
            {
                TestContext.Properties["consoleOutInfo"] = CaptureConsoleOut();
            }
            
          

We are nearly there now. But because we have captured all the Console output, it is no longer displayed in the test runner, so at the end of the test we need to write the intercepted text back to the real console, which we do in the TestCleanup method:

            
            [TestCleanup]
            public void TestCleanup()
            {
                var consoleOutInfo = ((System.IO.TextWriter originalConsoleOut, System.Text.StringBuilder consoleOutput))TestContext.Properties["consoleOutInfo"];

                //Reset the Console output back to the original output stream
                Console.SetOut(consoleOutInfo.originalConsoleOut);

                //Push the captured data to the original output stream
                Console.WriteLine(consoleOutInfo.consoleOutput.ToString());
            }
            
          

So now in our tests, we can query the consoleOutput StringBuilder's contents at any time in the test, and Assert what the black box system is doing e.g.

            var consoleOutInfo = ((System.IO.TextWriter _, System.Text.StringBuilder consoleOutput))TestContext.Properties["consoleOutInfo"];

            Assert.IsTrue(consoleOutInfo.consoleOutput.ToString().Contains("COMPRESSING MESSAGE"), "Did not compress the message");
            
          

This was a game changer for this type of system, as the final emission matching the stored snapshot had been hiding the fact that other parts of the system were not being activated, because the internal evidence of that could not be captured from the black box.

We can now add additional tests that confirm the path through the code was as expected for each scenario by adding a few simple Asserts based on the logging.

An Afterthought

Some of you may be thinking that this is a very crude approach, and that implementing our own TextWriter would be a better choice, and you may be correct, but this was very quick to put in place, so KISS & YAGNI kicked in.
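For completeness, here is a rough sketch of what that alternative could look like. It is hypothetical, not what we shipped: a TextWriter that captures output and forwards it to the original console writer at the same time, which would remove the need to replay the text in TestCleanup.

            using System.IO;
            using System.Text;

            // Hypothetical alternative: capture and forward console output in one writer.
            public class TeeTextWriter : TextWriter
            {
                private readonly TextWriter _original;

                public TeeTextWriter(TextWriter original) => _original = original;

                public StringBuilder Captured { get; } = new StringBuilder();

                public override Encoding Encoding => _original.Encoding;

                public override void Write(char value)
                {
                    Captured.Append(value);   // keep a copy for the test to Assert against
                    _original.Write(value);   // still show it in the test runner output
                }
            }

You would then pass new TeeTextWriter(Console.Out) to Console.SetOut in TestInitialize and Assert against Captured.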

As often in software, "simple is best", and for the effort required to implement this, its benefit was massive.

In an ideal world, we would have a more observable system, but sometimes you just have to work with what you have.

If you got this far through my C# Advent Calendar 2023 article, then thanks a lot, I appreciate your time.

Come say Hi on Twitter

Happy holidays everyone

"A Cat in tinsel saying Happy holidays, dotnet family"


Filtering A Pcap VoIP Recording To A Specific Call In Wireshark

Open your Pcap file in Wireshark.

On the menu, choose Telephony->VoIP Calls.

Then find the call you want in the list, and select it. (Look for an INVITE 200 in the Comments section, and a From that has a phone number in it).

Once selected, press the Prepare Filter button.

This will change the main screen of Wireshark to have a filter like:
Wireshark Filter

Then on the main Wireshark window, select File->Export Specified Packets, and then on the following dialog, choose All packets & Displayed

Enter a filename and a location, and hit Save.

Then you can open that new file in Wireshark and it will only contain the packets for that single call.


Playing A Captured VoIP Recording Through Wireshark

If you have a Pcap file network capture (generated by Wireshark perhaps) containing SIP or any Voice Over IP (VoIP) call traffic, you can listen to the raw calls within Wireshark.

Open the pcap file in Wireshark.

On the menu, Choose Telephony->VoIP Calls
VoIP Call Menu Option In Wireshark

From the resulting window, choose a call leg to start listening to.

Press the “Play Streams” button and you will get the RTP player (how long it takes to load depends on how much is in the recording). Then press the play button.

Wireshark RTP Player UI

You should hear the audio of the calls.


Replaying Network Captures

When building any network related software, sometimes you need to capture the network traffic to debug an issue. The tool of choice tends to be Wireshark, which is truly excellent and I have used it for many years debugging TELNET, HTTP, SIP, UDP & RTP traffic.

However once you have a network capture and have determined the solution to your problem, you then want to prove the problem is solved, and the best way to do that is to generate the network traffic again. But what if the system outputting the traffic is a closed system miles away? The best way then is to replay the network capture.

Wireshark itself does not support replaying captures, but there are other tools out there. One of these is a piece of software called Colasoft Packet Player, which you can get from here:

https://www.colasoft.com/packet_player/

It is a very simple tool:

Colasoft Packet Player UI

Once started, you select which network adapter to send the traffic over and the network capture file to replay.

Then Click 'Play' and the packets will then be sent over your chosen network card!

NOTE: The packets are sent in real time which means that if you have a 12 hour capture, it will take 12 hours to send all the packets out. There are play speed options to adjust this if your software will cope with that.


Explicitly Including A DLL In A Project

Have you ever seen this kind of error on running your code:

            System.IO.FileNotFoundException: Could not load file or assembly 'XXXXXXXXXXXXXXX, Version=X.Y.Z.0, Culture=neutral, PublicKeyToken=yyyyyyyy'. The system cannot find the file specified.
            File name: 'XXXXXXXXXXXXXXX, Version=X.Y.Z.0, Culture=neutral, PublicKeyToken=yyyyyyyy'
            
          

This is easily fixed, by adding the missing package/DLL into your project, and rebuilding.
However what happens when someone does a "remove all unused references" on that project?

Well the DLL reference gets removed, which gives you back the error again, and is only detectable by running your app, so unless you have tests that invoke this behaviour, then you have a sneaky bug.

Note - Most likely you need an Integration test to find this kind of error, as it is a problem with what DLLs are available at runtime for your app, and unit tests may include that DLL in some other way.

To avoid this happening to you in the future: when you include the package/DLL in your project, you need to explicitly reference the DLL in your code.

To make this clear for future developers, I prefer to add a specific class into my code that is responsible for this. It looks something like this:

            
            internal static class MissingDllHack
            {
                private static <Type within the assembly> youNeedMe;
            }
            
          

An example that forces System.ValueTuple to be included in your project output folder:

            
            /// <summary>The missing dll hack.</summary>
            [UsedImplicitly]
            internal static class MissingDllHack
            {
                /// <summary> Must reference a type in System.ValueTuple so that this dll will be
                ///   included in the output folder of referencing projects without requiring a direct
                ///   dependency on it. See http://stackoverflow.com/a/22315164/1141360.</summary>
#pragma warning disable 169
                private static ValueTuple youNeedMe;
#pragma warning restore 169
            }
            
          

Now the DLL is a compile-time problem, so if it is not included, you won't be able to compile your code.


Hosting A Large Number Of Participants In Microsoft Teams

Something I learned about today, was the different ways to host people in MS Teams.

1 on 1 calls are great, but sometimes you want to talk to a large group. Normally I reach straight for a "Meeting" in Teams when I want to talk to a bigger number of people.

But if you are talking to a whole department or large team, this can be very distracting as Microphones and Cameras come on and off.

Teams has another way of dealing with this scenario, and it is called a "Town Hall".

This is a great way to present to a large number of people, as well as getting feedback in the form of a Q&A session. Think of it as a Live Stream like on YouTube or Twitch.

Take a look and see if it could be good for you: https://support.microsoft.com/en-us/office/get-started-with-town-hall-in-microsoft-teams-33baf0c6-0283-4c15-9617-3013e8d4804f


Dynamically Changing Docker Container Limits

We have limits set on our docker containers to limit things like CPU usage, and memory usage.

These are defined in docker compose configuration YAML files, and are set at runtime.

For instance, this setting sets the CPU to only be 95% of the available CPUs, and limits the memory to 5000MB:

Docker Compose picture showing the deploy resources options

This can be changed dynamically without the container being restarted using the docker update command:

https://docs.docker.com/engine/reference/commandline/container_update/

So if we run

docker update -m 700M test-messagebroker-1

This will change the memory limit from 5000M to 700M without the container being restarted.

Note this is a temporary change until the container is restarted, as the compose file will always take precedence on startup.


Docker Compose Parallel Gotcha

Time for another Gotcha with Docker.

This time, we kept seeing calls in Azure DevOps to docker compose up that would never complete. It would start some of the services (We have about 40), and then it would just hang.

We have recently upgraded to Docker Compose V2 due to V1 being retired.

We did not see this issue on our local machines, only on DevOps.

The only thing that was different (apart from docker-compose changing to docker compose in our scripts) was that during a local build (docker compose build) of our containers, we would swamp our PCs and make them unusable during the build process. That led us to add this directive to our environments:

COMPOSE_PARALLEL_LIMIT=2

This solved the issue, and we thought nothing of it. Several weeks later, this issue occurs.

Lots of researching of this docker compose hang led to this issue about lacking "entropy" on a build agent:

https://stackoverflow.com/questions/59941911/docker-compose-up-hangs-forever-how-to-debug

https://akliang.medium.com/docker-compose-and-entropy-5b3f00561b88

None of the suggestions there helped.

Then I noticed that if we run a docker compose up with a specific container, it never hangs.

This led me to this GitHub issue:

https://github.com/docker/compose/issues/7486

And again the COMPOSE_PARALLEL_LIMIT being a factor.

DAMMIT!!

So we can see that having it too low stops compose from being able to start (note that this is not consistent; sometimes it works fine 5 or 6 times in a row).

So we set it to 200 in our env file:

COMPOSE_PARALLEL_LIMIT=200

and now it all works fine!

I will add an additional .env file with an override for a local build to help the developers, but for now devops is working.


Intermittent Playwright Exception

I have been getting this error intermittently in our Azure Devops based Playwright tests using a headless Chrome browser:

            Microsoft.Playwright.PlaywrightException: net::ERR_NETWORK_CHANGED
            Stack Trace:
               at Microsoft.Playwright.Transport.Connection.InnerSendMessageToServerAsync[T](String guid, String method, Dictionary`2 dictionary) in /_/src/Playwright/Transport/Connection.cs:line 165
               at Microsoft.Playwright.Transport.Connection.WrapApiCallAsync[T](Func`1 action, Boolean isInternal) in /_/src/Playwright/Transport/Connection.cs:line 471
               at Microsoft.Playwright.Core.Page.ReloadAsync(PageReloadOptions options) in /_/src/Playwright/Core/Page.cs:line 684
            
          

We have been unable to figure out the problem, but after another round of googling, I came across this stackoverflow answer:

"The underlying issue one of my docker-compose setups having restart=always. Unfortunatly I had a bug causing a container to terminate and restart. This restart caused the network change. It is determinable by running docker ps and noticing the container restarted."

https://stackoverflow.com/questions/44678168/docker-and-chromium-neterr-network-changed/72292093#72292093

The problem is:

  • The UI that is being tested is running in Docker!
  • The systems services are run in Docker via a compose file!
  • This is a test environment!
  • Some of the containers in our deployment are not fully configured!
  • They are constantly restarting!

This is what is causing Docker to reset the network, and thus this error we are seeing.

An easy fix was to kill the bouncing Docker containers as they are not needed.

Problem solved (well, it's not consistent enough to know for sure, but Tweet me in a month and I will be able to confirm or not).


Its Time To Talk About Lego

I am a big fan of Lego, which is a fairly new thing for me; I didn't have any as a kid.

However as an adult with disposable income, I get lots of pleasure from Lego. It is a great stress and anxiety reliever for me.

My "Want to Buy list" at the moment contains:

https://www.lego.com/en-gb/product/motorised-lighthouse-21335

https://www.lego.com/en-gb/product/atari-2600-10306

https://www.lego.com/en-gb/product/the-friends-apartments-10292

My top 3 favourite sets that I actually purchased in 2022 were:

https://www.lego.com/en-gb/product/nintendo-entertainment-system-71374

https://www.lego.com/en-gb/product/ghostbusters-ecto-1-10274

https://www.lego.com/en-gb/product/optimus-prime-10302

These are a couple of the less expensive sets (under £100), that I think are cool:

https://www.lego.com/en-gb/product/horizon-forbidden-west-tallneck-76989

https://www.lego.com/en-us/product/material-handler-42144

You might find some interesting deals here:

https://www.brickwatch.net/en-GB/legodeals/1/best-deals

I can talk about lego for ages!

Last one (I promise). This is beautiful (and on my wall):

https://www.lego.com/en-gb/product/vincent-van-gogh-the-starry-night-21333


Docker Compose Scaling Gotcha

Today I ran this command:

            docker-compose up -d --scale service1=10
            
          

Then ran this command:

            docker-compose up -d --scale anotherservice2=10
            
          

What do you think happens?

Well not what I was expecting.

I was expecting service1 to have 10 scaled versions, and anotherservice2 to have 10 scaled versions.

What I got was service1 being scaled back down to the default in the compose file (in this case 5), and anotherservice2 being brought up to 10.

Why does this happen? Would this have worked?

            docker-compose up -d --scale anotherservice2=10 anotherservice2
            
          

Turns out YES!

So what is going on?

Let's break the command down...

  • docker-compose up -d says bring the containers in the compose file up and detach from them.
  • --scale anotherservice2=10 says we want to ignore the scale options in the compose file for anotherservice2 and have 10 of them instead.
  • anotherservice2 says only apply the previous options to anotherservice2.

I can understand why, now I look at the command, but I would have preferred an indication that it would take service1 replicas down too.

Oh well another reset of the number of days since I took out PRODUCTION!


Three Years Have Gone

D,

3 years have gone by, and it feels like yesterday. So much has happened, yet so little too.

Families have waned, friendships needed rekindling, and through it all there has been that feeling of emptiness and loss.

3 years without a part of me, 3 years without my whole heart, 3 years without my brother, 3 years trying to find my way back to being "me".

Every day, you have been there, watching me, reminding me. Every day you have been so far away. Every day you have been so utterly missed.

I ask myself, why did this happen, but I realise, there is no rhyme or reason; only "life". You lived your life with joy and abandon, with fun and laughter, with a sense of giving and openness and you loved freely too.

The one thing that will always make me sad is watching our kids have so much fun together and you not being here to share.

My heart still hurts you are not here, and the grief does not lessen, but I have grown around it, tamed it and smothered its screaming noise with the love and memories of us.

Love, M.


Testing In Dotnet

Hello, and welcome to my blog post for December 15th of the .NET Advent Calendar 2022.

Organised by Dustin Morris Gorski on Twitter.

I won't tell you about me, but if you are interested, check out my About me page.


Testing in Dotnet

My view is that we don't talk enough about Testing in the Software Development world!

Meme showing Brad Pitt in Fight Club asking what the first rule of testing in dotnet is

I have been working in the industry for a long time, and have seen the benefit of testing, not just in creating stability in the software, but in speed of development, confidence of the engineers, confidence of the stakeholders, less downtime, less out of hours support required; the list goes on.

So I thought it would be a good idea to talk about how "I" do Testing with Dotnet.

I am talking about automated testing here in the general sense. I am not going to get into a flame war about the difference between unit tests, integration tests and E2E tests; it is nearly the holidays after all 😃

Bill & Ted saying be excellent to each other

I should say, that this is very much an opinion piece. However I hope you will find something useful within it, and it will give you the passion to start (or continue) on your testing journey in software development.

What tooling

I use Visual Studio 2022 Professional for dotnet development, running on a Windows machine, building with dotnet 6 (at time of writing).

But nothing I am about to say is specific to those things (unless stated). Everything I am about to tell you is platform and development environment agnostic; after all, dotnet is very much multi-platform these days, and every test I write for this blog or the company I currently work for runs on Linux but is developed on Windows.

What extensions

Within VS, I use the following extensions to help me write tests:

Fine Code Coverage

Created by Fortune Ngwenya

This extension (FCC) is really cool: it can independently calculate code coverage (or, with the latest version, use the inbuilt coverlet tooling in VS), generate a table of code coverage, and indicate in the code which paths you have missed.

Snapshot of the Fine Code Coverage output window

It can also highlight Risk Hotspots too, identifying complex areas of your code, that could be a focus for a refactoring effort.

I find this tool invaluable, so do check it out.

Please note, code coverage is a metric I find useful but not essential; it helps me discover tests I have not written, and in some cases, identify that a test is not following the path I would expect it to (useful when retrofitting tests to legacy code).

Create Unit Tests Command

(This used to be an extension, but since VS 2019, it is now built in)

This tool allows you to create unit test projects and blank test methods. I find it useful to quickly put the infrastructure in place to allow me to start testing.

screenshot of the Create Unit Tests dialog, showing all of its options to create a unit test and or project

However I do always end up updating the dotnet version & NuGet packages, as they are rarely current.

What it gives you then is a TestClass with a bunch of TestMethods, one intended to exercise each public method of the class you are creating unit tests for.

Unfortunately its usefulness is limited to that; I wish it did more, but "every little helps", to quote a supermarket.

What packages

This is a list of useful NuGet packages:

JustMock Lite

JustMock Lite is a great mocking tool; it gives you just enough functionality to be useful, but not so much that you stop noticing the design flaws in your code. See below on "What can it mean if it is hard to write a test?"

System.IO.Abstractions

Combined with Dependency Injection & JustMock Lite, this allows us to abstract away things in System.IO, such as the FileSystem. Really handy, so you don't have to read and write real data to the disk for your test.
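
As a rough illustration (the GreetingReader class, the file path and its contents are all made up for this sketch; it assumes the System.IO.Abstractions.TestingHelpers package), injecting IFileSystem lets the test run against an in-memory file system:

            using System.Collections.Generic;
            using System.IO.Abstractions;
            using System.IO.Abstractions.TestingHelpers;
            using Microsoft.VisualStudio.TestTools.UnitTesting;

            // Hypothetical class under test - it asks for IFileSystem rather than calling System.IO.File directly.
            public class GreetingReader
            {
                private readonly IFileSystem fileSystem;

                public GreetingReader(IFileSystem fileSystem) => this.fileSystem = fileSystem;

                public string ReadGreeting(string path) => fileSystem.File.ReadAllText(path);
            }

            [TestClass]
            public class GreetingReaderTests
            {
                [TestMethod]
                public void ReadGreeting_FileExists_ReturnsContents()
                {
                    // Arrange - an in-memory file system, so the test never touches the real disk.
                    var fileSystem = new MockFileSystem(new Dictionary<string, MockFileData>
                    {
                        { @"c:\data\greeting.txt", new MockFileData("Hello tests!") }
                    });
                    var reader = new GreetingReader(fileSystem);

                    // Act
                    var result = reader.ReadGreeting(@"c:\data\greeting.txt");

                    // Assert
                    Assert.AreEqual("Hello tests!", result);
                }
            }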

DeepEqual

Allows you to compare the contents of objects with ease.

ObjectDumper.Net

I use this for lots of things, but it is great to add to a test to really see what an object looks like. A good companion with DeepEqual.
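
As a tiny, made up illustration of the pair (assuming the DeepEqual.Syntax namespace from the DeepEqual package and the ObjectDumper.NET package):

            using System;
            using DeepEqual.Syntax; // from the DeepEqual package

            var expected = new { Name = "Mark", Scores = new[] { 1, 2, 3 } };
            var actual = new { Name = "Mark", Scores = new[] { 1, 2, 3 } };

            // Throws with a descriptive message if any property (or nested property) differs.
            actual.ShouldDeepEqual(expected);

            // Dump the whole object graph as text - handy for seeing exactly what you are asserting on.
            Console.WriteLine(ObjectDumper.Dump(actual));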

MockHttp

This is a new package for me, but BOY do I wish I knew about it sooner. It allows you to fake HTTP responses.
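
A minimal sketch of the idea, assuming the RichardSzalay.MockHttp package (the URL and JSON here are made up):

            using System.Net.Http;
            using RichardSzalay.MockHttp;

            var mockHttp = new MockHttpMessageHandler();

            // Any GET to this (made up) endpoint returns canned JSON - no real web server required.
            mockHttp.When("https://example.com/api/orders/*")
                    .Respond("application/json", "{ \"id\": 42, \"status\": \"Shipped\" }");

            HttpClient client = mockHttp.ToHttpClient();
            string json = await client.GetStringAsync("https://example.com/api/orders/42");
            // json now holds the canned response, so the code under test can be exercised offline.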

Benny from Lego Movie saying Awesome

How to start testing

I am going to talk about 2 scenarios: working with code that already exists, and code that does not.

For code that exists already, we will focus on adding tests to the existing structure, so we are focussed on testing methods in classes.

For code that does not exist, then I encourage you to look up the technique of Test Driven Development (TDD).

This means you will write the test before the code exists, and often that can mean you are writing tests at a much higher level. e.g. testing of a feature, rather than a class.

Testing code that already exists

I will create a new unit test project called projectname.Tests (using the tool above)

This project will sit next to the project with the code I want to test.

I will focus on testing the public interface of the class, and try to drive the code through the paths of the code I want to test (likely because I want to change it).

If I know enough about the code, I will write the test before any code changes.
This test will prove the existing functionality works as we currently see it.

Then I will write a test that will fail because the code I need to write/change does not exist yet.

Now I have 2 tests: one to prove I have not broken anything unintentionally, and one that will pass once I have made the change I need to make.

Note - This is a simplistic example, we may want to write many tests to prove it currently works, and many to prove it will work. Don't limit your testing unnecessarily. This is where FCC comes in handy, as it allows you to identify areas that are not currently exercised by tests, and tell us when we are not executing branches of our code.

As a side note, I often use tests to learn about what the existing code does - exploratory tests. These normally build up as I learn the code and its complexity, until I understand enough to write a test we can keep, or throw it away (but transfer that knowledge into another written form on the internal WIKI).

Testing code that does not exist

We still need to create a new unit test project, but this may be named after a feature, rather than a code project, especially if you are going to try TDD.

However you still need a good name and location for the test project.
Then I would create a blank TestClass with a TestMethod to write the test I need to.

Method name conventions

There are lots of ways to name your test methods; I subscribe to this one from Roy Osherove

This has the method name laid out as so:

UnitOfWork_StateUnderTest_ExpectedBehavior

e.g.
MyAmazingMethod_InputParametersAreAllBadValues_FalseResult()

MyAmazingNewFeature_HappyPathOne_CustomerCompletesTransaction()

Read the article for the details, I don't want to make this article even longer.

The thing to remember though, is to make it clear what the test is trying to do (it is always about readability in software dev!), and be consistent across your code base.

Think about what your CI platform shows you on a test failure: are you getting enough information from it to diagnose the issue? Quite often all you get is the method that failed, so make your future self's life easier by giving it a helpful name now.

A picture of a future city with Thanks past me written on it

What is a SUT?

SUT or System Under Test is the term often used to indicate the focus of the test.

I always write it in full, never sut.

We are always talking about making our code more readable, and then we go and use "sut" in our tests. Pfft

This is the thing we will "Arrange" for, then "Act" on, and ultimately "Assert" about, which leads nicely on to...

AAA

No this is not about Car breakdowns, or having had one too many Sherries (is that a thing? I have always been Tee-Total!).

It is about the test writing pattern "Arrange, Act & Assert"

  • Arrange inputs and targets.
    • Arrange steps should set up the test case.
    • Think, "Does the test require any dependencies setup?" such as an in-memory database?
    • Handle all of these operations at the start of the test.
  • Act on the target behavior.
    • Act steps should cover the main thing to be tested.
    • This could be calling a method, calling a REST API etc.
    • There is likely to be only one line of code in this section of the test.
  • Assert expected outcomes.
    • Assert steps verify the output of the Act stage.
    • Sometimes it is checking an exception threw, or the correct value returned.
    • Other times you may need to interrogate the System Under Test (SUT) to determine if it did what you expected

Read more about it here from the awesome people at Telerik JustMock (No sponsorship provided, but I can hope for a xmas present right?).
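
To make that concrete, here is a small made up MSTest example laid out in the three stages (the Basket class does not exist, it is just there to show the shape):

            using Microsoft.VisualStudio.TestTools.UnitTesting;

            [TestClass]
            public class BasketTests
            {
                [TestMethod]
                public void GetTotal_TwoItemsAdded_ReturnsSumOfPrices()
                {
                    // Arrange - set up the System Under Test and any inputs it needs.
                    var basket = new Basket();
                    basket.Add("Lego set", 49.99m);
                    basket.Add("Coffee", 2.50m);

                    // Act - the one thing we are testing.
                    decimal total = basket.GetTotal();

                    // Assert - prove the outcome is what we expected.
                    Assert.AreEqual(52.49m, total);
                }
            }

Note the method name also follows the UnitOfWork_StateUnderTest_ExpectedBehavior convention from earlier.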

How many asserts

I have been asked (and asked myself) how many "Asserts" there should be in a single test.

My preferred answer to this is: as many as needed to PROVE that your test has succeeded. This may be simply checking the return result of the method call, but it could be as complex as checking multiple tables in a DB have been modified, and all files have been removed, and a REST API was hit....

More often, the number of asserts you have indicates the level that your test is at:

  • One Assert ~= a single method call with a return result
  • Multiple Asserts ~= Testing a complex path

As always:
A picture of a dog in a suit with glowing eyes, saying "it depends!"

MSTest, NUnit, XUnit

There are different packages you can use to write and run tests. I have only ever used MSTest with dotnet.

The folks over at BrowserStack have written a good article on the differences: https://www.browserstack.com/guide/nunit-vs-xunit-vs-mstest

As you can see, they all do similar things, so I would say "Dealer's choice": use what you are comfortable with and what your company already uses.

If you have to start with nothing, then NUnit seems to be the most popular, and most examples of tests "on the line" tend to use NUnit.

Public vs private testing

I am a big proponent of public interface testing.
However, that assumes your interface is easy enough to use, and is not hiding a big bloated ball of mud.

If you have a ball of mud, I would suggest writing as many tests as you can to the interface that touches as much of the code as possible.

Then you can start refactoring the ball of mud into smaller balls of mud, using Dependency Injection to maintain the functionality.

By refactoring even small parts of the original code, it WILL make testing of the refactored code and the original code easier.

However if your big ball of mud is so big and mutant-like, then perhaps private testing is for you. This will require a different way to run your tests, probably as derived classes.
There are ways through Reflection but I do NOT recommend that approach.
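
If you really do go down that route, the derived class trick looks roughly like this (the class names are made up, and it only works for protected members, not truly private ones):

            using Microsoft.VisualStudio.TestTools.UnitTesting;

            public class LegacyCalculator
            {
                // The member we want to exercise is not public.
                protected int AddInternal(int a, int b) => a + b;
            }

            // A test-only subclass that exposes the protected member to the tests.
            public class LegacyCalculatorTestHarness : LegacyCalculator
            {
                public int AddInternalForTest(int a, int b) => AddInternal(a, b);
            }

            [TestClass]
            public class LegacyCalculatorTests
            {
                [TestMethod]
                public void AddInternal_TwoPositiveNumbers_ReturnsSum()
                {
                    var harness = new LegacyCalculatorTestHarness();

                    Assert.AreEqual(5, harness.AddInternalForTest(2, 3));
                }
            }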

Take the refactoring route, it will help you in the long run, and it is likely to be less painful.

InternalsVisibleTo

But wait, all my classes are internal. I cannot access them from a .Tests project.

Firstly - Great!

Secondly - You have 3 options:

  • Make your internal classes public (I don't recommend)
  • Move your tests into the main code project (I don't recommend)
  • Allow your test project to see the internals of the project using InternalsVisibleTo

This allows the internal methods in an Assembly to be visible to another Assembly. In this case your .Tests assembly.

You can do this by adding the following to your project file. This will allow a .Tests project of the same name as the assembly generated to be able to access the internal classes of your project.

            
              <ItemGroup>
                <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
                  <_Parameter1>$(AssemblyName).Tests</_Parameter1>
                </AssemblyAttribute>
              </ItemGroup>
            
          

Note - You can define this AssemblyAttribute multiple times if you want the internals to be visible to multiple projects!
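
For reference, the same attribute can also be declared directly in code (for example in an AssemblyInfo.cs file); the assembly name here is a placeholder:

            using System.Runtime.CompilerServices;

            // Equivalent to the project file snippet above - swap in your actual test assembly name.
            [assembly: InternalsVisibleTo("MyProject.Tests")]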

What can it mean if it is hard to write a test?

If you are struggling to write a unit test, normally this would indicate that your code is too complex.

It may need a lot of dependencies with significant setup, or a lot of parameters passed to a method to drive it to the line of code you want to execute.

This generally indicates you have a design issue, and it may be easier to do some refactoring to help with testing.

It is amazing how much easier testing can be when you pull a single piece of encapsulated functionality out into a separate entity.

Not only can the entity be tested in isolation, but you then don't need to worry about testing its fine details in the code that uses it.

Test Constructors

As you have seen in MSTest and NUnit etc, you can do Setup and Tear Down of testing data/environments in specially tagged methods.

You can also use a constructor of your test class.

I generally try and stay away from any setup/tear down methods. It becomes easy to muddy one test with another's setup.

Also, I like my tests to stand alone and be runnable in parallel, and this is much clearer if there is no shared code.

However they are very useful if used carefully.
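
As a rough illustration of the two options in MSTest (the WidgetService class is made up):

            using Microsoft.VisualStudio.TestTools.UnitTesting;

            [TestClass]
            public class WidgetServiceTests
            {
                private WidgetService service;

                // Option 1: a plain constructor - MSTest creates a new instance of the test class
                // for each test, so this runs before every test method.
                public WidgetServiceTests()
                {
                    service = new WidgetService();
                }

                // Option 2: the specially tagged setup/tear down methods.
                [TestInitialize]
                public void Setup() => service = new WidgetService();

                [TestCleanup]
                public void TearDown() => service = null;

                [TestMethod]
                public void DoWork_DefaultState_ReturnsTrue()
                {
                    Assert.IsTrue(service.DoWork());
                }
            }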

If you got this far through my first .NET Advent Calendar article, then thanks a lot, I appreciate your time.

Come say Hi on Twitter. If you are interested in learning more, then here are some great things to research. I particularly love Mutation testing.

Additional reading

Happy holidays everyone

"A Cat in tinsel saying Happy holidays, dotnet family"


Why Byte Arrays Are A Bad Idea When Dealing With Large Data

Hello, and welcome to my blog post for December 7th of the C# Advent Calendar 2022.

Organised by Matthew Groves & Calvin Allen.

I won't tell you about me, but if you are interested, check out my About me page.


Why Byte Arrays Are A Bad Idea When Dealing With Large Data

First off, I want to detail a win: The changes I am about to describe saved 5GB of RAM from a total of 6GB when dealing with a 350MB file.

Yes, you read that right! That is an 83% reduction in memory usage!

Second, let me put some context in place:

  • The software I am talking about is responsible for reading a proprietary file format holding raw audio data & a bunch of headers per "frame".
  • There can be hundreds of thousands of frames, dependent on the length of the audio stream recorded.

Now let's look at an example of a 150min (2.5 hour) recording:

  • The WAV formatted file on disk that generated this audio originally was 280MB.
  • The software needed 5GB (yes that is not a typo! It says 5 GIGABYTES!!) of memory to process the proprietary file.

Baby meme, saying "Wait, What?"

The difference in file size between the WAV and the proprietary files is not significant, say 10%, so why are we using 18 times the amount of memory to process a file?

Investigation

We deal with Mono and Stereo audio differently (we support additional channels too, but lets keep it simple), and "up-sample"(?) the Mono audio to Stereo, so that explains some of the difference.

So we can consider the original file size as a 600MB file to accommodate the up-sample.

Right, so now we are only using 8 times more memory than the file on the disk!

"Baby meme, saying "Small wins"

Ok, Ok, I hear you all saying But what has any of this got to do with the topic of the article?

Well for those who have not guessed, the data was being read into memory and manipulated there, using... Byte Arrays!

"But why does that use so much RAM: the file is 600MB, the Byte Array should be the same size" --- Correct, it is!

"Confused Unga Bunga meme"

Don't run away, I will explain...

We use MemoryStream to walk around the Byte Array, to make it quick to access data within the array.

After much analysis of the code, it was determined that multiple MemoryStreams were created to walk around each channel (Mono has 1 channel, Stereo has 2) of audio data.

By default a MemoryStream will create a Byte Array as its back-end storage, which is OK, as it will grow to fit the data within it (caveat about this later).

            int sizeOfLeftChannel = 5000;

            List<MemoryStream> channelContainer = new List<MemoryStream>();
            MemoryStream leftChannel = new MemoryStream(sizeOfLeftChannel);
            channelContainer.Add(leftChannel);
            
          

So what do you think happens when you go:

            MemoryStream leftChannel = new MemoryStream(sizeOfLeftChannel);
            
          

Correct! It creates a Byte Array in the background of the size sizeOfLeftChannel.

Now if I do:

            var leftBytes = channelContainer[0].ToArray();
            
          

What I now have is 2 Byte Arrays (the ToArray() created a new Byte Array), both 5,000 bytes (sizeOfLeftChannel) in size, thus doubling my memory allocation.

We were doing this in a lot of places: to focus on channels of data, and to walk through them quickly.

So now we know where some of our memory is going, and we can limit this with some code rework; however, it does not explain all the memory allocation.
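
As one small example of the kind of rework (not our actual code): if you own the MemoryStream, you can often avoid the ToArray() copy entirely by asking for the existing backing buffer instead:

            using System;
            using System.IO;

            var leftChannel = new MemoryStream(5000);
            // ... audio data gets written into leftChannel here ...

            // ToArray() allocates and copies a brand new Byte Array - doubling the memory for this channel.
            byte[] copy = leftChannel.ToArray();

            // TryGetBuffer() hands back the existing backing store instead, with no extra allocation.
            if (leftChannel.TryGetBuffer(out ArraySegment<byte> segment))
            {
                // segment wraps the stream's original buffer - no second copy of the audio data exists.
                Console.WriteLine($"Got {segment.Count} bytes without copying.");
            }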

Let's continue the investigation...

Turns out, we are using some third party components (I am not allowed to tell you which) to do specific audio encoding, and they would pull the file into RAM (exactly as we did), convert to a MemoryStream, and then pull it out to a Byte Array to feed to the encoders.

These components were vital to the process, and cannot be changed, so lets check the documentation (as we always should) for any clues.

Turns out those encoders had alternative choices to encode (no clues yet what they are, all will be revealed soon!), thus reducing/removing the memory allocation.

"Mr Bean looking very bored meme"

Solution time

I have talked about what the problem is:

  • Reading large amounts of data and manipulating it in memory.
    • Which, in hindsight, makes it very obvious why we were consuming huge amounts of RAM!

But how can we resolve it?

We are reading data out and putting it into MemoryStream to consume it further.

Hang on a minute, there are other inbuilt Streams in dotnet, can we use one of those?

Why yes, of course: FileStream!

"Sonic the hedgehog to the rescue with a filestream"

We are reading from a file to process in a stream in memory, so why don't we just switch to reading the file directly instead? All our memory problems will be solved.

Yes, that is true, reworking the code to deal with FileStream instead of MemoryStream reduced our memory consumption dramatically!

We were no longer pulling everything into memory and then processing it; we could read in what we needed, process it, and write it out to another stream with limited memory usage.

But that is going to be SLOW! I hear you shout.

Yes it was, we went from a 20 second conversion time to 3 minutes. OUCH!

"Gru saying not cool"

Solution number 2

Now the code has been setup to use Stream everywhere, we can at least look at alternative implementations.

What options do we have in dotnet:

  1. MemoryStream
  2. FileStream
  3. BufferedStream

We have already ruled out MemoryStream and now FileStream. BufferedStream does not make sense in our use case - What we want is something that allows us to manipulate data in memory, but have all the data still on disk.

Right, time for more research - "To Google!", and let's try searching: "c# allow us to manipulate data in memory, but have all the data still on disk"

We get these 2 interesting results:

"Google search results with Memory Mapped files highlighted"

I have not heard of Memory-Mapped Files before, but they sound promising.

The way I interpret the documentation is that I can read part of a file into memory, manipulate it, and have it persisted. This sounds perfect, the best of both worlds.

A benefit, because we are dealing with audio, is that we can manipulate it in chunks, something like this (not real code):

            //Read a reasonable chunk of the WAV file http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html
            const int encodeChunkSize = 1024;
            for (int i = 0; i < inputStream.Length - encodeChunkSize; i += encodeChunkSize)
            {
                int bytesRead = inputStream.Read(waveFileFrames, 0, encodeChunkSize);
                int bytesEncoded = encoder.EncodeBuffer(waveFileFrames, 0, encodeChunkSize, encodedFrames);
                outputStream.Write(encodedFrames, 0, bytesEncoded);
            }
            
          

This allows us to read in the file a bit at a time, encode it, and write it to a new file a bit at a time too. Memory-Mapped Files are definitely for us!

Due to the confidential nature of our code, I cannot share how the code actually works, but here are some snippets (not compilable code) which will help you figure out what you could do:

            //Creates a new temporary file with write, non inheritable permissions which is deleted on close.
            new FileStream(Path.Combine(Path.GetTempPath(), Path.GetRandomFileName()), FileMode.CreateNew, FileAccess.ReadWrite, FileShare.None, 4096, FileOptions.DeleteOnClose);

            //Open a memory mapped file stream from an existing file
            MemoryMappedFile.CreateFromFile(fileStream, null, 0, MemoryMappedFileAccess.ReadWrite, HandleInheritability.None, true);

            //Create a Stream for a portion of the memory mapped file
            Stream leftHandChannelPartialViewOfFileStream = leftChannelMemoryMappedFile.CreateViewStream(startIndexForViewStreamChannel, offsetLength, MemoryMappedFileAccess.Read);
            
          

Conclusion

So lets finally answer the question - "Why are Byte Arrays A Bad Idea When Dealing With Large Data?"

Simply put - They use a lot of memory!

That is not necessarily a bad thing, "IF" you can ensure your data is always small enough, and you don't copy it about endlessly.

However, there are other ways when dealing with large amounts of data, and that is to use some kind of "Stream", be it a FileStream or a MemoryMappedFile.

FileStreams are great, but can be very slow, so a Memory-Mapped file is a great alternative: We get access to a file directly in Memory, but as with all things, there are trade offs and gotchas (I will leave it to you Dear Reader to find out more, or come and chat on Twitter).

Finally, I mentioned earlier a caveat with a MemoryStream - these are backed by Byte Arrays and are therefore limited in size to 2GB (in a 32-bit process). Our process is currently, due to third party dependencies, limited to 32-bit.

.NET Framework only: By default, the maximum size of an Array is 2 gigabytes (GB). In a 64-bit environment, you can avoid the size restriction by setting the enabled attribute of the gcAllowVeryLargeObjects configuration element to true in the run-time environment.

https://learn.microsoft.com/en-us/dotnet/api/system.array?redirectedfrom=MSDN&view=netcore-3.1#remarks

If you got this far through my first C# Advent Calendar article, then thanks a lot, I appreciate your time.

Come say Hi on Twitter

Happy holidays everyone

"A Cat in tinsel saying Happy holidays, dotnet family"


Azure Static Web App Invalid SSL Certificate

Today I found out my website was not working.
I had an invalid SSL certificate; it had expired 4 days ago.
Side note - I should set up a GitHub Action to check this and warn me if it is offline.

Checking the SSL cert showed me it had expired.
Now a static web app in Azure has the ability to have custom domains with free SSL certificates.

This has worked perfectly for months, and I have not changed anything - so why has it expired?

I found in Azure, that this was showing against my custom domain:

Custom domain not resolving to my website

As a quick fix, removing the custom domain and adding it again solved my problem, but that will only work for another 6 months.

My DNS provider is CloudFlare, and I use a CNAME record within that to allow the custom domain to work.

So in Cloudflare I turned everything off, and the problem still persisted.

Which made me think to check the random url that Azure creates, and lo and behold, that works.
That website has a valid SSL cert!

So it is something wrong with my custom domain.

A bit of searching on t'internet, and I came across these posts on GitHub:

https://github.com/Azure/static-web-apps/issues/923
https://github.com/Azure/static-web-apps/issues/888#issuecomment-1320162612

This made me reconsider whether I had set it up correctly in Azure.

At the top of the custom domain blade in Azure it gives documentation on how to configure the custom domain with your DNS host:

https://learn.microsoft.com/en-us/azure/static-web-apps/custom-domain?wt.mc_id=azurestaticwebapps_inline_inproduct_general

which then led me here:

https://learn.microsoft.com/en-us/azure/static-web-apps/apex-domain-external

Oh, I have not set up an Apex domain registration at Cloudflare.
Adding in the "root domain", e.g. "markoliver.website", and now Azure has verified my custom domain.

A validated Azure domain

Thanks also to @Stacy_Cash & @nthony_chu on Twitter for trying to help out:

Fingers crossed in 6 months I am not back here reading my own article 😃

What did I learn (again) - RTFM - READ THE **** MANUALS

Banging my head against the wall for not learning


Reading UDP Packets With SharpPcap

If you are intending to read UDP packets in promiscuous mode using SharpPcap, then you need to consider the amount of traffic on the network you are going to intercept.

If you are sweeping up packets on a very busy network (think VoIP network), then you will need to think about dropped packets.
UDP does not guarantee receiving the packet, so if a NIC is overloaded, then it will simply discard the UDP packets.

On top of that, when processing packets using SharpPcap, you need to read the packets and process them as fast as they are being received, or again you will drop packets.

SharpPcap can help us here: we can reduce the number of dropped packets by using its buffering mechanisms.

Instead of something like:

            var deviceConfig = new DeviceConfiguration
            {
                Mode = DeviceModes.Promiscuous,
                ReadTimeout = DeviceReadTimeout
            };

            device.Open(deviceConfig);
            
          

We can instead use something like:

            const int buffersize = 10 * 1024 * 1024; //10MB

            var deviceConfig = new DeviceConfiguration
            {
                BufferSize = buffersize, //This means we favour keeping the packets over dropping them, by using more RAM
                KernelBufferSize = buffersize,
                Mode = DeviceModes.Promiscuous,
                ReadTimeout = DeviceReadTimeout
            };

            device.Open(deviceConfig);
            
          

This will tell SharpPcap to use 10MB of RAM to buffer incoming data before it drops packets.
This will give you more time to process the large number of packets before packet loss.
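
Buffering buys you time, but the capture callback itself still needs to stay cheap. A rough sketch of that idea (my own, not from the SharpPcap docs - ProcessUdpPacket is a placeholder, device/deviceConfig are the objects from the snippet above, and you should check the SharpPcap samples for the exact event signature on your version):

            using System.Collections.Concurrent;
            using System.Threading;
            using System.Threading.Tasks;
            using SharpPcap;

            var packetQueue = new ConcurrentQueue<RawCapture>();

            // Keep the capture handler tiny: copy the packet data onto a queue and return immediately.
            void OnPacketArrival(object sender, PacketCapture capture)
            {
                packetQueue.Enqueue(capture.GetPacket());
            }

            device.OnPacketArrival += OnPacketArrival;
            device.Open(deviceConfig);
            device.StartCapture();

            // Do the real (slower) processing on a background task, away from the capture callback.
            _ = Task.Run(() =>
            {
                while (true)
                {
                    if (packetQueue.TryDequeue(out RawCapture packet))
                    {
                        ProcessUdpPacket(packet); // placeholder - your UDP parsing/handling goes here
                    }
                    else
                    {
                        Thread.Sleep(1); // nothing queued - avoid spinning the CPU
                    }
                }
            });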

This however is not a panacea to slow code, and you should consider why you are not processing the packets fast enough in the first place.


How To Confirm 2 Audio Files Are Identical

The tool Audacity has the capability of telling you if 2 audio files are the same.

Here is the process:

  • Open one of the files.
  • Import the other (via the File menu)
  • Select one of the tracks, by clicking in the box area on the left of one of the tracks (where you can Mute etc)
  • Choose Effect->Invert from the Menu bar
  • Select both tracks (Ctrl-A), and select Tracks->Mix->Mix and Render to New Track from the Menu bar.

You will be shown a third track which should have no waveform on it at all if the tracks are identical.

If you playback the tracks, you should also hear nothing if they are identical. This is because the inverted waveform cancels out the other waveform.

This is very handy when trying to confirm a before and after conversion is bit accurate.


Microsoft Teams Compliance Recording _ Part 2

This is the second post in a series on Microsoft Teams Compliance Recording.

In the first post, I gave an overview of what Compliance Recording with MSTeams entails. With this post, I will talk a bit about the process of building a compliance policy bot.

The steps your bot needs to take are:

  • Connect to your teams tenant
  • Initialise the graph api
  • Set up your certificate
  • Set up the webhooks for Teams to contact us with call notifications and call updates
  • Set up the data socket tunnel for the audio and video to be sent
  • Initialise the media platform
  • Listen on the HTTP endpoints supplied to MSTeams

If everything is configured correctly, you will be able to receive audio and video.

So if we assume that your policy bot can receive call notifications, and call updates, then the next stages are:

  • Pump the audio event queue quickly
  • Pump the video event queue quickly

The importance of the "quickly" part above should not be overlooked.
The audio events are received at 50 events per second. If the events are not processed quickly enough, the MS media platform becomes unhealthy, and starts dropping audio packets.

The recommendation is that the audio packet event handler does very little. It should take the packets, and complete the event as soon as possible. Even writing the data to disk is too slow.
The best thing to do is to write the data packets to a memory based queue, that is then serviced in the background outside of the event processing.

The same is true for video, but in this case, we have 30 packets a second of video data. However the data is much larger than the audio packets, so although we receive fewer of them, we have a bigger overhead of processing them.

My preference for this work is to create an async background file writer (for the audio packets), which stores the packets in memory on a concurrent queue. Then a background thread runs and pushes the audio packets with their timestamps into a file.
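
A sketch of the general shape of that writer (nothing here comes from the Microsoft SDKs - AudioPacket is a stand-in for whatever your event handler hands you):

            using System.Collections.Concurrent;
            using System.IO;
            using System.Threading;
            using System.Threading.Tasks;

            // Stand-in for the real media packet: a capture timestamp plus the raw audio bytes.
            public record AudioPacket(long Timestamp, byte[] Data);

            public class BackgroundAudioWriter
            {
                private readonly ConcurrentQueue<AudioPacket> queue = new();
                private readonly Task writerTask;

                public BackgroundAudioWriter(string filePath, CancellationToken token)
                {
                    // One long-running task drains the queue and does the slow disk writes.
                    writerTask = Task.Run(() =>
                    {
                        using var writer = new BinaryWriter(File.Open(filePath, FileMode.Create));
                        while (!token.IsCancellationRequested || !queue.IsEmpty)
                        {
                            if (queue.TryDequeue(out var packet))
                            {
                                writer.Write(packet.Timestamp);
                                writer.Write(packet.Data.Length);
                                writer.Write(packet.Data);
                            }
                            else
                            {
                                Thread.Sleep(1);
                            }
                        }
                    });
                }

                // Called from the media event handler: just enqueue and return, never touch the disk here.
                public void Enqueue(AudioPacket packet) => queue.Enqueue(packet);
            }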

Now for audio, reading the data back out to be played is not trivial, but is simpler than video (see future posts). We receive a constant stream of audio with no gaps (unless it is muted), therefore we can just play back the audio packets 1 at a time sequentially at the correct speed of recording.

We should however be using the timestamp that was supplied with each packet to ensure the audio is spaced out with the correct silence and synchronised with any other recorded elements (like the video).


Microsoft Teams Compliance Recording

This is the first post in a series on Microsoft Teams Compliance Recording.

Microsoft Teams has the capability to record all of its calls, both the audio and video.

Anyone can do it by using the record button when in a call, however there is also the capability to record all calls all the time. This is normally used in a business capacity and is generally called Compliance Recording.

Compliance recording is normally needed for companies like Financial Institutions that need to ensure that all communication with a customer is available for auditing. This includes phone calls and video calls.

Compliance Recording through MSTeams however is a complicated process.

Communication between Microsoft Teams and the Recorder Bot is achieved through an Azure "Bot Service Channel". The Microsoft Teams cloud service connects to one side of the channel, and the recorder bot connects to the other side. This allows the recorder bot to receive audio and metadata for calls made by Microsoft Teams users.

The Bot service channel needs to be given the required permissions, and configured with a way to authorise the bot for the recording Teams Tenant.

The Bot service channel needs to be given permissions in the Teams Tenant that is to be recorded.

The Teams Tenant will then need to Enable Policy Recording for the Bot service channel.

The users within the Teams Tenant will need to have the Recording Policy enabled. This can be all users, or a selection.

Most of this is documented on Microsoft Docs: https://learn.microsoft.com/en-us/microsoftteams/teams-recording-policy

So that is the easy bit done, now for the hard bit. The recording software.

Firstly, a warning: to be compliant with Microsoft licensing this must be hosted in Azure on a VM of some kind: https://learn.microsoft.com/en-us/microsoftteams/platform/bots/calls-and-meetings/requirements-considerations-application-hosted-media-bots#c-or-net-and-windows-server-for-development

To build this software, Microsoft have provided some helpful SDKs:

  • Microsoft.Graph.Communications.Calls.Media - This allows you to talk with the MS Graph API, which is where all the communication with Teams occurs for the calls to be recorded.
  • Microsoft.Skype.Bots.Media - This is the media SDK; it allows the audio and video passed from Teams to be decoded into readable and usable data, namely PCM audio data and H264 video data (there are other choices).

Note - These SDKs at time of writing only support the .NET Framework.

Note - Some of these SDKs are "preview" only, which means they are not officially available or supported.

There are also examples of how to integrate these SDKs into your code, dependent on the recording concepts you require:
https://github.com/microsoftgraph/microsoft-graph-comms-samples

Be aware, these are purely available to show the general idea on how to code a compliance recorder. There are MANY challenges to overcome, that these examples do not deal with.

I will blog more on this subject, it is an in-depth one. Of course, there are companies out there that will do most of this work for you: https://cloudpartners.transform.microsoft.com/compliance-recording-solutions


February 2022 Lego Builds

My lego obsession is in full swing now, so I thought I would tell you what I have been building this month.

I have been coveting for months the 76193 - Lego The Guardians' Ship.

I have been told that it has an actual name (Benatar), which surprises me that Lego don't reference it as that

Anyway, I bought this set for several reasons; It is huge (physically in dimensions, and in number of pieces!), It is beautiful, but most of all cos I love the marvel cinematic universe!

I realise that I am fortunate enough to be able to splurge on something like this every now and then, and not everyone is (I knew loving computers would pay off one day). This is not a review of this product, I just wanted to share a passion of mine.

Anyway, what is it like? Well it's nearly 2000 pieces, so it's the biggest set I have ever made.
It is very orange. It has 6 cool guardians minifigs. It has its own lego built stand, and again, it's very cool.

In the box there are 14 bags, and a novel of an instruction book.

I tried to restrict myself to only one bag a day, and eke out the build. It is very expensive, so I want to savour it for as long as possible.

When I started this article, I had only built 4 bags, which was the bottom of the ship, half a wing, Starlord, a Chitari Warrior and Mantis minifigs.

https://twitter.com/MicbOliver/status/1492922518558101509

The way the wing was built simply astounded me, such clever engineering in Lego form, some true engineering.

The last few days of February were some holiday days for me, and I could not resist completing the build.
It is a beautiful product, it is very cleverly engineered, I love its display stand (also made from lego). I find myself staring at it quite a bit, in awe of how it is put together with its unusual angles and smoothness.

It has a real presence in the room.

It is not really a model to play with, the wing tips are a little fragile, but it is definitely a stunning display piece!

Other lego things I have been building

In the massive mixed bundle of Lego I have purchased over the last few years, I have found some half built models, and tried to reassemble them.

This time it was the turn of 4643- Power Boat Transporter.

I had found all the pieces except one key piece: the flatbed base. But I managed to build my own variation of it using standard bricks, which works like a charm. It is not quite as rigid, but has held up to play well enough.

I am also looking for the pieces of a Superman: Battle of Smallville set I think I have. I noticed some odd shapes and stickers, and with the help of rebrickable and brickset I have determined what set it belongs to!

Looking at the part list I thought I had everything but the minifigs, and barring one awkward piece to find, I think I have it all. So that will be a fun Superman spaceship set to build for Mr 6yo.

My son also built the Pop Up Party Bus set from Lego movie 2 this month. It was a great build, and it has such fun playable bits on it for a 6 year old. He loves it.

We flew it to different lego planets (star wars world, marvel world, sea world and friends ville) to have disco parties with each. It has been fun, and a great way to bring all the models we have built together.

We have also finished creating the tavern of his LEGO SET 31109-1 - Pirate Ship B-model. A beautiful build and inventive too, I enjoyed it more than him I think, I specifically love the water wheel.

Phew it's been a busy lego time recently, so a bit quieter for a while I imagine, but perhaps my next coveted set Ecto-1 could be a birthday present. Who knows.


Meeting Timer For Microsoft Teams Zoom Google Meet

As part of a distributed virtual team, I find that meetings are a good chance to form bonds with your colleagues. However this can easily mean that a meeting can spiral out of control and take a lot longer than anticipated. This can be ok, but can also be a huge time sink. I am not going to go into ways to stop this happening (that is for another post someday). What I am going to show you is how to make everyone aware of the time that is left within a scheduled meeting.

I use a virtual camera instead of a direct webcam feed so I can control what my colleagues can see (who wants to see the pants lying on my floor!). I wrote about My OBS Configuration Setup previously.

So what I want is a timer ticking down on my webcam stream that shows how long the meeting is expected to continue for. So a 30 minute meeting will countdown from 30mins to zero.
This was originally set up as a joke for my team, but we have found it useful for our standups, so it stays.

I am going to use OBS (Open Broadcaster Software) to achieve this, and reading lots of articles about this, I found that the functionality is available by default in OBS now, so no additional plugins are required.

Here is how to set it up in OBS:

Add a Text Source

We need to display the countdown clock on a Text Source within a Scene.
So choose your Scene, then add a new Text (GDI+) Source, call it MeetingTimer

Position it, resize it, change the font etc as you want.

Add the Countdown Script

On the top menu, choose Tools->Scripts

Note - Ignore the enticing "Output Timer" option, this is to automate streaming.

In the resulting Scripts window, choose the + in the bottom left hand side.

This opens up an explorer window to your file system. It is likely to default to the OBS Plugins directory e.g.

On Windows it is C:\Program Files\obs-studio\data\obs-plugins\frontend-tools\scripts

Choose the countdown.lua file.

This will then show it in the Loaded Scripts section on the Scripts window.

In the Duration (minutes) text box on the right, choose how long you want the timer to last. This is a global setting, so you will need to change it every time you want a different value.

Choose your MeetingTimer as the Text Source

Specify the text you want to have shown at the end of the countdown. For me, I have it set to Meeting has finished

Close the Scripts window.

Using the Timer

To start the timer, you must transition your Scene to the live Program view.

To reset the timer, you can transition away from the scene and back to it again.


Working With Ffmpeg

I have been working with audio and video files a fair bit recently in my day job. We do video and audio call recording software, so this has allowed me to play around with FFmpeg.

I have had to:

  • Create test videos in MP4, & H264
  • Create test audio files in WAV & MP3
  • Crop a video into smaller view sizes (e.g. remove monitor 1 and 2 from a 3 monitor viewed video)
  • Convert a H264 stream to MP4
  • Convert WAV to MP3
  • Resample video frame rates
  • Combine multiple videos into a single video to present a "zoom like" combined view.

The last one has been the hardest so far; it requires using the "filter_complex" option. So I will take you through how I generated it, this is the complete command:

ffmpeg -i vid1.h264 -i vid2.h264 ... -i vid100.h264 -filter_complex "hstack=8,format=yuv420p,scale=1024:-1" -c:v libx264 -crf 18 output.mp4

  • ffmpeg is the tool we are using. An awesome audio and video manipulation and generation tool.
  • -i <filename> Specify this as many times as you want.
  • -filter_complex The type of filter we are going to use, and uh oh, it's a complex one.
  • hstack=X (where X is the number of input files above)
  • ,format=yuv420p Defines the pixel format to use.
  • ,scale=1024 Defines the video's total width, with each of the sub videos taking up X amount of space as a percentage.
  • -c:v libx264 This is the AVC Encoder to use: https://ffmpeg.org/ffmpeg-codecs.html#libx264_002c-libx264rgb
  • -crf 18 The Constant Rate Factor (The range of the quantizer scale is 0-51: where 0 is lossless, 23 is default, and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless. = https://trac.ffmpeg.org/wiki/Encode/H.264 )
  • output.mp4 The filename to output the combined video as.

This was a really helpful stackoverflow post: https://stackoverflow.com/a/33764934/15722683

You will have noticed, that command only adds the videos in a single row, which is not what I wanted, so to solve this, in comes xstack:

Which forces you to define an x/y layout of your videos.
Optionally, you may need to resize your videos to match, to give an even view; this is what I came up with:

ffmpeg -i 1.h264 -i 2.h264 -i 3.h264 -i 4.h264 -i 5.h264 -i 6.h264 -i 7.h264 -i 8.h264 -i 9.h264 -i 10.h264 -i 11.h264 -i 12.h264 -i 13.h264 -i 14.h264 -i 15.h264 -i 16.h264 -filter_complex "[0:v]scale=iw/4:-1[v0];[1:v]scale=iw/4:-1[v1];[2:v]scale=iw/4:-1[v2];[3:v]scale=iw/4:-1[v3];[4:v]scale=iw/4:-1[v4];[5:v]scale=iw/4:-1[v5];[6:v]scale=iw/4:-1[v6];[7:v]scale=iw/4:-1[v7];[8:v]scale=iw/4:-1[v8];[9:v]scale=iw/4:-1[v9];[10:v]scale=iw/4:-1[v10];[11:v]scale=iw/4:-1[v11];[12:v]scale=iw/4:-1[v12];[13:v]scale=iw/4:-1[v13];[14:v]scale=iw/4:-1[v14];[15:v]scale=iw/4:-1[v15];[v0][v1][v2][v3][v4][v5][v6][v7][v8][v9][v10][v11][v12][v13][v14][v15]xstack=inputs=16:layout=0_0|w0_0|w0+w1_0|w0+w1+w2_0|0_h0|w4_h0|w4+w5_h0|w4+w5+w6_h0|0_h0+h4|w8_h0+h4|w8+w9_h0+h4|w8+w9+w10_h0+h4|0_h0+h4+h8|w12_h0+h4+h8|w12+w13_h0+h4+h8|w12+w13+w14_h0+h4+h8" output.mp4

I will figure out what that command breaks down to another day, but for now, it takes 16 different input files, resizes them all, and then stitches them together in a 4x4 grid.

Immediately I was asked to add in variable length files and pad them to start and finish at different times, which again can be achieved with the filter_complex command in FFmpeg - Did I say how awesome it is?

I'll write more about this soon, but for now, I am dealing with some memory reduction issues at work, so not focussed on this.

Enjoy your day, and thanks for reading.


Two Years Have Gone

A poem.

Two years, now.
Wow, it went so fast!
Everyday I think of you,
everyday I miss you.
But the grief is easier now.

I don't find myself uncontrollably sobbing,
or suddenly bursting into tears.
The sad times come for sure,
but I can feel them coming.
THEY are not controlling ME.

Reading of others grief experience has helped me.
Seeing other people describe grief has helped me.
Seeing your photos on the wall has helped me.

Lighting a candle in your memory has helped me.
Watching TV & Movies that we always used to love has helped me.
Seeing your amazing nephew flourish, knowing you would be cheering him on has helped me.
Seeing your beautiful daughter, playing, talking and having so much fun has helped me.

I don't want to stop grieving,
I hope I never do,
that feels that I would be leaving you by the wayside,
which I can never do,
BUT I am growing around that grief.

I love you D!

Reading things like this, have helped me: A Twitter thread by @claremackint0sh

Seeing things like this, have helped me: Growing around Grief from "WhatsYourGrief.com"


Summing Up 2021

2021 has been a mixed bag for me:

  • COVID
  • Home schooling
  • Continuing to deal with grief
  • Building my office
  • Starting a new Remote role.

But here we are, at the tail end of it, and I suppose it's time to reflect.
It's time to take stock, and reconsider the future.

I enjoyed home schooling my Son for 3 months at the start of the year, it gave me a chance to understand how he was being taught in school, and what educational challenges he faced. It was very enlightening, and a great way to connect to that new part of his life.
But BOY am I glad it did not last long, he is a smart kid, and some of the tasks set were difficult to keep him focussed with, I do not envy teachers. We made it through, and found new ways to teach and learn, and we as a family came out of that time better than when we went in.
He was a better writer, reader and mathematician, and I was a better Parent, Teacher and listener too.

Once the lockdown was over, back to school he went, and oh boy did he enjoy it, making some very firm friends.
This gave me time to focus on a task I have been wanting to do for the 10 years we have lived in this house, build myself a Man Cave! But wait, with working from home a real long term prospect, it now becomes an Office.
So I spent several months, researching, designing, planning, digging the foundations and building the floor for my Office. It is still an ongoing project, because after a few months, money started to dwindle & supplies became harder to get. So the Office got shelved for now.

So onwards to finding a new role, a Remote Only role. Talking with recruiters, and interviewing I realised what type of role I wanted, and focussed into that setup, eventually landing a role which ticked all the boxes. With an immediate start date, the Office was on hold indefinitely, as there was a lot to learn and focus on in a new team, in a new company, in a new product.

6 months later, the team is finally hitting its stride, we have changed a LOT of processes and procedures, and set some clear goals and focus for the software setup. It's been a very busy time with a complicated software environment. I hope we can gain some real momentum in the new year, and meet the needs of the ever growing list of customers.

As with everything, life goes on, and parenting never stops; my son is inquisitive and amazing, and also exhausting too. So despite it being a year since I lost my twin brother, I have struggled with his death, dealing with his estate, my emotions, and not being able to draw strength from my family due to COVID.

This time of my life has and continues to be hard. I am suffering with increased anxiety due to COVID risks, not seeing friends and family, not going to an office to work, and sometimes feeling very isolated. I have always been introverted and enjoyed my own company, so add on anxiety I have not been able to be out in the world, so the isolation is both a good and a bad thing to me, one which I hope to break out of when we can be better protected from COVID.

So this year has been full of strange times, lovely times and sad ones. I have loved watching my son grow up into a sassy, funny and funky 6 year old, my heart has swelled with pride at his achievements with schooling both academic and social. But through it all I have missed my brother.

We all have different things going on in our lives, sometimes they are not visible. Be kind to each other, even if it is just with a smile. See you all in 2022.


Sorting Lego

I have become a bit obsessed with Lego recently.
We could never afford it when I was a kid, but now I have a kid and a bit of spare cash, Lego has become a favourite in the household.
It helps also that Lego Masters (UK, Australia and USA) has become a bit more prevalent on the TV, and my son has picked up my interest in it.

I have bought over 30kgs of assorted lego over the last few years, and I am still sorting through it!

My son is old enough now to want to build sets too, so instead of buying new ones (which he loves), we are trying to build old ones. The loose lego came with a lot of instruction booklets, so there should be some sets in there.

My sorting strategy has worked like this:

  • Separate flat pieces
  • Separate standard bricks
  • Separate bigger and smaller than 2 studs
  • Separate technic

This worked well to start with, and got the lego into a more findable piece state, but I found when building sets, I needed even more granularity of piece.

So my strategy then became:

  • Pay close attention to small bits around 2 studs and smaller, and put them in separate pots based on shape.
  • Separate flat pieces into rectangular and "other" shapes
  • Separate bulky pieces out (e.g aircraft hulls or car chassis)
  • Separate wheels
  • Separate windows and doors
  • Separate minifigs
  • Separate "shrubbery"
  • Separate "slopes"

This then worked well, but I found it could still take me a while to find specific colours of items in a 5Litre tub, so I have now added colour sorting to the bigger collections of bricks.

Oddly I have found this very therapeutic, and massively time consuming.
Now I sort while I am finding pieces for a set, so if I am looking for a purple 2x4, I will pull ALL the purple flat pieces out and bag them up.

I now have a reasonably well sorted set of lego, in containers and resealable bags, and I can track down a specific piece quite quickly (If I have it).

One big tip that has come out of it all too, is to have good lighting when sorting and finding pieces. An LED anglepoise lamp has made things so much easier.

I'll post some pictures of our models up soon.


Remote Meetings

Most software engineers in the UK have been having virtual meetings for over a year now, due to Lockdown, and new colleagues working remotely.

Some of my thoughts on how to keep these effective.

  • Cameras on
    This one is controversial for lots of good reasons, but I find it a lot easier to connect to others and understand tone if I can see someone on Camera. I work in my bedroom, so I have made adjustments to my camera feed to make me more comfortable showing that to my colleagues.
    I never will or have pushed anyone to have Camera on, but I do find it a better experience.

  • Have some time for "connecting"
    Due to not being in the office with your colleagues, you need to take time to connect with everyone. Therefore the start of a meeting should feel like a time to do that connecting. Ask how people are, how their kids are, what their dog is up to in the background. Mention the cool Lego set etc. These help you connect, and stay connected.

I think the normal things for a meeting also apply, such as Agendas and having the right people, and someone keeping it on track.


Remote Pair Programming

In order to get up to speed at my new job, I have suggested I pair program with some of the developers.
This will help me get up to speed, help me learn the systems, the processes, the people and the tools.

It seemed like the existing engineers were hesitant, but that was quickly overcome. I believe the hesitancy was down to not really liking the "video on" meetings. Although most are now on board with that - I'll write up my thoughts on this in another post.

So with them onboard, I have spent this last week pair programming.

Myself and one of the other newbies were given a task to create the capability to store large files in Azure Blob Storage.
Neither of us had used this functionality of Azure before, so we decided to pair up to tackle it.

We both read a little around the subject, sharing links over Chat about the API and the capability, and then we got together using MS Teams video calls.

We wanted to write code together in a Pair Programming way, while also taking a Test Driven Development (TDD) approach to the task.

We started a Teams video call, both booted Visual Studio, and then tried the "Live Share" option.
This at first did not work; we were dealing with 1 solution with over 300 projects in it, and VS was struggling.
Some of it was attributable to later discovered hardware issues.

Once we also figured out that being signed into VS gave you more capabilities with editing the code together, the point of LiveShare became apparent.
Unfortunately, Visual Studio's IntelliSense was not working over LiveShare, which is apparently a bug that is being worked on. The LiveShare "guest" was at a huge disadvantage due to this, so we found doing a Screen Share as well as the LiveShare helped. The feedback for the "guest" was much better.

We managed to write tests together, then build the code to meet those requirements.
We problem solved together, we learnt about each other, our coding habits, the way we like to approach issues, design solutions and code.
We helped each other to improve our way of writing tests, how to use VS, how to build code to be tested, and improved our use of TDD.

We found a regular code commit and swap of the main VS user/guest helped, as the lack of Intellisense was frustrating.

We also found that VS Extensions like Fine Code Coverage and SonarLint were helpful.

We also kept our Jira task up to date with the "TODOs" for the development work. 2 people generated a lot of thoughts, and working through them all at the same time meant we kept getting side tracked. Writing them down to come back to kept us on track, and allowed us to focus.

Once the bulk of the coding was over, we split off into separate VS instances again, and tidied up different parts of the system, adding identified missing tests and documentation to the legacy code.

Then we came back together for testing.
The system we had built was a plugin to a much larger system, and up to now had only been executed through unit and integration tests. We needed to test it in the system.
However neither of us had ever run the system or used it before.

Pair programming again helped here. We both had learnt different things about the system in the previous week working apart, therefore when we started to run the system for testing, between the 2 of us, we figured our way through it.

We still have testing to go, due to some blocking system config issues, but I have found the Pair programming of the last week to be very beneficial for me, and I hope the other engineer too.


Atlassian Products

This is not an advert for this company or the tools, just my thoughts on them.

Today with my new job I have to use Atlassian tools.

Specifically Jira, Confluence, BitBucket & Bamboo.

Jira is a project management tool specifically aimed at "Agile" systems.

Confluence is a team workspace for project docs and team collaboration. Think of it as a "Super Wiki" with collaboration features that is versioned.

BitBucket is a GIT based version control system. It has CI etc built in now too, but we are not using that.

Bamboo is a self hosted "Build & Artifacts" server.

I have never used these before, but my experience with other tools like Trac, Redmine, GitLab and Jenkins means that these tools feel familiar to me.

I like the way Confluence seamlessly handles versioning of documents.
Jira does not feel very user friendly. I struggle to find the features I want.
BitBucket works fine, although the inability to build Windows based apps in their CI pipelines is a big problem.
Bamboo sucks. It's hard to navigate, and is incredibly limiting without paying a fortune. Jenkins does a far better job for free!

On top of those 4 products, I am also using SourceTree for a GIT UI, which I like more and more.


First Day At New Job Fail

My latest job is fully remote. That means no access to an office in person.
Therefore in order to start working at a new position, some basic things are needed:

  • PC
  • Multiple monitors
  • Headset
  • Webcam
  • Notebook & Stationery
  • Chair
  • Desk
  • Software Logins
  • Software licenses

Day 1 - No Hardware, no software logins. Nothing - Ooops!

My manager is in a timezone 5 hours away. What can I do?

So how would you handle this?

Well for me, I found out some of the tools that were in use that I had not used before, and was able to do some training on these tools.

For me it was Jira, Confluence, BitBucket & LiteDb

I used YouTube and the products' own documentation.

Once I got access to the internal Office 365 system, I was also able to read and watch some recent internal marketing content, which gave context for the work to be done.

I also found their YouTube and other social channels and caught up on any public marketing!

Basically what I am saying is, I made myself useful, and got to know the company through whatever means possible.

Everything worked out quickly enough, but don't be afraid to get stuck in on your first day even if you don't have the things you need. Show your worth and professionalism by being proactive!


Different Staging Environments In Azure Static Web Apps

I have been trying to get the initial load of my blog posts much faster, and during that work, it was annoying me that in order to test certain things like configuration, I had to push my change to production.
Normally I don't mind a push to production, but as I was experimenting I kept breaking things (the change I was making was an Azure config change, which is not reflected locally).

This is not ideal.

While reading some more Azure documentation, I came across a reference to a "staging environment", which is also mentioned in the Free tier of the Azure Static Web Apps plans (now it is GA), and it states there are 3 available!

This finally registered with me, and I went looking for some information on the different staging environments for an Azure Static Web App.

I have had a look around the MS docs, and cannot find much where this is talked about, but I came across this article detailing how they work with Github Actions.

This is amazing!

For up to 3 branches, a different staging environment will be automatically created for you to test!

Who knew!

I am not going to recreate the steps here, other than to say: create a branch, create a PR, commit stuff, and the PR will get updated by the GitHub Action with the URL of the staging environment.

Go read the docs

Having now used it, it is seamless within GitHub and means I will no longer break the blog when trialling new ideas!

Awesome!

Benny from Lego Movie saying Awesome


Speeding Up The Initial Load

My site is a static WASM one, there is no server host involved.
Therefore in order to display the dynamic content of the site, all the "code" must be downloaded to the browser, and then run within the browser to generate the content.

This is a downside to Blazor WASM, and the clever peeps at Microsoft have been trying to improve this. The current plan they have is to pre-render the content. However this requires a Server host at some point to generate the dynamic content and store a static version of it to then display to the user.

But can we pre-render without a hosted site at all?

Reading these gave me an idea:

Problem

I want my users to be able to start reading the blog page much faster than they currently do. This is primarily aimed at direct links to posts. At the moment, they get a "Loading" screen for quite a while. This is a holding page while the Blazor JS kicks off and downloads dotnet.wasm and all the dotnet binaries needed to actually display anything.

So pre-rendering seems sensible, as it will display something to the user much faster. But without a host, this loading page is still an issue right?

So why not change the Loading page to be the first page of my website. It gives the user something to read for a few seconds while the Blazor code is downloading, and then it will switch to the Blog post.

This means replacing the Index.html with a version of my home page!

This is very akin to the old loading screen games when computer games used to take an age to load.

A discovery

In playing with the ideas above, I discovered something!

In a non-hosted Blazor WASM setup (like mine), I have found that having an HTML file of the same name in the wwwroot folder means it will get served to the user first.
If we ensure that the <script defer src="_framework/blazor.webassembly.js"></script> tag is in that file, then Blazor will kick in after the page is loaded, and then display the dynamic version when it has all the bits to do it.

e.g.
If we hit this Url

https://blog.markoliver.website/Setting-Up-A-Sitemap
and in the wwwroot folder have an HTML file called "Setting-Up-A-Sitemap.html" (note the file must have a .html extension and exactly the same name as the URI path!) then we see the static file first, before we see the dynamic one:

An animation of the pre rendering in action

So with this information, we have an idea...

Possible Solution

If we use Andrew Lock's idea to visit every page and save the rendered output to a static html file, and ensure that the blazor.webassembly.js script is on every page, then the pre-rendered output will be shown BEFORE Blazor has finished loading all the .NET dlls (about 20MB worth) in the background.

This requires the pre-rendered files to be saved into the wwwroot dir. The pre-rendered files MUST have the blazor.webassembly.js script included too, otherwise it will never switch to the dynamic view!

Design

I need to generate the pre-rendered views of the pages.
Andrew Lock does it by generating them using a hosted site.
But what about using bUnit to generate the content and inserting it into a templated HTML file? No hosted site needed at all!

First, I need to get a list of Urls to generate. Luckily this is held in my Index data store.
So I can read that list, and then using bUnit render the BlogPage.razor component and take the HTML out, and insert it into a templated HTML file with the blazor.webassembly.js script in it, and save the file in the correct place.

Simples right?

Let's find out...

Note - I have not changed my Blazor WASM site at all to accommodate this. All the tutorials on the web talk about removing index.html and things like that, but they are for a hosted site. Mine is not. All I am doing is adding html pages into the wwwroot folder!

Implementation

Set up the bUnit test context

This needs external JavaScript calls disabled (e.g. Twitter), as we don't care about the rendering from those. Note this does not affect the Blazor generation.

            Bunit.TestContext ctx = new Bunit.TestContext();
            ctx.JSInterop.Mode = JSRuntimeMode.Loose; //Ignore any JS calls
            
          

Then I needed to register the service dependencies that my BlogPage Blazor component uses.

            IBlogPostAquirer mockedblogPostAquirer = CreateMockForBlogPostAquirer();
            ctx.Services.AddScoped<HttpClient>();
            ctx.Services.AddScoped<BlogPostReader>();
            ctx.Services.AddScoped<IBlogPostIndexReader, BlogPostIndexReader>();
            ctx.Services.AddScoped<IBlogPostPopulater, BlogPostPopulater>();
            ctx.Services.AddScoped<IBlogPostAquirer>( s => { return mockedblogPostAquirer; } );
            ctx.Services.AddScoped<MarkDownFileUriBuilder>();
            ctx.Services.AddScoped<MarkOliverBlog.Searching.Searcher>();
            ctx.Services.AddScoped<IWebAssemblyHostEnvironment>( s => { return Mock.Create<IWebAssemblyHostEnvironment>(); } );
            
          

I needed to mock out IBlogPostAquirer so that, instead of making an HttpClient call to an external URL for a MarkDown file, it can just read the local one in the source directory:

            var blogPostAquirer = Mock.Create<IBlogPostAquirer>();
            Mock.Arrange( () => blogPostAquirer.GetPost( Arg.AnyString ) )
                .Returns( ( string fileName ) =>
                {
                    return Task.FromResult( File.ReadAllText( $"{basePath}/Posts/{fileName.Replace( " ", "" )}.md" ) );
                } );
            return blogPostAquirer;
            
          

All other dependencies are the real ones, as I want the generation to be as close to reality as possible.

Call bUnit

Tell bUnit to render the BlogPage component:

            Bunit.IRenderedComponent<BlogPage> systemUnderTest =
                ctx.RenderComponent<BlogPage>( parameters => parameters.Add( p => p.Title, blogName ) );
            systemUnderTest.WaitForAssertion( () => systemUnderTest.Find( ".tagHeader" ), TimeSpan.FromSeconds( 10 ) );
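            // (Not in the original snippet: the rendered HTML then needs pulling out of bUnit
            //  before the next step. As far as I can tell, this is just the Markup property
            //  on the rendered component.)
            string generatedContent = systemUnderTest.Markup;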
            
          

Create the static file

I read the current Index.html file, as it contains the majority of the layout we need and it forms the basis of all Blazor generated pages.
I then swap out the app div for the generated content from bUnit. This uses AngleSharp which was included as part of bUnit anyway.
I also remove the #blazor-error-ui div as it's not needed in the static data.

            string currentTemplate = File.ReadAllText( basePath + "index.html" );
            var config = Configuration.Default;
            var context = BrowsingContext.New( config );
            var parser = context.GetService<IHtmlParser>();

            var document = parser.ParseDocument( currentTemplate );
            var body = document.QuerySelector( "body" );
            body.RemoveChild( document.QuerySelector( "#blazor-error-ui" ) );

            var app = document.QuerySelector( "#app" );
            app.InnerHtml = generatedContent;
            
          

Finally I just write the file out to disk in the wwwroot directory.
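
For completeness, that last step looks something like this (a rough sketch rather than my exact code - the serialisation call, the file naming and the wwwrootPath variable are illustrative):

            // Serialise the modified AngleSharp document back to an HTML string.
            string renderedHtml = document.DocumentElement.OuterHtml;

            // The file name must match the URI path exactly, e.g. "Setting-Up-A-Sitemap.html".
            string fileName = blogName.Replace( " ", "-" ) + ".html";
            File.WriteAllText( Path.Combine( wwwrootPath, fileName ), renderedHtml );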

Next steps

  • I need to pre-render other pages, which is the same process but with a different set of components.
  • Inline all the CSS that is needed in the pre-rendered page, so no unnecessary external files are downloaded: This looks good: https://github.com/milkshakesoftware/PreMailer.Net
  • Defer the javascript code to run as late as possible, ideally after render, so we can get the HTML shown to the user.
  • Try to make the change from Static page to Dynamic page as seamless as possible for the user.
  • Improve the look of the pre-rendered page.

Conclusion

This works great in the sense that the page is shown as close to immediate as we can get (so far). However on slow connections, it still takes time to download the dotnet binaries. We are relying on the user to be reading for the time it takes to download those files in the background.
Hopefully they will be downloaded before the next page is requested, but at least the user has gotten the initial content as soon as possible.
The only way we can reduce that is by ensuring the smallest amount of content and external files are needed, which is an ideal for every website!

This approach will work for all future posts. It runs as a Unit Test which means that the static content will also get auto deployed via my GitHub Action to the Azure static web app.

I love this approach, I only had to write a few lines of code, and not change my Blog at all. It uses everything I already have, and will work without change as my Blog grows.

The best part of all is that the initial time to see content is so much smaller (it's hard to measure accurately, but it's about 3 seconds compared with 21 seconds when running the "Fast 3G" throttling option in Chrome in a new Incognito window directly to a blog post).

So that is about 7 times faster for new site visitors!

Also, according to Google Chrome's Lighthouse dev tool, the Time to Interactive is down from 4.2s to 1.9s.

Let's see it in action then with a side by side comparison:
An animation of the pre rendering in action as a side by side comparison

Matt Smith as Doctor Who Saying Who Da Man


Setting Up A Sitemap

Google works best for your website if you give it a sitemap.
This is a full list of the URIs of all the pages on your site.

There are several ways to do this, and Google Help will show you how.

So I need one for this site.

However this site is "static" in a sense, as it only becomes dynamic after it's viewed in a browser.
Therefore I need to generate a static sitemap for Google to index.

That way Google will just read my sitemap file instead of trying to download the site, then render it to find the links.

So the easiest thing to do is generate a static sitemap file every time the site is pushed to Azure.

This is easily done through a script reading all the MarkDown files of the posts, and injecting the 5 or 6 static pages (like "about me") too.

That way whenever I commit a new post, the sitemap will get updated automatically and Google will find my new page easily.

So how to generate this static sitemap?

Generation

All my posts are MarkDown files, and those file names dictate the layout of the URI for the page, e.g. "SettingUpASitemap.md" becomes "blog.markoliver.website/Setting-Up-A-Sitemap".
Therefore I can run a script reading these files and write out the text file.
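
As a hedged illustration of that mapping (this is not the script I actually use - that reads the titles from Index.md, as shown below - just the idea of the transformation):

            // Illustrative only: turn a post file name into its URI by putting a
            // hyphen before every capital letter except the first.
            string post = "SettingUpASitemap";
            string slug = System.Text.RegularExpressions.Regex.Replace( post, "(?<!^)([A-Z])", "-$1" );
            string uri = "https://blog.markoliver.website/" + slug;
            // uri is now "https://blog.markoliver.website/Setting-Up-A-Sitemap"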

But when should it be generated?

Because the script is so small:

            # This will generate the sitemap for my website
            baseUrl="https://blog.markoliver.website/"
            echo "$baseUrl""About-Me">>SiteMap.txt
            while IFS='|' read date title categories; do
                if [[ $date != "<"* ]]
                then
                    echo "$baseUrl${title// /$'-'}">>SiteMap.txt
                fi
            done < Index.md
            
          

I was thinking I can do this in my GitHub Action. I think I need to add this:

            
            - name: Generate Sitemap
              run: |
                # make file runnable, might not be necessary
                chmod +x "${GITHUB_WORKSPACE}/GenerateSiteMap.sh"

                # run script
                "./GenerateSiteMap.sh"
                cat SiteMap.txt
            
          

Then it should get auto deployed on push to Azure.

I will then be able to see it here

It works!

Success Meme

Final touch

The last thing to do is tell Google about it in the Google Search Console Sitemaps page, and then wait for Google to index it.


Installing A Garden Office _ Part 7

See the full series of posts, to find out how we got here.

Building the floor

Now I have the base dug out and the block pillars sited (note they are not secured yet), it's time to build the base frame.
This frame will also be the floor of the office!

I have decided on a 3m wide building that is 3.6m deep.
These dimensions have been dictated by the trees surrounding the space, and also by the standard lengths of materials; working to standard lengths means you don't have to cut as much or get a lot of wastage.

Most timber comes in multiples of 1.2m (1220mm), so 1.2, 2.4, 3.6, 4.8.
This is true for sheet materials too; they are often 1.2m (1220mm) x 2.4m (2440mm).

So 3.6m depth fits the standard lengths of timber.
The 3m width is because I want as wide as I can for the room (I don't want it to feel small inside). The trees stop it being any wider, but the next timber size down is 2.4m which just makes the room feel too small.

So the base will be made of 47x125mm treated C24 timber (see previous post for more info).
It will be in 2 pieces (possibly 3): a 1.2m x 3m front part, and a 2.4m x 3m rear section.
It will have joists at 400mm centres.
The joists will be installed using joist hangers, and nailed into place using Sherardised Square Twist Nails.
The basic frame will be nailed together with 100mm Round Galvanised Nails.

Between the joists will be Celotex General Application Board GA4100 insulation of 100mm depth.

Then all the joists will be covered in Aluminium Foil Tape. This is to stop water from passing through the floor. It also stops a lot of heat loss, by reflecting it back into the room.

Costing

All of this totals around £700 for the floor, and about £100 for the foundations.
If you are interested in the specifics I purchased, then contact me, and I can share my spreadsheet of costs.

What's next?

I have purchased all the bits, and have the theory, now I need to put it all together. Wish me luck!


Installing A Garden Office _ Part 6

See the full series of posts, to find out how we got here.

Research

Now I know where the office will sit, and how it will affect the garden, neighbours and fauna, it's time to figure out how to get it erected.

The first decision to make is whether to self build, buy a prefab, or "get a tradesperson in".

Due to the size I require, most prefabs won't fit or will be too narrow. They also don't give the lighting I want without significant cost increases. Something about 3m x 4m is going to cost in excess of £7k for just a shell. To kit it out with electrics and decent finishes inside and out, it's several thousand more. Also most prefabs won't have the insulation levels I require, and I need to work in this office every day of the year.

I decided then to build it myself. It is a real challenge for my adequate DIY skills, and I love that I can make it bespoke for me.
Getting someone in to build it for me is always an option still, but for now, is a last resort!

First up is the choice of base!

My initial thought is that I don't want a concrete base. I have limited access to the garden (I can only get a thin wheelbarrow through the gap, and only at the right angle).
This means it's either difficult to get all the sand and cement into the garden, or it needs to be pumped in. My office location is about 200ft away from the road, so cement pumping gets expensive with all the extra hosing required.

This led me to 2 other options:

  • Ground Screws
  • Pillars

I like these options as they are low impact on the environment they are placed in.

Ground screws

These are, as the name suggests, a giant screw (somewhere between 500mm and 1m long) which, using a tool, you screw into the ground.
For a building of 3m x 4m you would need 12 of these.
They are a great idea, but as I found out, they get expensive at that number. I was also concerned about the weight of an office compared with a shed, which is what they seem to be aimed at.

Pillars

These are great; there are lots of options: concrete pads that just sit on the ground, with an adjustable height.
A great idea, but unfortunately, again, cost got in the way.

So after talking with my Very Wise Mum who has experience in the building trade, I decided on building block piers.
These are just 2 concrete blocks sat on top of each other, cemented into position and made level.
These cost less than £2 each, so 18 of them (I need 9 piers), plus some sand, cement, ballast and hardcore, cost less than £100. A significant saving!

9 Pillars you say?

But how do you know you only need 9 pillars?
In the building trade, there is something magical called a "Span Table" which is some mathematical witchcraft that tells you how far apart supports can be between spans of timber of certain dimensions to accommodate certain loads.

These magical tables tell me that with 47x125mm (2x5") C24 graded timber, I can have a maximum gap of 2.82 metres between piers.
I have a 3m x 3.6m building, so both the 3.6m length and the 3m width need a support in the middle as well as at each end, giving a 3 x 3 grid of piers - hence 9 pillars!

Why did you not just use 2x8" timber to give a 4.55m span instead? Simple - the answer is height! I need to keep the building below 2.5m in height, so those extra 3" make a big difference to the internal height. Who wants a low ceiling!

Oh right, smarty pants - then why did you not use 2x4" instead? That would work up to a 2.24m span. Good spot - I could have, but I wanted a bit more rigidity in the floor, and 100mm insulation, so to keep the insulation off the blocks, it needs to be higher.

What is C16 vs C24 timber?

This basically gives you a level of strength. The higher the number, the better, and therefore stronger, the wood.

Take a look here for a full list https://web.archive.org/web/20180203020533/http://www.wickes.co.uk/helpadvice/buying-guides/timber-buying-guide

Researching all of this was great, and here are a few sites that really helped me:

What's next?

Building the floor!


Installing A Garden Office _ Part 5

See the full series of posts, to find out how we got here.

When will the digging stop?

I continue to dig; it's like The Great Escape, there is always more to dig. However I don't have a cool name for my hole, nor do I have anyone else to help.

I'm very close to the end now, so I thought I would crunch some numbers to stop me having to do more digging.

I reckon it's about 30 wheelbarrows of soil for every 4m^2 of area, dug about 7.5cm down.

So the 4m x 3m hole I need equates to this:

3.5 layers at the back third = 105 barrows
3 layers at the middle third = 90 barrows
2 layers at the front third = 60 barrows.
Plus 2 barrows for each block pier hole (foundation); there are 9 of them.

Total = 264 wheelbarrows full of soil.

Soil is heavy, so I am guessing about 45+ kilos per barrow full (based on comparing it with the weight of 4 concrete blocks in the same barrow).

264 barrows * 45kg = 11880 kg or 26k pounds.

That is the weight of 2 Male Adult African Bush elephants!

No wonder my back hurts!

Anyway, here is my hole:
A large 3m x 4m hole
I will explain about the blocks in a future post.

What's next?

Time to do some research!


Lego Saturday

Today was a lego day!
It started at 6:30 this morning when I was woken up by the 5 year old. He wanted to play Lego Batman on the Wii.
Should I be worried that he loves playing the villains and beating up the cops??

Anyway, it's Saturday so why not? 3 hours later we have some breakfast. Oops

After a good break, he asks to break out the big pot of mixed lego I bought ages ago.
It came with about 20 instruction booklets, but we have no clue if any of the sets are in the big pot.

While Mummy and he built Star Wars machines and an army to fight (again, should I be worried?), I sorted some of it out.

Looks like we may have a couple of sets! Woohoo.
They are old ones from late 80s onwards, so should be good, and I think the first ever Lego set I had as a kid is in there.

I have decided that I will be investing in more Lego now. I love it, couldn't afford it as a kid, so why not spend some of my hard earned money on something I can enjoy with my son.

I want to buy the Lego Tron Legacy set or the Bonsai Tree next.


Installing A Garden Office _ Part 4

See the full series of posts, to find out how we got here.

"The roots have to go!"

My ethos with this build has always been to nestle it among the trees and therefore to ensure they stay healthy too.

It is intended to be nestled between 2 trees, a May tree (Red Hawthorn) and what we affectionately call a "weed tree". It grows no matter what you do to it. I have yet to identify it, but it grows fast.

Because I don't have an affinity for it, I won't be sorry if it dies off. The only loss will be some privacy on the rear boundary.

Sometimes you set out with good intentions but common sense gets in the way. I hope the tree survives, as I cut a root that is 6 inches in diameter and a lot of its ancillary roots.

Large Tree Root

I hope there are more roots in the other direction but I don't know.

But Why?

It's for a good reason though: I need to maintain a good airflow under the building to prevent water penetration and moisture build up.

This means the building will sit better between the trees due to the reduced height, and will survive the extra moisture of being amongst them.

For those people who hate me for making this decision, please know that my garden has many trees and flowers and bushes and several wild flower patches in it that I maintain. After all the tree has not come out, and I hope it will survive the loss of this limb.


Installing A Garden Office _ Part 3

See the full series of posts, to find out how we got here.

Digging!

I have spent the day digging again!

I am trying to maintain the trees that my office will be sitting between.
Therefore I need to dig out the ground without hurting the trees.

My intention is to have the office sat above the ground, and that will be achieved by building some block piers that allow the floor to sit 150mm above the ground.

I'm also building within permitted development, so I am restricted to 2.5 metres high to avoid needing planning permission. I am also building just over 1m from my boundary.

Because of all these decisions I am digging out the soil, but leaving the roots. This is a pain to do, but will allow the airflow under the building I need as well as maintaining the health of the trees.

It's fiddly because of a big lump of roots. So I'm pretending to be an archaeologist with a brush and trowel. It's slow work, but all foundation work is important and slow when building a structure you want to last.

Being in a high moisture area with the trees, it's vital I have this air gap.
So just call me Indiana Jones for now!
(Or even better, Dr Alan Grant from Jurassic Park, as the roots look like a dinosaur's skeleton!)

Is It a Skeleton?

Anyway, still more digging to go, but it's coming along nicely.
I have moved about 20 wheelbarrows full of top soil into a big mound, which I will later use to fill my front garden better. So it's been hard work (I'm not as fit as I used to be).

All in all I have taken off 20cm of top soil at the back of the area, which shows how much of a slope there is in the garden.

What's next I hear you say?

Well I need to finish digging out, then I need to build the block piers. More on that another day!


Getting Into Blogging

Blogging has never been something I have found easy; the thought of writing every day seems overwhelming to me. But that all changed when I joined Tech Twitter. It's such an interesting place and it has really inspired me. Listening to and reading Scott Hanselman's blog has also helped. He says that you don't need to blog every day, just do it consistently. If you blog once a week, in a year you will have 52 posts, and that is a lot.

Also, at my last position I wish I HAD documented my journey more. I spent the first 6 months of my research recording things every day, but when my team grew and we started solving challenges, I felt I had no additional time to write in the same way.

But now, I am about to embark on a new role, and this time I WANT to record more than before and generate blog posts where I can. I hope at least once a week I can write something complete and useful, but I am also going to post a lot of unfinished thoughts too.
This blog is just as much for me, as it is for you dear reader.

So onwards and upwards with the posts, and I hope the features I have added to the site help you navigate them better. I'm always open to ideas to improve the way the site is used, so don't be afraid to tweet me with an idea.

I would also be happy to hear about any ideas on blog posts or even how you approach blogging.


Installing A Garden Office _ Part 2

See the full series of posts, to find out how we got here.

I have been clearing the area of debris and plants using my new favourite digging tool. It's called a Mattock and it breaks through roots, and hard soil and plants with ease. Basically an axe for the ground. Get one if you are doing any significant digging!

Once the space was cleared, I had the chance to Mock up the space. This meant putting up fake walls and assessing heights.
For walls I used an old window and pallet I had lying round.
These helped me imagine the space and see the sizes I had to work with.
Mocked up office area

This allowed me to site the desk, and therefore design the window positions I want too.

Other things I realised when the mockup was in place:

  • Impact on the surrounding trees with the height
  • How much (or not) the neighbours would be able to see into the office, and see the office itself.
  • How sloped the ground was.

Resolving the ground slope

The ground slope (about 15cm difference front to back) is a major issue when erecting a building. It's an office, so it needs a stable floor. That means it also needs to be Flat.

I believe I have 2 options here:

  1. Add something at the front to bring it to the level of the back
  2. Reduce the ground at the back to level it with the front.

I am building this office under permitted development here in the UK. Which means I am restricted on how tall the building can be and how close to the boundary it is too. There are lots of rules, so check you are meeting them before deciding on your own.

The biggest restriction I have is the 2.5m height restriction. It's 2.5m measured from the "adjacent ground" to the tallest part of the building.
So for a sloped site, this is advantageous if you dig the building into the ground, to level it to the lowest slope.

Therefore I have decided to dig a great big hole. Essentially burying the back of the office into the ground. This means the floor of the office will be level at the lowest part of the slope. Handily this gives me about another 15cm to play with in height of the building under the permitted development, but most importantly of all, reduces the impact on the tree canopy.

What is next?

Next is to level the ground of the site to help with the height restriction and reduce the slope.

Also I need to do a lot of Research on building foundations for this building.

Useful links on permitted development restrictions:


Adding Category Pages

Now we have tags on posts, it is easy to see all related posts, but there is no way to access a list of the known categories/tags.

So let's add one.

What do we need?

A new page "Category List" needs to be created.
This page will query the blogpostpopulater for the list of categories and post names read from the "index".

Then we will use the built in Blazor Virtualize component to work through the returned data, and add the TagsDisplay component we built previously to show the tags.

I had to transform the data from a list of BlogInfos to a Dictionary of Lists of BlogInfos keyed on categories.
This is probably easy to do in Linq, but my Linq-fu is not strong in that area, so a nested set of for loops to populate the dictionary is fine (please let me know if you can do it in Linq!):

            Dictionary<string, List<BlogInfo>> result = new Dictionary<string, List<BlogInfo>>();

            foreach (var post in allPosts.Where( t => t.Tags != null ))
            {
                foreach (var tag in post.Tags)
                {
                    if (result.ContainsKey( tag ))
                    {
                        result[tag].Add( post );
                    }
                    else
                    {
                        result[tag] = new List<BlogInfo> { post };
                    }
                }
            }
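
            // For comparison, one way the same grouping could be written in Linq
            // (a rough sketch - not in my actual code, and untested against the real data):
            Dictionary<string, List<BlogInfo>> resultViaLinq = allPosts
                .Where( p => p.Tags != null )
                .SelectMany( p => p.Tags.Select( tag => ( Tag: tag, Post: p ) ) )
                .GroupBy( x => x.Tag )
                .ToDictionary( g => g.Key, g => g.Select( x => x.Post ).ToList() );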
            
          

So once that's all available, I just need to put it all together with a bit of styling, using the existing components for displaying titles and links, and we get this "page", which I quite like.

Thinking about it, I also need to add it to the NavBar, so there is a nice new icon here.

I'm thinking of dropping the latest 10 blog posts from the nav bar too; it's getting a bit crowded. Perhaps just the last 3.

One last thing: I need to sort the categories alphabetically. That helps the reader scan them more quickly - there is no other reason than that, as they are all unrelated.


Job Search 2021 _ The End

After 5 weeks of looking for a new Senior .NET role, today I accepted an offer!

It's for a great company, with an exciting set of products, a great vision, a fantastic team to work with and, most importantly of all, a great work life balance for me.

In order to get to this point, I have found it exhausting, interesting, fun, soul destroying and very time consuming.

Here is a quick list of highlights that have occurred:

  • Talked with an infinite number of recruitment agents about an infinite number of positions. Note - I turned a lot of options down at this point due to not fitting my requirements.
  • More than 200 emails.
  • Applied for 20 jobs via LinkedIn.
  • Applied for 2 via reed.co.uk.
  • Talked to 5 different companies through my friends.
  • Talked to 3 different companies because of #TechTwitter.
  • Head hunted by 2 companies who saw my blog or met me via virtual meetups.
  • Interviewed in Australia, England, Scotland, Poland, Estonia, Nigeria & Canada.
  • Interviewed with 18 companies.
  • Attended 24 interviews.
  • Talked with 32 interviewers (CTO's, HR heads, business owners & software engineers).
  • 20 interviews via MS Teams.
  • 2 interviews via Google Meet.
  • 2 interviews via Zoom.
  • 1 interview via telephone.
  • 3 at home programming tasks (code available on my github).
  • 4 online tests.
  • 1 live programming session.
  • 1 interview I terminated 10 minutes in because it did not fit my requirements.
  • 1 role disappeared after interviewing.
  • 2 roles changed the remote requirement after interviewing, so I declined before the formal offer was given.
  • 5 weeks,
  • 4 job offers.
  • 3 declined offers.
  • 1 accepted offer!

A lot has happened in a short time, and it's nice to be able to get some of these thoughts out of my head and reflect on the process.
It's certainly the most intensive job search I have ever had.
Not working while looking made it much easier, and remote only interviews have made things faster.
In the 8 years since I last looked, the industry has grown a lot, and there are opportunities everywhere. I would also add that deciding to go remote first has allowed me to look across the UK rather than just my home town, which opened even more doors.

Things I have learnt

Getting your profile on reed.co.uk makes your phone explode with recruitment agencies. Although it nearly made my phone melt, it did prove to be the best way to find a lot of companies. Surprisingly there were very few duplicate roles.

Your "network" really works: the notification of the existence of the role I have accepted came from a post on Twitter. This was really helped by an initial retweet from MVP Dan Clarke, which was then retweeted another 23 times, with total engagements of 625 and nearly 10k people seeing it. Wow.

Being comfortable on a video call is so very important.
Have good lighting on your cam, and check your microphone levels.
Personally I found the auto blur backgrounds very distracting as people moved around, so I don't have one.
Make yourself memorable on camera too, several people commented on my camera tag line, and many looked at my blog during the interview or soon after.
Don't settle for a "good enough" role, keep looking.
This requires you "Know your worth".
Don't be afraid to disable your "looking for work" status on the platforms you use. You need time to decompress from all the calls and interviews. Also when you turn it back on, you ping back up on everyone's radar!
If you are getting inundated with calls, set your salary expectations higher!

Stick to your requirements.
Be flexible on industry; at the end of the day, it's all code.
Know what tech stack you want to work in, but be flexible. For me, I knew it had to be C# .Net Core and some Cloud provider. The rest was open.
Be sure about your motivations. Are you salary driven, or is the team fit most important?
You can negotiate on holiday allowance, pension contributions and working hours. It's not all about salary.

Most importantly of all - Think. Think about the role, the team, the day to day work, how you will progress, how the role fits around your life. Always take time to think before accepting. And always see a contract before you formally accept.

So that's it. I don't have a start date yet, but it will be in the next few weeks.

If you want any job hunting advice or help with interviewing, then find me here.

Good luck!


Job Search 2021 _ Working With Recruiters

Recruitment agents are generally the best way to find positions here in the UK.
They work on commission which is normally a percentage of the salary of the person they introduce to the company who gets hired.
So it's in their best interests to get you the best salary.

For those of us looking for a role, it's a free service that you should use.

The agents have a view of the industry that is unique, and often have access to opportunities that are not always advertised.
Having said this, your network is vital too, but that is another post.

Recruitment agents can also be a curse too!

Some agencies trawl the web looking for job ads, and then entice candidates towards those roles. They then effectively cold call the company, trying to get you into a role that the company has asked them not to recruit for.

This is frustrating for the company as well as the candidate, as in my experience it rarely works.
Unfortunately this limits your ability as the candidate to approach the company separately, as often you have agreed that the agency is your proxy. Check the terms of the agency here, as it's a grey area in my view, especially when the agreement was only made via a phone call.

How do I deal with them?

I have been inundated with agent calls, so set time aside for them as you never know which call will lead you to your dream role, even if you are repeating yourself for the 30th time today!

Also know what your requirements are BEFORE talking with them. The more you can narrow your search, the better the roles you will be told about. So if you want a 35hr week, tell them. If you don't want to work overtime, tell them!
You cannot be "too picky" at this stage. It's better to oversubscribe than undersubscribe, as this will allow you to focus on the roles that you would really consider. Be honest with yourself!

All in all, the recruitment agents are a great resource you as a candidate should use, but they take a lot of work to handle.

Also, don't limit yourself to one agency! The more the better, as they don't all have access to the same jobs, and some work exclusively with certain companies. Just don't share roles between different agencies, as this leads to the cold calling problem above. Also, you don't want your application put forward to a company by 2 different agents, as it sends the wrong signals to the company.


Job Search 2021 _ A Live Programming Test

It's a new day, so it's time for another interview, and this time it's a live programming test!

I thought it was going to be set up like a "pair programming" session, but it turned out to be more of a "watch while you code" exercise, talking about your decisions out loud with the occasional question thrown in.
Not particularly "real life"!

I thought I was going to be OK with this, but I struggled.
The challenge was to build a Yahtzee game scorer.

During my career, I have architected, designed, analysed and built major systems for multi million pound companies, but the simple logic of taking a string apart and working out if 2 of the numbers are the same somehow completely baffled me when under this interview condition.

I think I sometimes underrate the need to keep these skills flexed. Seeing the big picture of complex systems and being able to build them does not protect you from sometimes fumbling simple logic.

The problem I had was "overthinking" the issue. What I should have done was just code it out with simple if statements. Instead, I got flustered at my ineptitude and lost all my confidence.

In a real life situation I would have gotten a cup of coffee, spoken to a team mate, or even just looked up a solution.

However, it was an interview, so copying a Stack Overflow answer was not an option, even though we all do it without thinking too much about it.

Lesson learned

The thing I have learnt from this, quite frankly, slightly humiliating experience is to practise these kinds of tests in advance, just to get the brain juices flowing!

Oh well you live and learn, and you will never ace every interview.

In case it was not clear, I did not progress to the next interview round. Ho hum.


Installing A Garden Office _ Part 1

See the full series of posts, to find out how we got here.

For the foreseeable future, I have decided to work from home. We live in a 2 bed bungalow with 3 people, which means there is no dedicated space to work in. Right now I am working in my "office" in the bedroom.
This is not sustainable.

We are fortunate enough to have a big back garden, and I have always talked about having a "Man Cave" or "Workshop" at the back of it.
In the years we have lived here, I have never actually built that space, preferring to keep the garden full of plants.
However there is a space nestled amongst the trees at the rear that is very tranquil and I find myself drawn to that location to think over the past traumatic year.

So after discussing with "The Boss", I am going to build a Garden Office.

These are the steps I believe I need to go through:

  • Lots of thinking (more on that in a bit)
  • Find and measure a space (including the height)
  • Lots of research
  • More thinking
  • Clear the space
  • Create the foundations
  • Install a base
  • Source the "materials"
  • Build the office

I am on the fence as to buy vs build. I'm an adequate DIYer, but have never built anything this big. The biggest projects I have worked on are replacing kitchens and bathrooms.

My gut is telling me to build it, so it can be bespoke and the perfect size for the space available.

Inspiration

So first of all I've been getting inspiration for what is possible by watching some TV shows and reading articles (it's been a long lockdown!).

I am also a sucker for a long trip to B&Q (Pre-COVID), so I kind of know what materials are available and what I think I can work with.

Lots of thinking

I tend to take a lot of time planning projects in my mind (just ask "the boss" about the kitchen!), so I like to let ideas marinate, and then do research around them to see what makes sense, and what I believe I can achieve.

This is the kind of thinking I have been doing:

  • Mocking up the size in the garden
  • Sitting in the space and imagining the walls and views
  • What light will I get, and when?
  • How will the temperature be regulated?
  • How will the rain sound on the roof?
  • How visible will it be to the neighbours?
  • What access is available for maintenance?

I have been thinking about these things for a while now, so it's about time I got on and made it. This is Part 1 of this journey, so check back to see it progress.


Tagging Posts

As the number of posts on this site grows, I am finding it harder and harder to find related items. I have already added a Search, but sometimes I want to see all my "Job Search" posts together, and not rely on them having the exact words I search for.

So I'm implementing post "tags" or "categories".

What this means is that every post will have 0 or more tags associated with it. Then I can jump to a page containing all the posts for a tag together.

What does this look like?

I'm going to add the list of applicable "Tags" at the top and bottom of the post text on each post page. These will be clickable links to go to a page showing the synopses of each matching blog post.
The page will look like the Latest Blog Posts page, with reference to the tag.

How to build it?

Storing and reading the tags

I currently have a data store with the post title & creation date.
I'm going to add to that an additional field that is a comma separated list of tag names.

At present, this store is modified manually (A future feature request).

I will store the tags as part of the existing BlogInfo class as a List<string>.
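
As a rough sketch (only Tags is the new part; the other property names here are illustrative rather than the exact ones in my class):

            public class BlogInfo
            {
                public string Title { get; set; }
                public DateTime Created { get; set; }

                // New: tags parsed from the comma separated list in the index store.
                public List<string> Tags { get; set; }
            }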

Displaying the tags on the posts

I can render the BlogInfo.Tags in a loop of links.
This will be a new component that takes a list of strings and renders them as clickable links in a "button" style.

As we have Bootstrap installed already, let's use a small outline button with an anchor tag, e.g.

            <a class="btn btn-outline-info btn-sm" href="#" role="button">Link</a>
            
          

which looks like: Link

We will then place the new component on the BlogPage component above and below the rendered MarkDown of the post.

Displaying a page of the tagged posts

We need a new Blazor based Page.
This should accept a parameter in the Uri.
The parameter is the name of the tag.
Then, using this tag name, it will display all the blog posts that have it as a tag. This just requires a List.Contains over all the cached blog posts.
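
The filtering itself is tiny - something along these lines (a sketch; the method name and how the cached posts are passed in are assumptions):

            // Given the cached posts, return only those carrying the requested tag.
            public static List<BlogInfo> PostsForTag( IEnumerable<BlogInfo> allPosts, string tagName )
            {
                return allPosts
                    .Where( p => p.Tags != null && p.Tags.Contains( tagName ) )
                    .ToList();
            }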

Building it

I built this while watching a Dotnet South West meetup event; after planning it out above, it only took an hour to build.
It always takes more time to make it look good for me 😃

What's next?

We need a page to display all the Categories/Tags. I like the way Scott Hanselman shows his: https://www.hanselman.com/blog/category so that will be next.


Highlighting Search Terms

From my search page, you can search all the blog posts on this site.

As part of the results, I show a few characters either side of the found search term in the post.
It can also show multiple sections of a single post if multiple matches are found. e.g.

Search for "Blazor" and you see lots of extracts, here is the current top 1:

            Adding A Site Search ....markoliver.site 'search term'` e.g. https://www.google.com/search?q=site%3Ablog.markoliver.website+Blazor - Implement [Googles custom site search](https://support.google.com/programmable-search/answer/4513...  ...ink to jump to the posts, and highlight the search term on the result too. ### Solution So I need a Blazor page that has an input box and a button on it so you can enter the search term (for now its just a ...  ...er the search term (for now its just a simple pattern match), and a button to say go. I also need a Blazor component, that can render the results. In the back end, I need a searcher that will activate the s...  ...t this feature now, by clicking Search, or if you want to find all my posts that mention Blazor, go direct to the results here #### Update 15/10/2021 I forgot to remove case se...  ...), or if you want to find all my posts that mention Blazor, go direct to the results here #### Update 15/10/2021 I forgot to remove case sensitivity from the search, and also that the "en...
            
          

This is hard to read at a quick glance, so I need to add a highlight to the search term where it is found in the extract. This will make it more obvious we have multiple results in the post.

As I have a "DisplaySearchResult" Blazor component, this is easy to change.

First, we need to know what the original search term was, so I'll add a new Parameter to the search result component.

            
              [Parameter]
              public string originalSearchTerm { get; set; } = null;
            
          

Then in the display code, add a span around the found term in the text rendered to the screen. This requires the use of a MarkupString.
Give the span a CSS class of highlightSearchTerm and add

            ::deep .highlightSearchTerm {
                color: black;
                background-color: khaki;
                padding: 0.2em;
            }
            
          

to the .razor.css file, and we now get the search results with the search term highlighted.

Note that I am making the search term black, with a lovely khaki background colour, but most important of all is the padding. This really makes the text stand out.
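
For reference, the wrapping itself can be as simple as this (a hedged sketch - my actual component may differ, and the case-insensitive regex is an assumption):

            // Wrap every occurrence of the search term in a span so the CSS above can style it.
            string Highlight( string extract, string term )
            {
                if ( string.IsNullOrEmpty( term ) ) return extract;

                return System.Text.RegularExpressions.Regex.Replace(
                    extract,
                    System.Text.RegularExpressions.Regex.Escape( term ),
                    match => $"<span class=\"highlightSearchTerm\">{match.Value}</span>",
                    System.Text.RegularExpressions.RegexOptions.IgnoreCase );
            }

            // Rendered via a MarkupString so the span is not HTML-encoded:
            // @((MarkupString)Highlight(extract, originalSearchTerm))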

Go have a look...


Job Search 2021 _ Another Programming Test _ Feedback

I wrote about this programming test recently for a job interview.

Since it has been submitted, I have had some feedback, which I would like to follow up with.
Unfortunately, the company decided that they would not give me the opportunity to talk with them further about the role and programming experience, which I feel is a big loss for both of us. I always enjoy a conversation about code improvements, and it would have given them a view into my approach to the test. That all said, the offer was not available.

So I thought I would work through those issues here:

The remit was:

You have been asked to:

  • Refactor the code using clean coding principles to make the code maintainable
  • Add new features
  • Write tests to prevent functionality from breaking

When reviewing the test, we look for a solution which:

  • Is clean, easy to understand and maintainable
  • Shows an understanding of software craftsmanship
  • Doesn’t break the existing functionality

We recommend you spend 4 hours on your solution.

These are some of the comments I had in return:

  • Invalid customer input is not handled
  • If entered partial correct input then it adds a customer rather than showing UNKNOWN INPUT.
  • Few bugs in the code are not fixed including in the new features
  • Code is over-engineered in places in a way which makes it harder to follow
  • “ref” is used in a few places without any need to.
  • a lot of use of .GetType() == typeof rather than using is or .OfType.
  • Used Builder pattern instead of Factory pattern

So these break down into a few categories:

  • Fixing existing bugs
  • Not testing thoroughly the new feature code
  • Not using the style of syntax they prefer
  • Architectural concerns

Taking each in turn:

Fixing existing bugs

With any existing code, there are bugs. We all know this.
This existing code had no unit tests. The remit did not mention fixing the existing bugs; in fact, you could argue (badly) that doing so is breaking existing functionality! Either way, within the 4 hour recommended time, this was just unfeasible.

Not testing thoroughly the new feature code

You got me here: there are conditions I have not tested. These are the simple edge cases they mentioned, but with the code refactoring I have done, they are simple to add and test. 10 minutes' work!

Not using the style of syntax they prefer

Everyone has their own preferred syntax; sometimes the preference is for good reasons, such as performance, and other times it is because they find it more readable.
Readability is subjective for sure, but without knowing the coding standards of the org (other than the code supplied), this is a harsh comment to make with no discussion.

Architectural concerns

These are very valid concerns. If I had done this within a company, I would have talked these choices through with a colleague. However this was a test, and unless you count my 5 year old I had no one to discuss these ideas with.
Again not having the chance to discuss these concerns after the fact with the employer is frustrating.

What are your thoughts? Should I go back to them with an updated version??


Adding A Site Search

As the site gets more and more posts, it's important to have the capability to search on the site.

Now there are several ways to do this:

The first 2 options are great if your site's content does not change very much, as Google does all the heavy lifting for you, and you can just link to them.
However if you are adding or modifying content frequently, you will probably want to have the search "live" on your site.

The custom search engine is very important if your site has a lot of dynamic content, or is not easily indexed by Google, e.g. if you have a lot of dynamic products in a shop.

For this blog, I'm going to implement a custom search for 2 reasons:

  • The content is dynamic, and not always indexed
  • Its a great coding challenge!

First steps

We are only interested in searching the blog posts. If I want to add any of the static pages (like About Me), then I can hard-code some search results for them, as they rarely change.

All of the blog posts are cached within the app, so we should be able to search the contents very quickly.

The posts are kept in a list of objects, and so searching the content of the posts themselves is just a "grep" on the string. They are held in MarkDown so there is minimal formatting to get in the way.

We then need to render a list of posts, and a small snippet of the post that contains the matches found on the search results page.

I also want to have a link to jump to the posts, and highlight the search term on the result too.

Solution

So I need a Blazor page with an input box so you can enter the search term (for now it's just a simple pattern match), and a button to say go.

I also need a Blazor component, that can render the results.

In the back end, I need a searcher that will activate the search on all the posts.
I need a search result builder that will find a result in the post and build a set of matches (search term may occur more than once).

The Search page should also be able to take a search term in the uri, so I can link to search pages.

Those are all the bits I need: A page to ask for the term, and render the results, and something that can read all the posts pulling out matches.

Simple really 😃

In all honesty, it's not hard to do this; the biggest complexity is in finding all the matches and pulling out the "context" of the words around the matching search term.
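
To give a flavour of that complexity, the core of it is something like the following (a rough sketch only; the context size and method name are illustrative, not my exact implementation):

            // Find every occurrence of the term in a post and pull out some surrounding context.
            List<string> FindMatches( string postContent, string term, int contextChars = 100 )
            {
                var snippets = new List<string>();
                int index = postContent.IndexOf( term, StringComparison.OrdinalIgnoreCase );

                while ( index >= 0 )
                {
                    int start = Math.Max( 0, index - contextChars );
                    int end = Math.Min( postContent.Length, index + term.Length + contextChars );
                    snippets.Add( postContent.Substring( start, end - start ) );

                    index = postContent.IndexOf( term, end, StringComparison.OrdinalIgnoreCase );
                }

                return snippets;
            }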

You can try out this feature now, by clicking Search, or if you want to find all my posts that mention Blazor, go direct to the results here

Update 15/10/2021

I forgot to remove case sensitivity from the search, and also that the "enter" key should activate the search. These are now fixed!


Job Search 2021 _ Another Programming Test

I have already spoken about my issue with programming tests for interview, and again, in order to get an interview, I have completed one by treating it as a learning exercise.
This time it was to practise TDD on a legacy application.
Can I still write a comprehensive test suite for an untestable app, then refactor it bit by bit, making it more testable at each stage?

This interview test was to take an existing console app and make it better.

It was a simple coffee shop idea: you enter the customers who buy drinks throughout the day, and can then print a summary of the trading.

The code is messy and has a few bugs; it is also not following SOLID principles or clean code ideas.

The test was 3 fold:

  • Write tests
  • Tidy the code to make it more maintainable
  • Add some new features.

Oddly the code is delivered via zip file and that's how it is returned too. I guess this is to reduce the number of copies turning up on GitHub, which would make it easy for others to pass the test. So the first thing I did was put it in a private GitHub repo! 😃

Then I could start working on the problem set.

First was to read the code and run the app to see what it did.

After experimenting with the program and a quick run through of the code, I thought I should upgrade it to dotnet 5!
This was REALLY easy using the dotnet upgrade assistant; it did all the work for me.
I manually proved everything was still working after the upgrade.
Now that it is in dotnet 5 (it was in .NET Framework before), it's much easier to work with!

Then I wrote tests for the public API of the main logic class.
The app is just 2 main classes: 1 to interpret user input, and the other to run the coffee shop logic.
So focusing on the coffee shop API, I tested its two entry points:

  • Add a customer
  • Print out report

Because the code is all executed through these 2 places, the tests have to be very high level.

Once these were in place however I can do some more housekeeping.

  • Add github pipeline to build and test
  • Split big methods down into smaller ones, while maintaining the existing public API and not adding classes. Basically making it more readable.

The tests still pass woohoo.

Now let's look at that coffee shop class. It has 3 responsibilities.

  • Maintain list of customers and what drink is being sold
  • Calculate the statistics of the shop, e.g. the summary of the number of customers, drinks and operating costs.
  • Print a summary.

I decided to focus on the statistics first. Split that out to a separate class and test it separately.

Once this was done, it was time to tackle the new feature requests.

  • A new customer type
  • A change to billing rules for existing customer type
  • A new shop requirement

These were each tackled independently and tested. Refactoring the code as each feature was approached.

At this point I am done with 2 of the 3 requirements. It's tested, new features added, and it's also maintainable now. However the remit was to make it maintainable using clean code ideas, so now it's time to look at the code once again and apply those principles: SOLID, KISS, DRY, etc.

I spent a lot of time on this for the recruitment, primarily because it was enjoyable, but also I want the job!

However, in hindsight, I wonder whether I should have tackled it differently.
I always wanted to commit working code, so it meant having tests in place and nothing broken every time.
This in itself, though, made me focus on small things each time.
Perhaps if I had taken a big-bang approach I would have come up with something even "cleaner".

For now though, I stick by my approach. Probably more than 95% of the 56 commits I made for this small app were with a completely working system that could have been delivered to the end user at any point. In my mind this is incredibly important when dealing with legacy code!


Job Search 2021 _ Choosing The Right Role

So far in my current search I have been flooded with calls by recruitment agents with tantalising roles (reed.co.uk is the primary source of these, but LinkedIn & Indeed are candidates too)

As the recruiters are not all offering the same roles, there seems to be a lot of roles to suit me out there. In fact on LinkedIn I see roles popping up every few hours!

I have been working in the industry for a few years now, and I have been both an interviewee and an interviewer, so all of this tells me that the market is very active at the moment and therefore in the job hunter's favour. Good for me!

So for someone like me, it gives me the flexibility to pick the right role. I'm lucky enough to have saved up some money to use while taking my break from working, and I have started looking before it runs out too, so I'm not pressured into taking the first offer I receive.

This is why I have pulled out of a role after the second interview this week, and most egregious of all, declined a job offer!

So let me explain: the first role wanted a huge plethora of skills, most of which I have, but those skills are behind me in my career. I want to focus on the cloud, dotnet 5 and Blazor. Therefore I didn't feel I could learn much from them.

The second role had two stages of interviewing: the first a coding test, and the second a face-to-face interview.
Both went well, and I was looking forward to an offer, which was gratefully received.
But when I asked for the full paperwork and details of the package on offer, I was upset.

One of the major asks I have with this job hunt is to maintain a good work-life balance. To me this means flexible working, remote first, 35-hour weeks and a generous holiday allowance (25 days at least, right?). I want to take my son to school or pick him up, and with just these simple things, I can do that. To me, taking him to school is more important than getting paid large amounts of money. I have worked very hard in my career to get to the point that I can make that statement.

Studies have proven that shorter working hours make us more productive, and flexible hours improve that too. Microsoft Japan saw a 40% productivity improvement with a 4 day week

So, despite a job offer of good money, I turned the role down because the flexibility with hours and holidays was not available, even though it was remote first.

I'm lucky enough to have the skills that lots of companies are screaming for, so I have the luxury to say no and keep looking.

At the time of turning it down, I had 6 interviews still lined up and had turned off Reed.co.uk so I would stop getting recruiter calls.

Let's see how they turn out.

In conclusion, you have to know what you want to get out of a role and what it's going to help you do, but also whether it suits your life choices too.
An interview is a two-way street: you are interviewing the company too. But when an offer comes through, make sure you have all the details before you agree. Some of these concerns only came to light when the contract was seen, so don't rely on what the recruiter (or website/job ad) tells you; they get things wrong too. Remember they are dealing with a lot of companies, roles and candidates, so they may have been mistaken.
Also look for definite figures on job ads. What does "generous" really mean to you? In my case generous meant something very different.

The hunt continues, but I know I will find the right thing for me!


My Mentoring Journey

I have mentored many people throughout my career, helping lots of newbies into the family.
I would always do this as part of my job, either as a specifically assigned Mentor, or mostly as a Team Leader.

However these days, I am shying away from these kinds of roles in my career, and so I want to be able to continue helping people into the industry in my own time.

Some ideas I have to aid this:

  • Write posts on this site
  • Participate in #TechTwitter
  • Present at Tech Meetups (There are so many to mention here, but I follow loads on Twitter and you can always find more at the .NET Foundation)
  • Become a developer advocate at a public facing software company
  • Become a Microsoft Most Valuable Professional (MVP)
  • Sign up to sites that let you advertise yourself as a mentor, e.g. Coding Coach

Can you help me?

I'm looking for someone to mentor!

I have over 20 years of experience building software, and over 15 years leading teams and mentoring people.

If you would like some free mentoring, Contact me!


Making This Site More Accessible

I want to make this site as accessible for everyone as I can.

            What is Web Accessibility

            Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them. More specifically, people can:

              * perceive, understand, navigate, and interact with the Web
              * contribute to the Web

            Web accessibility encompasses all disabilities that affect access to the Web, including:

              * auditory
              * cognitive
              * neurological
              * physical
              * speech
              * visual
            
          

From https://www.w3.org/WAI/fundamentals/accessibility-intro/#:~:text=Web%20accessibility%20means%20that%20websites,contribute%20to%20the%20Web

Fortunately Google Chrome (other browsers are available), has a tool called "Lighthouse" in it, that can help you with this.

Using Lighthouse

First you need to run a report on your site:

  • Open your site in "Incognito mode"
  • In Chrome dev tools (hit F12), click the "Lighthouse" tab.
  • Run a report.

The report gave me this score for my site:
score
A score of 61 (the second one is Accessibility). I can do better!

Improving things

Under each score are recommendations for making the site better. Click each in turn, and it will show you what the issue is and how you can improve it using the "Learn More" links.

These were the things I fixed:

These last 2 issues are in the template for Blazor, so perhaps I can submit a PR to fix them. Something for another day.

Is it better?

Re-run the report, and I now get a score of 100%. WOOHOO!!!

How do we test our site in a screen reader?

If you want to see how a screen reader reads your site, this simple Chrome extension is great. It's called Screen Reader, and it reads whatever has focus on your page.


Customising The Blazor Loading Page

I sent this tweet out the other day:

And Adam suggested I needed a loading animation:

He is right, I do!

So off to Google I go, and I found this helpful page by Gerald Barre

Using this as a guide, I changed my index.html to have this in the "app" div:

            
              <div id="app">
                  <div style="position:absolute; top:30vh; width:100%; text-align:center">
                      <h1>Mark Oliver's World</h1>
                      <p><i class="fas fa-spin fa-spinner"></i> The application is loading...</p>
                  </div>
              </div>
            
          

Note: it's slightly different from the article, but not by much (the outer div is not an app tag).

Now, the page looks nice, but as Adam noticed when I published it, the animation was not working (that will teach me to deploy in the middle of a game with my son!).
It turns out that the animation itself is part of the Font Awesome pack (which is awesome), but it is not loaded by default in the Blazor template.
So I could just add it, but that's a lot of overhead for a swirly icon.
I have Bootstrap and Open Iconic currently installed, can they do anything?
Open Iconic - No, they are static icons.
Bootstrap - This has something, so let's use it:

            
              <div class="spinner-border" style="width: 3rem; height: 3rem;" role="status">
                  <span class="sr-only">Loading...</span>
              </div>
            
          

This works nicely. Thanks everyone!


Testing This Site

This site started as an easy way to learn Azure Static Web Apps, and an ego thing to figure out if I could build a WASM based blog.

I have satisfied those things for sure, but I have also realised I like writing these posts!

So I'm keeping this site around, which means the code I have written needs to be tested.
I abhor code that is not tested if it's being kept.

That said, it does not need to be "100% tested" (whatever that is), but the important parts should be tested.

This site has 2 parts to it:

  • A front end set of components written in Blazor.
  • A "back end" set of classes

Each of these require different ways of testing:

  • Front end - This requires testing the component as visual entities. This can be achieved in Blazor using bUnit
  • Back end - This can be tested using unit tests of the classes & methods

You may also want to consider running bigger tests that check that the "site" works in its entirety. These are End To End tests, or integration tests!

To test this way would require using something like Selenium. Considering this is my personal blog, I'm going to focus on unit testing the back end and the key front end components. I really don't need to put the effort into integration tests, and I won't learn much more about Selenium!
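
To give a flavour of the front end side, a bUnit test is only a few lines. Here is a minimal sketch; the BlogPostLink component and its Title parameter are hypothetical stand-ins, not components from this site:

              using Microsoft.VisualStudio.TestTools.UnitTesting;

              [TestClass]
              public class BlogPostLinkTests
              {
                  [TestMethod]
                  public void RendersTheTitleOfThePost()
                  {
                      // Bunit.TestContext is fully qualified to avoid clashing with MSTest's TestContext.
                      using var ctx = new Bunit.TestContext();

                      var cut = ctx.RenderComponent<BlogPostLink>(parameters =>
                          parameters.Add(p => p.Title, "My First Post"));

                      Assert.IsTrue(cut.Markup.Contains("My First Post"));
                  }
              }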

Back end testing

This type of testing is the most familiar to me. I do a lot of this type of testing every day. I also am pushing closer and closer to a TDD style approach!

I took one look at the core of my blog code and realised it was not very testable without mocking, or even making internals public.
This is not good, and it highlights one of the biggest benefits of testing your code: if you try to test it and it's hard to do, then your code is more complicated than it needs to be.
Testing as you develop (not after) teaches you so much about design and putting your code together that I often feel I get more out of it from that perspective than from the actual tests.

So the first thing I did was not to write any tests, but to refactor the core component of my blog so it is much more testable.
This involved refactoring one class into four classes, considering the "Single Responsibility Principle" the whole time.
This gives a much easier set of things to test, requiring only a single mock to remove the HttpClient dependency. Now it can be tested!
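
To give a feel for the shape this took, here is a hedged sketch with made-up names (the real classes differ): the class under test depends on a small interface rather than on HttpClient directly, so the tests only need one mock.

              using System.Threading.Tasks;

              // Hypothetical abstraction over "fetch me the raw markdown for a post".
              public interface IPostSource
              {
                  Task<string> GetRawPostAsync(string fileName);
              }

              // The logic class takes the interface, so a test can hand it a mock
              // (or a trivial fake) instead of wiring up a real HttpClient.
              public class PostProvider
              {
                  private readonly IPostSource _source;

                  public PostProvider(IPostSource source) => _source = source;

                  public async Task<string> GetPostAsync(string fileName)
                  {
                      var markdown = await _source.GetRawPostAsync(fileName);
                      return markdown.Trim(); // the real class does rather more than this
                  }
              }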

More to come another day on the Front end testing.


Job Search 2021 _ Technical Tests

I'm not a huge fan of taking or administering technical coding tests. Don't get me wrong, over the course of my career, I have done many of them, and have given many of them too.

They do have their place, especially if you are unsure of someone's skills. But most development jobs are way more than coding; I spend more time thinking than hammering at a keyboard outputting code.
The more time you spend thinking and designing rather than actually coding, the better the chance that you end up with a well thought out system with as few lines of code as possible.

In steps a recruiter

So in this round of my Job Search, a Recruiter told me I needed to do a full coding challenge before I would be considered for interview.

Now this was not just a quick test; it was building a solution from a simple specification, and they expected the code to be written to satisfy the requirements.

The challenge was this kata from the Coding Dojo.

"I Object!"

Now at first I objected to doing this challenge, primarily as it takes time away from my family, but in the back of my mind I was also thinking: I have been coding competently for 20+ years, this is evident on my CV!

Most jobs you go to would never ask you to do some work BEFORE you get to speak to them. Does a cleaner need to clean your house before you will interview them?
After all, I have an existing portfolio on GitHub, and 20+ years of experience on my CV.

This made me think again, as I have worked with people of all ages and experiences, and neither of those things explicitly mean they can do the job.

You have probably met many people in your career who have been doing it a long time and are not very good at it.
You have probably also met lots of people new to the family who have blown you away with their skills.

But with a thorough interview, you can identify these people. If you're not sure, you can then administer a test of some kind (or pass on them and find someone else).

In the end I did the task!

But "why?" you ask

Well I decided to take it as a learning opportunity myself.

  • I wanted to flex my TDD skills to check they have not become rusty.
  • I wanted to solidify my Blazor Web Assembly skills.
  • I wanted to spend a bit more time with GitHub Actions and Azure Static Web Apps.

The thing is, the kata did not require a website; all the recruiter wanted to see was how I solved the problem. I could have done that with a console app, or even by just writing some tests on a single class.

However taking the time to build a full system gave me some benefit, and therefore allowed me to satisfy the request of the recruiter as well as continue my learning journey.

The finished app is running here and the code is available here.

What did I learn from it?

The biggest thing I learnt was not to dismiss the technical test before an interview, but to take it as an opportunity to learn something else.
If I am asked to do another one of these, I can now either show them this existing one, OR I can take it as a learning experience; perhaps next time I'll learn some React?

As an industry, the way we are approaching recruitment is certainly changing, but I'm not sure it is for the better. For now I need to just keep learning.


Secrets Within Blazor WASM

A Blazor WASM app runs wholly on the client, therefore we cannot ask the server to store our secret and access it privately.

So how do we keep our secrets safe, yet still not commit them hard coded in our code in our repo?

The simple answer is YOU CANNOT. So don't commit any secrets in WASM code, as it will end up at the client.

This security process should be done on a server, so perhaps a simple Azure Function whose job is to do the secret-protected work once the request is authenticated.
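
As a sketch of the idea (the function URL and route here are made up): the WASM client never holds the secret, it just calls a server-side endpoint that does.

              using System.Net.Http;
              using System.Threading.Tasks;

              public class ForecastClient
              {
                  private readonly HttpClient _http;

                  public ForecastClient(HttpClient http) => _http = http;

                  // The Azure Function behind this URL reads its API key from
                  // server-side configuration and calls the protected service for us,
                  // so no secret ever ships to the browser.
                  public Task<string> GetForecastAsync() =>
                      _http.GetStringAsync("https://my-functions.azurewebsites.net/api/forecast");
              }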

WASM is great and all, but if you need secrets, then it's not for you!


Why Does It Feel Weird That People Are Reading My Blog

I posted on Twitter that it felt weird that people were actually reading my blog, and Kuvondik Sayfiddinov asked if I could describe that feeling.

So here goes...

I started this blog as a learning experience on how to build a blog site in Blazor For FREE:
How I Built This Blog
How I Built This Blog Part 2
How I Built This Blog Part 3
Setting Up My Custom Domain

Writing content itself was always secondary to that.
However the more I write posts that describe my journey developing the website, the more I have felt that I want to write.

I have written a couple of non technical posts now too:
Its Been A Year
Being the Executor of Someones Will
Being a 5 Year Old

So that weird feeling is there because I'm not sure I want people to read these things, but I absolutely love that they do.

Do I want the world to see the pain I feel over the loss of my brother?

Do they need to see how inept I am at UI (just look at it)?

Do they need to hear the ramblings of a tired 43 year old parent?

Well no, but I also want to be able to expand this website, and traffic to it, will encourage me to do that.

The other big desire I have is writing about my technical challenges. My last job was so bleeding edge that I learnt so much. I regret now not writing more of that journey down (certainly publicly).

I was taught in my very first software roles 25 years ago that we had to keep a log of what we did, I have always kept notes, but in recent times I have found they have got less and less.

So I'm here to remedy that too, and record my learnings just for me. If someone else reads them, then great, but they are mostly for me and my journey.

One day I will aim for an MVP or something like that, but right now, this is for me.

This is why it feels weird, it feels like you are reading my diary, which I left in full view, taped to the front door, wide open and in a large font for everyone to read.

Mark


Getting Twitter To Work

Hello again, in How I Built This Blog Part 3, we were unable to get Twitter to render the timeline on our page, nor could we get it reliably showing the "Tweet me" button.

The "Tweet me" button.

It's simple really, we add

            <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
            
          

and

            <p><a href="https://twitter.com/intent/tweet?screen_name=MicbOliver&ref_src=twsrc%5Etfw" class="twitter-mention-button" data-show-count="false">Tweet to @@MicbOliver</a></p>
            
          

and we should see a "Tweet me" button.

However, this does not always render.

Sometimes it shows on a refresh, sometimes on an initial load, sometimes never.

So we forced Twitter to load with an

            await JSRuntime.InvokeVoidAsync("twttr.widgets.load");
            
          

in the OnAfterRenderAsync() call of the component. The lifecycle docs say this is sensible.

This makes it work "sometimes".
So what about a

            this.StateHasChanged();
            
          

as well.

Nope! DOH!

This part of the documentation concerns me:

            Even if you return a Task from OnAfterRenderAsync, the framework doesn't schedule a further render cycle for your component once that task completes. This is to avoid an infinite render loop. This is different from the other lifecycle methods, which schedule a further render cycle once a returned Task completes.
            
          

Oddly, moving that code out of the MainLayout.razor file and directly into the component that is rendering the Twitter content gives me the result I need. But why?
The component is going to be rendered within the MainLayout, so why would that help? It should still be in scope, right?

OK, let's move the Twitter invoking code to its own component, and put that in the page that needs it.

Yes, that works OK.

So let's have a component whose job is to say "Come and tweet me" with the Twitter mention button. I can then bundle it all up in one thing and include the component wherever I want.
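
Something along these lines (a sketch of the component, not the exact one on this site):

              @* TweetMe.razor - a self-contained "Tweet me" component *@
              @using Microsoft.JSInterop
              @inject IJSRuntime JSRuntime

              <p>
                  If you want to reach out, catch me on Twitter
                  <a href="https://twitter.com/intent/tweet?screen_name=MicbOliver&ref_src=twsrc%5Etfw"
                     class="twitter-mention-button" data-show-count="false">Tweet to @@MicbOliver</a>
              </p>

              @code {
                  protected override async Task OnAfterRenderAsync(bool firstRender)
                  {
                      if (firstRender)
                      {
                          // Ask the already-loaded widgets.js to scan this component's markup.
                          await JSRuntime.InvokeVoidAsync("twttr.widgets.load");
                      }
                  }
              }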

Awesome that is one down!

Timeline display

So when I add in this line to embed my timeline on the page:
<a class="twitter-timeline" data-width="500" data-height="600" href="https://twitter.com/MicbOliver?ref_src=twsrc%5Etfw">Tweets by @@MicbOliver</a>

it all works fine, until I navigate to another page.

Then I get these errors

              crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
                    Unhandled exception rendering component: Cannot read property 'removeChild' of null
                    TypeError: Cannot read property 'removeChild' of null
              Microsoft.JSInterop.JSException: Cannot read property 'removeChild' of null
              TypeError: Cannot read property 'removeChild' of null
              blazor.webassembly.js:1 crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
                    Unhandled exception rendering component: No element is currently associated with component 22
                    Error: No element is currently associated with component 22
            
          

This suggests that Blazor is trying to clean up DOM elements that the Twitter JavaScript has changed after render.

I currently have no idea what this is, so I'll come back to it tomorrow.
Please tweet me if you have any ideas.

UPDATE:
Well I sent a tweet out into the world, and some lovely people tried to help.

They suggested several things, but the solution was really simple, and thanks to Kristoffer Strube it now works.

He said that I just need to put the Twitter link in a paragraph tag, and then it won't get confused:

            Also, a fix for the last problem is to encapsulate the timeline in a paragraph or div tag since the Twitter JS removes the anchor tag which is the outermost tag in the page/component for which Blazor needs a reference.
            
          

You can see it working on the front page!
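
In concrete terms, the timeline markup just gets wrapped like this:

              <p>
                  <a class="twitter-timeline" data-width="500" data-height="600"
                     href="https://twitter.com/MicbOliver?ref_src=twsrc%5Etfw">Tweets by @@MicbOliver</a>
              </p>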

Simple really. Thanks so much Kristoffer!


Being A 5 Year Old

My son is now 5.

Its a wonderful time. He is reading and writing now.
Lockdown homeschooling at the start of 2021 improved both of these significantly.

But despite him knowing more about space, astronomy, the countries of the world and all manner of other things (thanks YouTube) than most people will ever want to hear about, he is still 5.

5 is a great time, his personality is asserted, his character is alive and vibrant.

He however is still 5. They are irrational and stubborn, and so full of life and energy.

I love him so much, but he is exhausting especially in lockdown. We cannot go to the park or the beach or see his friends and family. I fear it is negatively affecting him. We won't know for years if that is the case, but I'm very conscious of it.

Right now he is lying on the sofa leaning on his table and drawing the Numberblocks. What an amazing show on CBeebies, teaches numbers and math in such a fun way that they have no idea they are learning it.

Us parents love it too.

I don't know why he suddenly decided to draw them today, yesterday we drew the planet Jupiter, which I'm particularly proud of. He guided me in the drawing, so can you guess which is which?

A drawing of Jupiter by me A drawing of Jupiter by my son


Its Been A Year

January 2020 I lost my twin brother at the age of 41. I wanted to write something about him.

A year has passed since you left. It feels only yesterday.

The year has gone so fast, and with so much pain for not only the family, but the world too.
Its been so hard not having you here, but even harder that I am unable to be with the rest of the family.

You have missed so much of my son this year, going to school, becoming 5. He has grown so much, and misses his Uncle D, and all of his friends and family because of COVID-19.
I cannot give him the life he wants, only the safety he needs.

We miss going to the parks, seeing friends, hugging people. Even simple things like walking past people on the same pathway.

My anxiety has grown so much in this time that I struggle to leave the house, and I'm so paranoid about cleaning hands and avoiding people.

This year has been full of fear, and sorrow.

10:55am 8th January 2020 will forever be the worst moment of my life. It was the moment that you left us. Mum and I were there holding your hands as you went peacefully. I felt you leave. I still do now.
You just stopped breathing, and left. You knew Mum and I were there with you, and you decided you had fought enough, and just stopped. It was the best for you, and the worst for us. I don't wish you had stayed, you were never coming all the way back, but I still wish you were here.

My partner in life, childhood, innuendo, joy, happiness, sorrow, movies and gaming.

You are missed everyday.

I love you D.


How I Built This Blog Part 3

Yesterday I watched a great talk by Stacy Cashmore. In that talk she described a very similar process to mine to build her own branded Blog.

So I thought I would use her new Blazor based blog as an inspiration to some improvements to mine:

  • Add Favicons
  • Show my Twitter feed.
  • Add a picture of me.

So lets get on with it:

Favicons

I was going to write a post about adding favicons, but Dave Brock has already done such a good job, so I'll just drop a link here.

First I needed a picture to create a FavIcon.
I don't have anything I want to use, so I started to look at making a caricature of my standard profile pic. There are LOADS of websites that will do it for you, but I didn't like any of the options I tried.
I did like this one, but it does not fit my needs:
Iron man me!
I did consider commissioning one, but decided against it for now.

So my next thought was "Word Art" - yes, I am that old!
So a quick Google led me to Wordart.com
The effects there are cool, and I ended up making this:
My favicon in the shape of a floppy disk

Now I have the image I want to use, it's time to visit Real Fav Icon Generator and generate my icons.

Then simply follow the instructions, copy the provided HTML to index.html in the head tag, and copy all the supplied files to the wwwroot folder.

That's it we are done!

EDIT - I think I need to get a less busy image; it's not obvious what it is, but for now it will stay. Any thoughts are appreciated.

Show my Twitter Feed

I want to have my Twitter feed showing on the blog somehow.
Now this is normally quite simple; Twitter do the hard work for you. Go to their Publish page and they give you full instructions, and a small amount of code to add.

So I have added 2 options, the embedded one goes on the home page, and the "Tweet Me" option goes in the footer of my posts.

So I need to add this to the bottom of the body tag in index.html

            <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
            
          

Then in Index.razor I need to add this for the embedded timeline:

            <a class="twitter-timeline" href="https://twitter.com/MicbOliver?ref_src=twsrc%5Etfw">Tweets by @@MicbOliver</a>
            
          

And then in the bottom of my BlogPage.razor to add the Tweet to me button:

            <p>If you want to reach out, catch me on Twitter  <a href="https://twitter.com/intent/tweet?screen_name=MicbOliver&ref_src=twsrc%5Etfw" class="twitter-mention-button" data-show-count="false">Tweet to @@MicbOliver</a></p>
            
          

Well, annoyingly, this does not work. It is a Blazor WASM problem. I don't believe the JavaScript in widgets.js is being invoked!

So I found this Stackoverflow answer that suggests I just need to invoke the twttr.widgets.load() method via javascript. This also matches what Twitter say

So let's do that, by adding this to MainLayout.razor:

              @code {
                  [Inject]
                  public IJSRuntime JSRuntime { get; set; }

                  protected override async Task OnAfterRenderAsync(bool firstRender)
                  {
                      if (firstRender)
                      {
                          await JSRuntime.InvokeVoidAsync("twttr.widgets.load");
                      }
                  }
              }
            
          

You can also tweak the views somewhat with these extended options

But this then generates these errors on loading any page other than the homepage:

              blazor.webassembly.js:1 crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
                    Unhandled exception rendering component: Cannot read property 'removeChild' of null
                    <snip>
              blazor.webassembly.js:1 crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
                    Unhandled exception rendering component: No element is currently associated with component 23
                    Error: No element is currently associated with component 23
                        at e.updateComponent (https://localhost:5001/_framework/blazor.webassembly.js:1:31703)
                    <snip>
                        at Microsoft.JSInterop.WebAssembly.WebAssemblyJSRuntime.InvokeUnmarshalled[Int32,RenderBatch,Object](String identifier, Int32 arg0, RenderBatch arg1)
                        at Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer.UpdateDisplayAsync(RenderBatch& batch)
                        at Microsoft.AspNetCore.Components.RenderTree.Renderer.ProcessRenderQueue()
            
          

This tells me that Blazor and Twitter are not getting on!

So is this just to do with the embedded code? Let's remove it and see what happens.
Well, we no longer see the error, but the "Tweet button" on the blog posts only displays on a refresh of the page, and generates no errors. Hmmm, this says we are not firing the JavaScript for Twitter at the right time.

There is definitely a timing issue here, it does not always render the twitter stuff on a page load, but sometimes it does.

So it looks like I need to get the Twitter loading in the actual component on the page, and NOT in the MainLayout.razor file. I guess this is happening too early.
This is strange, according to the Blazor lifecycle docs, this is the last event to fire.

Ok, so we have 2 issues:

  • The Embedded timeline option generates an error in Blazor.
  • The "Tweet me" button does not always show up on every page render.

This is a new post in itself, so I'll carry on in Getting Twitter to work.

Add a picture of me

I like what Stacy has done with her blog, showing a picture of herself on the left. This suits me too (as it is "my" brand). But for now, I'm going to just add it in the About Me section.
I'm going to use the caricature and my standard profile pic in there, as they show some humour, which is always good.

These are just going to be standard images, with "alt" text for accessibility.


Error When Incorrect Post Uri Used

I noticed that when an invalid Uri is used to access one of these Blazor-based blog posts, we don't get a 404 error (see Why Can I Not Jump Direct To A Blog Post?); we get an error in the console thrown by the code base.

What is going on? Off to the debugger we go.

Yikes, I cannot debug it!!

The error is :

              blazor.webassembly.js:1 crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
                    Unhandled exception rendering component: Arg_NullReferenceException
                    System.NullReferenceException: Arg_NullReferenceException
                       at MarkOliverBlog.BlogPostReader.PopulateSingleBlogContents(String title)
                       at MarkOliverBlog.BlogPostReader.GetPost(String title)
                       at MarkOliverBlog.Pages.BlogPage.OnParametersSetAsync()
                       at Microsoft.AspNetCore.Components.ComponentBase.CallStateHasChangedOnAsyncCompletion(Task task)
                       at Microsoft.AspNetCore.Components.ComponentBase.RunInitAndSetParametersAsync()
                       at Microsoft.AspNetCore.Components.RenderTree.Renderer.GetErrorHandledTask(Task taskToHandle)
            
          

OK, I have a NULL somewhere in the code, but that is not really the concern: why can I not debug it?

The clue here is the "OnParametersSetAsync()"
As per the docs (Note we are on .NET 5 here):

            Hit breakpoints during app startup before the debug proxy is running.  This includes breakpoints in Program.Main (Program.cs) and breakpoints in the OnInitialized{Async} methods of components that are loaded by the first page requested from the app.
            
          

Right, the problem is I cannot debug in the app startup routine.

So I should check the lifecycle of the app, which should tell me where I can put this code correctly.

UPDATE - Hang on, this is not the same step, this is the step after that. I should be able to debug this!!!

I'll come back to you!


Job Search 2021 _ Using Your Friends

When I started looking for a new job, my friends network was my first port of call.

This involved talking to my friends, telling them I'm looking, and a post on Facebook.

This garnered a few responses from friends telling me about positions open in the companies they work at.

Awesome start!

The companies highlighted to me by my friends were:

Then it was just a case of evaluating the positions available against my expectations.

Once I determined if the company and open roles were a good fit, I went through the "apply" process, which is always different for each company.

I also try and follow up via a LinkedIn contact, or through my friend to give a more personal response to my application.

Update 28/04/2021

Use your network too!

I have been contacted directly through Twitter for 2 excellent roles. One person found me because of my interaction on some articles in LinkedIn and the other because of a "shoutout" post asking if anyone knew of a position.


Job Search 2021 _ Where To Look

Now I have set up my shop front, it's time to look for a job.

So I'm going to use my network to start with, before going to the professional recruiters. I like the idea of finding a job through my network without relying on external help. Besides, it's often more cost effective for the employer too, so that might go in my favour.

This is the order in which I'm going to approach looking for a position:

  • Your friends
  • LinkedIn
    • Your network
    • The job board
  • Twitter

    • Your network
    • Use those hashtags to send a post to hit all the bots
    • Use those hashtags to find job posts
  • Virtual Meetups

  • Job Boards

  • Contracting options

Once I have exhausted these routes, professional recruiters will be next.
At present I am not pushed for time, so it will be interesting to see whether these routes bear any fruit.

Will report back soon.


Job Search 2021 _ Initial Steps

After taking a long and well-earned rest, it's now time to venture back into the working world.

So I thought I would chronicle my journey.

First thing I did was ensure my LinkedIn profile was up to date.

I use LinkedIn to have a fuller account of my career history, as it's not as limiting as a CV.

I added in missing courses and any missing details, improved my avatar, and tailored my "description" to reflect the kind of position I want now, which is not the same as I had previously (more on that later).

When I had LinkedIn all updated, it was time to look at my CV. I had not updated it in over 2 years, and looking at it, it felt a bit lacklustre.

I have been working in software dev for over 20 years, so it contains a lot of information.
Although all of that info is accurate, and describes more about me and my skills, not all of it is relevant today (you would have to pay me a HUGE amount of money to work with Borland Builder again!).

After working in software dev for many years, I have always been good at leading, mentoring, and directing people and projects. My career has always moved this way, primarily due to money - 'cos that's where the big bucks are, right?

So after a lot of soul searching in this latest break, I have realised that I enjoy mentoring people, but I don't really want to lead a team anymore, what with the associated management headaches that inevitably go with that role.

So I have decided to focus solely on the development work. I will look at a way to mentor people outside of the office if there is no opportunity inside.

So this is what I am looking for:

  • Somewhere I can learn and build on my skills.
  • Make a difference to the world, not just in making people money.
  • Follow Clean Code and TDD practices.
  • An excellent work life balance.
  • Work with a team that cares, not just for the products we build and the users/clients, but the people we build them with!

So with that in mind, I updated my CV, giving it a more modern look, ensuring it fits on 2 pages only, and focussing on the skills I want to keep using.

I was surprised how long this took me to do. It's easy to write facts about yourself, but keeping it relevant, within 2 pages, and showcasing yourself is really hard. I spent about a week working on it. First I picked a "style" I liked that said professional, but also modern. Then that dictated how I could display things. I ended up with it in 5 sections:

| _______________________ |
|Contact  |      Name                 |
|Details   |                                 |
| _______________________ |
|   2 Profile paragraphs             |
| _______________________ |
|           |                                    |
|           |                                    |
|           |  Career History           |
|           |                                    |
| _______________________ |

The column on the left is split into:

  • Technical skills
  • Achievements
  • Education
  • Training
  • Interests & Other information

Now I have the "paperwork" in place, its time to start applying...


How I Built This Blog Part 2

So we left it as this needs to be done:

  • The posts ordered in date order.
  • The list of posts to be determined programmatically not from a hard coded list
  • Somewhere to store images for the posts for free (not my google drive account as at present)
  • A way to feedback on a post - preferably via Twitter.

Post ordering.

This is achieved by storing the date with the post contents. Then we can just order the list based on date.
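
In code that amounts to a one-liner, assuming a post type with a Date property (my naming here, not necessarily the exact code):

              using System.Linq;

              // Newest posts first, exactly as the contents list expects.
              var orderedPosts = posts.OrderByDescending(post => post.Date).ToList();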

List of posts programmatically

Currently, the list is hard-coded in the source. As a move away from this, an index file has been created which contains all the posts and the dates they were posted.
This is easy to update directly when a new post is created. It does not require any code to be changed or compiled, just a file updated in GitHub.

This is not the completed approach; the intention was always to read this info directly from the GitHub repo, so that will be next. Unfortunately, due to the CORS issue, the post itself cannot be read, but the list of posts can be.
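
To illustrate the approach, here is a minimal sketch of reading such an index over HTTP; the file name and record shape are hypothetical, not the site's actual format.

              using System;
              using System.Collections.Generic;
              using System.Net.Http;
              using System.Net.Http.Json;
              using System.Threading.Tasks;

              // Hypothetical shape of one entry in the index file.
              public record PostIndexEntry(string Title, DateTime Posted);

              public class PostIndexReader
              {
                  private readonly HttpClient _http;

                  public PostIndexReader(HttpClient http) => _http = http;

                  // Reads the whole index in one call; ordering by date happens afterwards.
                  public Task<List<PostIndexEntry>> GetIndexAsync() =>
                      _http.GetFromJsonAsync<List<PostIndexEntry>>("posts/index.json");
              }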

Somewhere to store images for the posts for free (not my google drive account as at present)

This was easily resolved when I figured out how to put the blog posts up to the Azure Static Web App. I can do the same with the images, so they are stored in GitHub and hosted in Azure for free.

A way to feedback on a post - preferably via Twitter.

TBD


Being The Executor Of Someones Will

Being an executor is a huge responsibility, that I did not realise at the time of being asked.
Mainly it is lots of letters, phone calls and emails, but also visits to the bank.

You need to be an excellent organiser and ideally know how to make spreadsheets too.
There's so much to do: Gather all monies, pay debtors, claim credits, sell possessions, empty homes, clean, tidy, possibly paint.

So much to do in so little time.

On top of that, you may need to deal with grief, depending on who asked you to be their executor. Most likely you knew them well, otherwise they would not have asked you. That makes it so much harder.

How do you do this important role and grieve? How do you live your own life and deal with it all? It's hard! You get through it. But don't accept the job without thinking it through. It is hard!

I would say it has helped me grieve too, but I also dread the work. It feels like letting go when I'm just not ready.


Why Can I Not Jump Direct To A Blog Post

So I thought I had got my blog up and running in Azure Static Web Apps. It is displayed in a browser fine. I even set up a custom domain; however, I cannot publish any direct post links, as they generate a 404.

It turns out this is a known issue (thank you, Google), and I can resolve it.

The problem is that because I have a Blazor WebAssembly app, there is no server to process the request at that blog post endpoint.
What I need to do is redirect the link back to the home page; then Blazor's dynamic routing will kick in and direct it to the correct page.
It's all down to the fact that the app is not loaded into memory UNTIL the home page has been downloaded to your browser and generated, and can therefore do the routing.

So I need to configure a "Fallback Route".
This is done in a staticwebapp.config.json file, which should be in the app_location as defined in my GitHub Action file.

The following configuration will send all requests (except images/css) back to index.html. This should then kick the dynamic routing back in, and access the real page.

              {
                "navigationFallback": {
                  "rewrite": "/index.html",
                  "exclude": ["/images/*.{png,jpg,gif}", "/css/*"]
                }
              }
            
          

So this direct blog page link now works https://blog.markoliver.website/BlogPage/How-I-Built-This-Blog instead of giving a 404!


Setting Up My Custom Domain

So the blog is now hosted in Azure Static Web Apps, but it's not great using the default random hostname they give you.

Custom Domains to the rescue!

Azure Static Web Apps give you this for free (including HTTPS!), and it's really easy to set up.

You can follow the official docs here

You need :

  • a registered domain.
  • Access to its DNS settings to set a CNAME record
  • The Azure Portal.
  • a sub domain name you like (you cannot use the root domain, so blog.xyz.com is ok but not xyz.com)

In your DNS settings, add a CNAME record with HOST NAME set to "blog" and "points to" set to the URL of your static app. For example:
If I want to set up blog.markoliver.website as my blog website pointing at my static web app called zealous-pond-0d09d0c03.azurestaticapps.net, I set HOST NAME to "blog" and "points to" to "zealous-pond-0d09d0c03.azurestaticapps.net"

Then in the Azure portal, locate the "Custom domains" option under the static web app you want, add "blog.markoliver.website" in the top box, and hit the Validate button. Wait a few minutes, and you are done.

Now after a few hours (DNS takes a while to propagate around the world), you can visit blog.markoliver.website and you will find the Azure static web app. All for free (excluding the domain registration)!

Simples!


My OBS Configuration

Note - I don't do any streaming, nor am I a speaker at events. However, I do attend a lot of online virtual events and have multiple video calls a day, and I always try to have my camera active.

Problem

My "Office" is in my bedroom, when I am on a video call, I want some of the bedroom filtered out, not everyone wants to see our dirty underwear on the floor!

Some video conferencing apps don't have a "blur background" option, and those that do tend to cause headsets and ears to appear and disappear as you move your head, which I personally find very distracting.

I want to be able to promote my contact details (Twitter) actually on my video feed.

Requirement

My single video camera feed should mask out a lot of the background view of my bedroom, focussing on the person sat in the middle of the camera view.
It should also display my name and twitter handle on the screen.

Solution

There is an awesome free bit of software out there called OBS (Open Broadcaster Software), which many streamers use to make their broadcasts awesome.
It allows you to create a "Virtual Camera" and do lots of cool things with it: add overlays, insert pictures, videos; the list is huge.
This sounds like you would need some expensive hardware/software to pull off. I can assure you that is not the case. The better the PC you have, the more you can do with OBS, but even for just a "work call", you can make your video feed better.

Now I won't go into how to use OBS; there are so many good guides on the web already, specifically their own guides, so if you are new to OBS, read some of those first.

Hardware Configuration

I have a single HD webcam with an inbuilt Microphone.
I also use a very cheap headset.
Total cost of these 2 things £25 from ebay.

OBS Configuration

Startup

  • Virtual camera turned on at startup.
  • "Studio mode" enabled on startup: "C:\Program Files\obs-studio\bin\64bit\obs64.exe" --startvirtualcam --studio-mode

Scenes

I have 4 scenes:

  • Normal Camera
  • Just head
  • Captions
  • Screen Share

Each scene is based on the previous to build into a final view.

"Normal Camera" scene

This is a scene with the webcam and the audio settings to be used.
No changes to the video inputs.
My audio input has a Noise Suppression filter.
So it's just a scene with 2 sources:

  • Video Capture Device
  • Audio Input Capture

"Just head" scene

This scene is to try and make the view focus on "just my head". I'm not going to crop the feed to just be that space; I want to give the viewer a sense of the place I am in, but not allow them to see the detail.

To achieve this, the scene has the previous "Normal Camera" scene as its only source.
The reason to do that is that I can now manipulate what the "camera" looks like by overlaying things on top, without adjusting the actual video feed. This means I can use the original feed in any scene I like and adjust it how I feel, e.g. one scene can have it cropped to just my head, another can colour it blue.

Then a filter is applied to the SCENE, NOT the "Normal Camera" source.
The filter is "Image Mask/Blend".

This filter will allow us to set up a blur-type effect for most of the camera view, leaving a very clear view in the middle (where I am on the screen).

I got the idea from this post

I use a round image with a solid middle, and fading out to a transparent rectangle. e.g.
Background Mask image

I achieved this using paint.net:

  • Create a 480x320 image.
  • Select everything, and delete it which will make it transparent.
  • Then set the primary brush colour as #29FD2E
  • Use a big paint brush about 2/3 of the image size, drop a non opaque circle in the middle.
  • Then create a brush that is 3 times bigger than the image with a reduced hardness, and paint in the middle again.
  • You will need to play with this to achieve the blur you like in the view. This is a bit of trial and error, and depends on how much you want to blur out on the screen.

Once you have your image saved, use the "Image Mask/Blend" filter on the "Just Head" scene, add the path to the image, and change the "Type" to "Alpha Mask (Alpha Channel)".

This should then give you your camera view with a partially blurred background with a centre "Spot light" for you to be within.

Now you have the basic framing for your camera, you can add additional features, such as screen captions...

"Captions" scene

This has 2 sources:

  • "Just Head" scene from above
  • Text (GDI+)

The text element has my name, twitter handle and email address on a grey blue background with a bit of opacity, and a nice clear font.
This is then placed at the bottom of my scene.

That's all folks

So using 3 scenes, I go from a full view of my bedroom for anyone looking at my camera, to a view focussed on my head, with my contact details displayed on the screen.

Before on the left, After on the right:
OBS Before and after

But wait, you said there are 4 scenes, what happened to the screen share scene...

"Screen Share" scene

This is a scene with the "Captions" scene as a source, which is shrunk to the bottom right hand corner.
Then a new source is added called "Screen Capture", which is aimed at one of my displays to show what is on the screen in my video feed.

This allows me to "screen share" while also being on screen still. This gives anyone I am talking to a more personal screen share. (Excellent Pair programming option)


How I Built This Blog

I wanted to build a blog website, as it's all the rage, but primarily because it's a great way to learn more about Blazor WebAssembly (WASM).

My thinking was that I can host the blog for free in Azure Static Web Apps, and run the dynamic parts using Blazor WASM.
Most of a blog is static, so seemed a sensible start.

One thing I wanted though is the blog posts themselves to be loaded dynamically.

Dynamic and static do not easily go together; hence Blazor, where you can run your C# code in the browser directly and thus create dynamic things.

However as you are running in a Browser you are also in a security sandbox, and that is going to be a headache for loading the posts.

My first thought was to hold all the blog posts as Markdown on GitHub. I can render the markdown directly as HTML using the Markdig NuGet package.
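
Markdig makes that conversion a single call, roughly like this:

              using Markdig;

              // Turn the raw markdown of a post into HTML ready to render.
              var html = Markdown.ToHtml("# Hello\nThis is a **post**.");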

My second thought was to host the posts and images in GitHub. However, due to the sandbox constraints, it would seem a Blazor WASM app cannot access a "Raw" version of a GitHub file. It hits the CORS restriction, which seems to be a bug in Blazor here.

So version 1 of this blog is set up like this:

Well, this is all working fine, but I want it to be better:

  • The posts ordered in date order.
  • The list of posts to be determined programmatically not from a hard coded list
  • Somewhere to store images for the posts for free (not my google drive account as at present)
  • A way to feedback on a post - preferably via Twitter.

More to come another day.