GLPI – Ripe for the Injecting

I’ll move on from GLPI eventually and start working on some interesting technical stuff, but today is not that day.

We ditched GLPI after we got hit by an accidental SQLi from HaveIBeenPwned – in short, version 9.4.5 is vulnerable to an SQL Injection flaw. You can exploit it by sending it an email (say, to helpdesk@company.tld) and once the email gets automatically turned into a ticket and assigned, the SQL will be executed. This affected us because the obscenely simple execution string was included in the header of the haveibeenpwned email notification.

I’ve just been poking around GLPI again (we have kept it around for non-end-user stuff, isolated and kept out of reach) and noticed that there was a “telemetry” scheduled task in the list of Automatic Actions which got me curious.

GLPI have decided to publish some of their telemetry data, which is nice of them. But it shows that there’s still a significant number of users running 9.4.5 and older.

Of the installs that reported telemetry in the last year (and only installs on 9.2 and above do this), 14,313 are on a version at or below 9.4.5, whilst 26,985 are on 9.4.6 and above. Over 34% of reporting GLPI installs are potentially* vulnerable to this painfully simple exploit, while over 12% of installs definitely still are, as they’re on 9.4.5 exactly.

*Potentially, but I suspect only 9.4.5 is vulnerable – they fixed it by accident in 9.4.6 here which looks like a response to an issue that appeared in 9.4.5.
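
For transparency, here’s the back-of-the-envelope maths behind that 34% figure (a quick sketch in Python using just the two totals quoted above):

    # Rough share calculation from the telemetry counts quoted above.
    at_or_below_945 = 14_313   # installs reporting a version at or below 9.4.5
    on_946_or_newer = 26_985   # installs reporting 9.4.6 or newer

    total = at_or_below_945 + on_946_or_newer
    share = at_or_below_945 / total
    print(f"{at_or_below_945:,} of {total:,} installs ({share:.1%}) are on 9.4.5 or older")
    # -> 14,313 of 41,298 installs (34.7%) are on 9.4.5 or older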

We learned a lesson with the GLPI issue – keep your software up to date. Though to be fair to us, it was up to date according to their website at the time. There was a newer version available (9.4.6) but that wasn’t advertised anywhere.

I hope these out of date installs get updated. We know there’s a lot of malicious activity out there, but at the same time… accidents can happen.

Iced

I love how ice looks when it forms on something smooth and flat like glass. If I didn’t have to scrape it all off in a hurry to get to work I’d stare at it for at least two minutes. What? It’s freezing out there…

Tales From Tech Support 01: Percussive Maintenance

If you’ve worked in IT for more than a year you probably have some crazy tales to tell. I certainly do – with nearly 15 years in the field I have seen insanity and hilarity – more than I could ever remember. So I thought to myself, why not write it all down somewhere?


This first tale comes from back in my early days.

As a PFY, I was trusted with little more than basic desktop repairs and printer toner replacements – a fairly common slice of life for many IT bods. I was relatively fresh-faced, a few scars but nothing major. Eager to learn and eager to please, I was often the first to raise my hand and take any challenging job, despite the vast gaps in my knowledge. Our team was small – three techs (two general helpdesk end-user support PFYs, one mobile device repairs) with one very isolationist “Network Guy” (who would now be called a SysAdmin) and one “Database guy” (although their only qualification was knowing what “SQL” stood for).

It came as quite a surprise when, early on a sunny Monday, we were told that the Network Guy was leaving. And he wasn’t being replaced.

It quickly became apparent that the entirety of his duties was to fall on me and PFY#2, who had been working at this place for a bit less than me. Network Guy’s last day rolled around without much issue, nor much communication from him – in fact at one point I asked for his help with a network issue and he told me “I don’t care, I’m leaving!” So it came as a bit of a surprise (and equal amounts of relief) when he called me and PFY#2 up to his office to give us his handover document.

His handover document consisted of a single sheet of A4 with handwritten notes about a few things barely qualifying as useful. A few IPs and other miscellaneous details about servers and switches, the odd issue he knew about but hadn’t fixed, and maybe a password or two.

We received the document at the door to his office (we weren’t tolerable enough to go inside on this occasion?) and were quickly shooed off to our regular helpdesk support duties.

That evening, he left and we never saw him again.

The following weeks and months were absolutely insane. I can’t recall much about what happened during this time. PFY#2 and I managed to keep on top of most of the helpdesk support calls and make a start on untangling the network. We quickly found that most switches had essentially been unboxed and plugged in without any config changes, servers were unpatched and had uptime into the hundreds of days (you could tell when the last power cut had been by looking at the uptime), and we were getting very close to the limit on resources – maxed CPU and/or RAM, HDDs filling up. Group Policy was a mess, roaming profiles were reaching into the tens of gigabytes with nothing preventing their growth… it was all a mess.

To top it off, out the back of Network Guy’s office was another small closet containing almost all the servers we had. Neither of us had ever been back there, and when we did we found dust, cobwebs and equipment that seemed to be switched on but we had absolutely no idea what it did. Helpfully, the single sheet of A4 told us some server names and serial numbers.

We were fueled by Red Bull and the long (long) days began to blur into one massive learning experience. To this day I have never learned so much so quickly as I did back then as we fought to keep the place running, the users happy(ish) and continue to learn as much as we could.

There was one event, though, that utterly stumped us.

Cheerily and awesomely smashing the helpdesk as we were, we suddenly had calls coming in about email being offline. After a quick check we realised that, yep, email was down. We couldn’t RDP into the Exchange (2003) server either – something was wrong, clearly. Off to the dusty old Network Guy’s office we go.

We walk in, grab the single 4:3 CRT monitor in the room, stretch its cables across the room from a power extension with a spare socket over to one of the nearby tables, and plug the crusty old VGA cable into the back of the absolute beast of an Exchange server. I mean, this thing was huge. An old tower, black, solid steel everything. No idea why it was up in this first floor room when all the other servers were on the ground floor, but… whatever.

Flicking the monitor on, we quickly saw everyone’s favourite screen. Yep, it’s blue, and it signifies death. The BSOD.

We panicked a little, probably downed another Red Bull each, then got to work trying to bring this thing back online.

Remember: we were literally flying by the seat of our pants here, and had been for four months at this point. We had no idea what we were doing.

First things first – switch it off and back on again. We held the power button down, heard a massive CLUNK, the fans spun down and the screen went black. Take a breath. Switch it on again, wait for the BIOS, wait for Windows to start booting, wait some more… BSOD.

Crap.

We try again, switch it off and back on. This time, we hear a horrible grinding noise as the machine spins up. We get to Windows trying to boot and everything freezes – not even a BSOD. Off it goes once more, switch it on, grinding noise, BIOS doesn’t even finish loading.

Double crap.

Backups! Backups? There are backups, right? One of our jobs is to take the tapes out of the drive and swap them with the next numbered batch in the fire safe, surely we could restore this? But… Network Guy did the backups. Network Guy didn’t elect to write anything down about the backups! We don’t know anything about them! An oversight of the highest order!

We know the boss has the phone number of Network Guy, but we need to try fixing this ourselves first. We don’t want him shouting down the phone at us like the one and only time we called him for help before this…

More Red Bull, more diagnosing. On the odd occasion we can get Windows to try booting, and sometimes we can get to the login screen using Safe Mode, but no matter how quick we are, we can never get logged in, and even this doesn’t last forever. Eventually the server just stops trying to load Windows and we’re presented with some error about not finding an HDD. The grinding at this point is still going on and we’re faced with our fear that the grinding isn’t a fan, but the single HDD that all of our email is stored on.

There’s nothing else for it. We’ve gotta call up Network Guy and ask his advice. Neither of us want to do this though – he wasn’t helpful to us when he worked here, and especially not when he was leaving.

Is there nothing we can do?

I remember the moment – we were stood either side of this huge hulking great server with no more options (that we were aware of, at any rate). Our eyes meet, and without saying a word we both think the same thing at the same time.

PFY#2: “Shall we?”

Me: “I dunno… I mean it’s not working so maybe?”

PFY#2: “I think we should.”

Me: “Okay. Let’s do it.”

PFY#2: “Go on then, you can try first.”

Me: “No way, you do it.”

We look down at this ancient black server, lined with solid steel frame, sides and front, touched with the occasional bit of cheap plastic and the odd faded sticker.

I sigh.

PFY#2 raises his foot, and kicks the bastard right in the side, smack bang in the middle.

The grinding noise audibly changes pitch. It’s still there, still buzzing away in that audio range you think you can put up with but which slowly sends you insane without you realising it. PFY#2 reaches down and holds the power button in to switch it off. He switches it back on.

BIOS loads up, boots fully, screen goes black.

Windows loads. And loads. We stare. And it continues to load. The login window appears after what must have been at least 25 minutes. We’re in shock. No BSOD. No lock up. Buzzing? Yeah that’s still there, but the server has booted. What the f-?

We rush to a nearby office and get the first user there to open up Outlook. It connects. Emails from the upstream start flooding in to their mailbox.

We check our own, same thing – it’s working.

We’re buzzing. Our blood is filled with adrenaline, caffeine, sugar, and whatever the hell else they put in Red Bull, but it’s also filled with joy. We fixed a server by kicking it.

After spreading the word that email is back, we get right back to our helpdesk calls, of which dozens have appeared since our Exchange issue surfaced.

Not long after this we do eventually employ a sysadmin, who I work with to this day, but this Exchange server didn’t get replaced immediately. I can’t remember exactly when it was retired, but it whirred on for a good year or more after this. We tried very hard not to touch it – it wasn’t perfect, and we definitely had at least one more very confusing issue with it, which I’m sure I’ll write about at some point, but the beast chugged on and kept our email flowing until it was eventually replaced by a younger, sexier model.

Some say that if you can find your way into Network Guy’s old (and long repurposed) office, and if you can then manage to make your way into the back room, you can, on quiet days when email traffic is up, still hear the buzzing of that once-failed-but-then-recovered hard drive.

This is how I came to learn about and respect percussive maintenance.

Ditching GLPI

The recent bad experience with GLPI we had at work was the final nail in the coffin and, after patching the issue, I quickly began looking for alternative ticket management systems. We have wanted a new helpdesk for a long time – the support we provide has evolved over the years since GLPI was first introduced, and with the Covid-19 pandemic this support has seen yet another shift in the way in which we work. Not only are we doing things differently now, but some unrecognised or unrealised issues have surfaced which we all wish to resolve or automate away.

We struggled through the worst of the lockdown but quickly identified some limitations with our existing way of working, namely that whilst our helpdesk was fine for tracking tickets, it should do more for us. We spent a lot of time on the management of tickets and attempting to contact people just to get them to perform simple tasks. Why do we have to fight our helpdesk to achieve a goal, and why can’t we have a system that would let us run these simple tasks (read: scripts) ourselves in the background, invisibly to the end user?

I did some reading and some thinking post-GLPI-exploit and realised that we are essentially an MSP. We provide support for departments within a school, each of which has different objectives, priorities, demands and tools. Plus, we support several primary schools on top of that, and they are their own beasts entirely with unique networks, hardware and software, let alone processes and requirements.

As we emerge from total lockdown to a lesser version, we are also going to need to do our normal jobs but much more efficiently. With a “remote first” approach to minimise risk to our end users (both staff and students, but also guardians and members of the public) I quickly decided to look into RMM tools instead of basic ticketing systems.

Given that we also had issues in other areas (namely monitoring and alerting, in that what we have is barebones and actually broke a month ago) I was looking out for a solution that would kill as many birds with as few stones as possible.

The requirements were that it had:

  • A ticketing system
  • Monitoring and alerting
  • Patching, installing and scripting capabilities
  • Easy remote support
  • A good price (hey, we’re a school and don’t pass the cost on to the end user, we don’t have a lot of money!)

I found a list of RMM tools and their features on the /r/MSP subreddit and went through each one, checking videos, documentation and feature lists, working out which would obviously not work, which might work, and which would absolutely work. I narrowed down the options to four potentials. In the end, only two of them had a “per technician” pricing model – the “per monitored device/agent” pricing model would end up costing us tens of thousands – so it came down to a war between the two: Atera and SyncroMSP.

To be honest they were a close match. I preferred the look and feel of Atera, although SyncroMSP was more feature-rich. We didn’t really need all the features SyncroMSP boasted though, and Atera was cheaper (we’re on the cheapest plan) so we ended up going with that.

Although we haven’t rolled out the agent to every device yet (we’re still mostly closed and, until we have the agent installed, have no way of deploying software out to machines not on our network) we have started using it heavily. So far, zero problems and we are all liking it. It has already enabled us to preempt some problems that would become tickets, solving them before they ever affect an end user or get reported. I’m looking forward to diving into it more; I am especially excited about the recently announced Chocolatey support, which seems to work wonderfully.

It’s early days yet, hopefully we can become much more efficient and provide a better service. Only time will tell!

Haveibeenpwned.com pwned our helpdesk! GLPI 9.4.5 SQL Injection

TL;DR:

I should say before we get started that the fault for this lies entirely with GLPI; I place no blame at the feet of haveibeenpwned.com or Troy Hunt for this issue. It’s all good fun! Concerning? Oh, for sure. You can’t help but laugh though. Obligatory XKCD.

On GLPI 9.4.5, a call created (via the standard interface, email, etc.) that contains the basic SQL injection string ';-- " will be logged normally with no abnormal behaviour; however, if a technician assigns themselves to that call via the quick “Assign to me” button, the injected SQL query is executed. In the case of the example string given above, all existing calls, open or closed, will be updated to have their descriptions deleted and replaced with any text that appears before the aforementioned malicious string. You can of course modify this to perform other SQL queries.

This is fixed in 9.4.6; however, at the time of writing the GLPI download page still links to 9.4.5 as the latest update available – you need to go to the GitHub releases page to get 9.4.6. [2020-06-03 update: 9.4.6 is now available directly from the GLPI download page.]

9.4.6 was released before I found this exploit, but the GLPI website still showed 9.4.5 as the latest version. As far as we were concerned, we were on the latest version. Credit goes to whoever submitted it first; at the time I had no knowledge of this already being known and resolved. Here’s a video showcasing the issue:

Low quality to minimise filesize

The Long Version – how I found it, or how haveibeenpwned pwned our helpdesk

GLPI

We use GLPI as our technical support ticketing system at work. There are better solutions out there and we’re investigating others; however, GLPI has served us well since 2009ish. It is a web-based, self-hosted PHP/MySQL application.

haveibeenpwned.com

A website that catalogues and monitors data dumps for email addresses. It collects leaked or stolen databases, analyses them, pulls out any email addresses and can be searched by anyone for free to see if your email address has been included in a breach. You can also subscribe to receive alerts if your email address, or an email address on a domain that you own, is included in any future breaches.

haveibeenpwned.com pwned GLPI

Around late April we upgraded from an ancient version of GLPI to the latest – 9.4.5. All was well until we received an email from haveibeenpwned to our helpdesk support address, which automatically got logged as a support ticket. This email alerted us to some compromised accounts on our domain which were included in the latest Wishbone data dump.

Spoiler: that header isn’t an image, it’s text!

I rushed to generate the HIBP report – by clicking a link in this email-turned-support-ticket – to see whose data on our domain had been compromised. We got the report in a second email, which created a second ticket. I grabbed the data, deleted the second ticket (as we still had the original open) and perused it. After doing the necessary work alerting any users to the breach of their data I went back to the original HIBP ticket and, realising I hadn’t assigned it to myself, did so and promptly solved it. All is well, time to move on?

Not quite. I and the other techs quickly noticed that every single ticket description had been deleted and replaced with partial header data from the HIBP email.

This immediately stank of some kind of SQL Injection flaw and my mind raced as to what the cause was. I had a suspicion I knew… Unfortunately we were in the middle of business hours and, due to Covid-19, fully remote – we needed a working helpdesk, and I don’t have the privilege of working on potential security issues in the day job. We restored from a backup taken on the previous evening (not too much data was lost, thankfully) and carried on with our day supporting our users.

Understanding the flaw

As soon as work ended, I grabbed an Ubuntu .iso and built myself a web server VM. I had a feeling I knew what the cause of this SQLi was (check the header of the email shown above – you don’t need long to figure out where the ‘malicious’ code is!) but wasn’t sure how it got executed – the email was parsed correctly and tickets weren’t affected when the email came in; it wasn’t until around the time I deleted the second ticket and closed the first call that problems arose.

After building the VM with PHP and MySQL, I hopped onto the GLPI website and grabbed the latest version from their site, which was shown as 9.4.5.

The “Download” button took you to the 9.4.5 archive

After setting it up and adding some test calls, I forwarded our original HIBP email to a temporary account I linked this test GLPI install to. Once the email was pulled in I went through the same steps as I had done earlier in the day:

  1. Generate the report (which I didn’t do via the link in the email this time; I just forwarded the original email to my test email account, creating a second ticket in my install of GLPI)
  2. Delete the second ticket
  3. Assign the first ticket to myself
  4. Close it

I checked the test tickets I had loaded in there beforehand and, lo and behold, they had all been wiped and replaced with the same content from the HIBP email!

I restored the VM to an earlier snapshot and went through the process again, pausing to check the other tickets at each step. I quickly discovered that the issue only occurs when you assign yourself to the ticket using the handy “Associate myself” button.

Adding yourself as a watcher also triggers the query

Making it malicious

The email data already wipes the content of all tickets, but as it stands it leaves a lot of junk data behind. I wanted to minimise the data required to exploit the flaw yet retain the same behaviour.

Another restore of GLPI followed, with more tests to determine the minimum amount of data needed to trigger the flaw. I spent some time cutting down the email from HIBP and quickly found that its opening lines were indeed the culprit – I managed to shrink the exploit down to six characters (';-- " – the space and double-quote at the end appear to be required, though this could do with more testing) whilst achieving the same kind of malicious behaviour, in this case deleting the content of the descriptions for every ticket in the database. If you log the malicious call with this string as the title (or leave the title field blank, in which case GLPI automatically copies the description – here, our malicious string – into the title), the titles of all other calls get wiped too. If you do include a non-malicious title in the malicious ticket, however, the original titles on the other calls are not modified.
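
To make the mechanism a bit more concrete, here’s a tiny sketch in Python (with a made-up table and column – this is not GLPI’s actual code) of why that string is dangerous the moment a description is interpolated straight into a query rather than passed as a parameter:

    # Hypothetical illustration only - not GLPI's real query or schema.
    description = "Replacement text';-- \""   # the six-character payload with some text in front

    # Naive string interpolation - the classic SQL injection mistake:
    query = f"UPDATE tickets SET content = '{description}' WHERE id = 42"
    print(query)
    # UPDATE tickets SET content = 'Replacement text';-- "' WHERE id = 42
    #
    # The stray quote closes the string literal early, and the -- turns the rest
    # of the line into a comment, so the WHERE clause is lost and every row gets
    # its content replaced with whatever text preceded the payload - exactly the
    # behaviour described above.

    # A parameterised query avoids this entirely, e.g. with Python's sqlite3 module:
    #   cur.execute("UPDATE tickets SET content = ? WHERE id = ?", (description, 42))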

Success! This is a pretty severe issue, and although it does require some user interaction you can easily hide this exploit in an innocent-looking support call. GLPI supports HTML emails, which get rendered (almost) normally within the interface. Simply hiding the text in an attribute or the <head> or something will keep it invisible to the tech. You’ve just gotta wait for them to assign it to themselves.

In the end, this isn’t a zero-click flaw but it is easily hidden. If you hide the exploit and it doesn’t work out the first time (a tech doesn’t assign it to themselves) you can easily try again with another ticket until it works. Odds are the techs aren’t going to read through the raw HTML of each ticket looking for problems.
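
As a rough illustration of that hiding trick (I haven’t tested exactly which parts of an HTML email GLPI preserves, so treat the markup as an assumption), the payload can live somewhere the technician never looks:

    # Sketch of an innocent-looking HTML email body with the payload tucked away.
    payload = "';-- \""
    body = f"""<html>
      <head><title>{payload}</title></head>
      <body>
        <p>Hi, I can't print to the staffroom printer - could someone take a look?</p>
        <div style="display:none">{payload}</div>
      </body>
    </html>"""
    print(body)
    # Rendered, the reader only sees the polite request; the payload still ends
    # up in the stored ticket description, waiting for someone to hit "Associate myself".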

Reporting it – late to the party

I hopped over to GLPI’s GitHub page to check for an existing issue (and log my own if one didn’t exist) when what do I see but 9.4.6! I check the changelog and find this:

Looks like this may have already been fixed!

Well, darn. I downloaded and installed the update and can confirm the issue no longer exists. Congrats to whoever spotted it first! Edit 2020-06-04: Twitter user @thetaphi took a look at this and found that it was spotted and/or accidentally fixed by a developer whilst fixing a separate issue.

As it is already solved I don’t really want to dig through the code and find the offending line or develop the exploit further. Edit 2020-06-04: some people have taken a look after @troyhunt tweeted about this issue. It is interesting (concerning?) that something this simple got through to release, especially when you consider the way to initiate the exploit is by assigning yourself the call. Why does the call description get parsed at all here?

Either way – if you’re running GLPI, make sure you’re on the latest release. Or look for alternative software.

G Suite Migration for Outlook Error 0x8004106b

TL;DR: You’re logging in using a non-primary alias for the account. Use the primary alias and your email will migrate smoothly.

We’re migrating around 100 email accounts that were on Exchange over to Google. This has involved changing some people’s aliases so that they match up with everyone else’s (<initials>.<id>@<domain>); however, we’ve added their old address as an alias so they can still receive emails sent to it (e.g. <fullname>@<domain>).

Due to some political reasons we’re unable to touch the Exchange server, so we’re using the G Suite Migration for Microsoft Outlook tool. It had been fine up until today, when we started seeing the following:

Notice that there are the same number of Processed and Failure items

What was odd was that this was only happening for one of the techs. He must have been doing something wrong! We monitored his steps and it all looked fine though. Taking a look into the trace file we could see the error appearing as Failed with 0x8004106b. Googling this didn’t turn up anything usable, but as we were looking into it another tech appeared and just happened to glance at the screen as the original tech was attempting to migrate a user again. Our spreadsheet contained some info, including each account’s primary email address and alias, and the second tech noticed that the colleague with the sync issues was logging in with the non-primary alias. After trying the primary alias, the emails migrated successfully.

Interestingly, the Contacts and Calendar data synced correctly.

Working with Covid-19

I don’t appear to have had Covid-19 – this post is more about working with it in the world, not working whilst ill!

Bit of a rambling post, I’m adding to it between jobs so forgive any abrupt topic shifts.

Schools are closed, but students still need to learn. As tech support for a school, enabling this has been… Well, easy to be honest. We’re a Google school so the shift for our students (and those teachers that bothered to learn it…) has been relatively simple. But there’s more to a school than enabling teachers to provide material and help to students. The challenges arose (and continue to become apparent) with all the other departments that enable teachers to do their jobs; wellbeing, admin, safeguarding, leadership, the list goes on.

For us in tech support, the whole team has been busy providing support for the schools in the local area. We’re avoiding going on-site wherever possible, though this is not always avoidable. Remote support is not something we had prepared for, either. We’ve been using Chrome Remote Desktop to get on to the devices of users who are also at home, but as nobody has local admin and Chrome Remote Desktop won’t pass through UAC prompts we’ve been unable to fix a small subset of issues. Something to correct moving forward for sure – a remote assistance product (something along the lines of GoToAssist or LogMeIn) would be swell, but it’s a bit late to deploy one now.

We had to roll out laptops to our non-teaching staff, but unfortunately with limited devices available many staff simply can’t do much work beyond what can be done on a phone. Many people who said they have devices at home didn’t account for the fact that their two-point-four kids and partner would also be needing them. We didn’t take this into account either and are paying for it now.

Some students also don’t have devices, however the DfE is offering devices to those most vulnerable. We’ll see how that goes, I can’t imagine there’s a stock of millions of laptops stored in a warehouse somewhere, but something is better than nothing!

I’ve wanted to work from home for a long time and have leaned towards tools that can be used from anywhere. Rolling these out to a wider user base has certainly highlighted issues in our processes and changes we can make in the future; however, for now we’re… doing pretty well, actually.

The impact on education won’t be felt for a long time, but there are already some interesting developments.

Some staff are arguing for the use of what are effectively spying tools to check up on both staff and students. Thankfully, this is being outright rejected by senior staff in our school. Others are demanding a lot of their teams, forgetting that some of them have kids and/or elderly and vulnerable relatives and neighbours to look after. This is having a direct impact on the teaching and learning – some staff are just exhausted. I’m getting emails from staff late at night and very early in the morning. Students are also submitting work at unhealthy hours.

I love working from home. Much more productive: fewer interruptions lead to better-quality work, faster. It’s great. But for some it’s clear their home life does not facilitate it well.

I’m hoping once the pandemic is over, we keep what we’ve learned and don’t just rush right back in to how life was before. I’m not sure I can survive a full-time office job anymore….

Dead Wood

Next to our house is a large oak (I think?) tree which has, over the last three years, slowly died.

It’s not on our property, but had grown over the house. We had some high winds earlier this year which resulted in a decent percentage of the small branches falling on to our roof and garden. No damage, but it was clear the tree was becoming more brittle and heading towards dangerous territory.

The owners recently organised to have it cut down. This was a sad moment; however, I asked for a slice of wood from the tree. Our house sign rotted away a while ago and I had my eye on the wood from this tree. I’m hoping to use this slice to make a new sign at some point as a nice little tribute to the beast that has stood guard over our property for decades.

All I need to do now is figure out how to properly do this, or find someone to do it for me!

The neighbour will be using the wood in their log burner but has offered some to us, too. Doesn’t look like we’re going to be cold next winter, or perhaps the winter after depending on how long it needs to dry for.

One silver lining: it’s now feasible for us to get solar panels!

A leak here and a leak there

We’re slowly eradicating our mould problems and it seems the universe has taken offense.

First off, our shower is leaking. The leaky pipes are within the wall and just happen to be directly above a power outlet located on the other side. The paint has bubbled up on this wall as the moisture slowly found its way out and down toward the source of electricity. We’re no longer able to shower or we risk the wall getting more damaged and the potential for a fire to start. We noticed the paint bubbling, popped the side of the bath off (our shower is above the bath tub) and realised the concrete floor was soaked, the underside of the bath and the wood that helps support it were literally caked in mould. The water has also soaked into the concrete and found its way under the wall into the kitchen. This appears to have been happening for a while now as the kitchen units along that wall are also caked in mould across the back. We will likely need to replace the units in the kitchen, which isn’t too much of an issue as we needed a new kitchen anyway, as well as dismantle half the bathroom to fix the source of the problem, which is also not too much of an issue as we also needed a new bathroom. Maybe £10k to do them both to a decent standard.

To add insult to injury, the cess pit decided to spring a leak. The container itself is, thankfully, fine. There’s a broken inlet pipe somewhere, letting rainwater in. We don’t have an issue with sewage leaking out, but every time it rains the cess pit fills up pretty quickly. We could likely dig up the pipes and try to fix it, but we need to replace the cess pit with a water treatment plant anyway. This is not a cheap thing to do – we’re looking at a starting figure of about £8k, though this will likely increase due to some issues with the water run-off and our lack of land.

They say it comes in threes. They’re wrong.

My car engine light came on. Hopefully this is just the sensor – clearing it hasn’t worked so the next step is to replace the sensor itself. I’m not sure what the sensor is called but it checks the engine vibration and if it detects something amiss, alerts the driver. If the sensor is not faulty it’s potentially a new engine, which with my old-ass car likely means a new car.

A day after the engine warning light came on, my windscreen cracked. Luckily my insurance covers the cost of repair minus £125, which I have to pay. Not too bad.

Then, a few days after that, my partner and the baby were in a car crash. Both are fine! However, the car is likely a write-off. The other party admitted fault right away and their insurance will pay out the value of the car, but until that happens we’re without a second car, which makes it difficult for me to fix mine. Not impossible, just difficult.

I guess it comes in sixes? Maybe two sets of three.

I’m excited to get these problems sorted. Though they’ve all come at once, besides the written-off car they’re all things we planned on doing anyway. They’re the last renovation jobs we need to do before we can start on the detailing – primarily, sorting out the network (allowing me to finally post something technical on here again).

The fact that these things are now active problems means we’re more focused on fixing them. We could have paid to update our bathroom at any point but have been putting it off. Now, though, we’re looking at getting it done sooner. The downside is that we don’t have the money to do it all in one go so we do need to pick our battles in the right order (the cess pit first!)

Mould

We’ve had mould problems ever since moving into this property. We found it under coving, under the old carpet, behind wallpaper and in the kitchen cupboards.

We’ve slowly been eradicating it – we had most of the issues resolved (except for in the kitchen, but more on that in the next post…) by the time we had finished decorating. However, it kept coming back in our bedroom and in the bathroom along the edges of the ceiling/walls. We had put this down to humidity. At night we breathe whilst sleeping, and in the bathroom… well, that one is obvious. I was getting frustrated with the constant mould recently and called up a specialist company.

They refused to take my money.

Instead, they gave me some (free) tips which, in hindsight, are quite obvious if I had thought about it. I appreciate that they did this instead of charging for someone to come out and do the work for me!

First off, they told me that high humidity alone does not result in mould. Instead, where condensation forms is where mould forms. Condensation appears where a surface is cold enough for the moisture in the air to turn into water when it comes into contact with that surface. If you can remove the moisture from the air (essentially impossible in a home) OR warm up those surfaces, condensation won’t form as easily, and therefore mould will have a harder time growing.
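
For the curious, “cold enough” has a name: the dew point. Here’s a rough sketch using the Magnus approximation (my own addition, not something the company gave me; the coefficients are just one common parameterisation):

    import math

    def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
        """Approximate dew point in Celsius via the Magnus formula."""
        a, b = 17.62, 243.12
        gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return (b * gamma) / (a - gamma)

    # A 20 degree bedroom at 60% relative humidity:
    print(round(dew_point_c(20, 60), 1))  # ~12.0 - any surface colder than this will sweat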

Heating a room does not achieve this on its own. As the mould was forming along the edges where the ceiling and walls met (on externally facing walls, too!), the specialist company surmised that the insulation in the loft was not covering those areas. The heat was escaping through the plasterboard ceiling and not being held by insulation, cooling the area down. Water in the warm air would travel up, hit the ceiling barrier and condense there, dripping down the walls if enough formed.

Up to the loft I went to discover that this was indeed the case. In some areas (not coincidentally the same areas where we had the most issues with mould) up to two feet of ceiling was exposed. Luckily, there was a spare roll of insulation up there already. I used this to fill in the edges about a week ago and since then we’ve not had any more mould appear in those trouble spots. Early days of course, and we’re still getting mould around the windows, but I have some thermal insulating paint to test there.

Ideally we’d replace our windows but can’t really afford to do so at the moment – they’re old and the seal has broken on some of them, which we will get repaired.