Something about these toys appeals to me. The Kid got a few so I snapped some pictures, expecting a blurry mess. And that’s what I got, but in a good way! I love how the projected red light looks in the longer exposure photos.
I’ve said previously that things come in threes. This particular month in 2020 gave us three quite severe issues in the server room at work.
One – as the tide comes rolling in
We had some heavy rain in early October over a weekend. I spent the majority of that time in front of the fireplace until work on Monday morning.
Before I even got through the office I was alerted to a flood in our server room. I rushed down there to find an absolute mess. We are unfortunately strapped for space and also use the server room to store our spare equipment and some peripherals and consumables. We lost quite a lot of stuff to water damage; however, most of it was old so didn’t hold much value. There were some switches that got soaked, and also my own personal DrayTek I was going to use as a secondary VPN during lockdown in case of emergencies (ours wasn’t great at the time).
You can see in the following image a “tide line” around the walls of the room – we had about an inch of water in there that slowly soaked away. That’s a phenomenal amount of water given the size of the room and the fact that there are sizeable gaps under the two doors that allow water to escape to the neighbouring rooms.
Somehow our servers didn’t get too wet. Humidity was at 90%+ for quite a while though, and we have experienced several hard drive failures since which were likely related.
After three days of drying, ripping up carpet and sorting through our stuff we could finally get back to the normal slog. However, we learned that the fitted air-con unit in the room doesn’t have a cut-off for drying the air. It’s either on or off, and only when manually set – we can’t set a target humidity. In a server room you typically want to hover at about 50%: too high and the moisture in the air corrodes internals, too low and static electricity can discharge more easily in the dry air. Before ripping out the carpet and other detritus we’d hover at about 50–70% without any flood water in the room, but since then we’ve gone as low as 30%.
We need proper climate control and monitoring. And for the leak to be fixed… It has been an issue for a while and comes down to the design of the roof – water does drain away, but if there’s too much too quickly, or the drainage is blocked as was the case this time, the water overflows and somehow finds its way to the ground via the second path of least resistance… which just so happens to pass right through the server room, about two feet in front of the cabinets.
Two – splashback
About two weeks after the first incident we had more heavy rain. More water in the server room. Luckily not as much water and because the room was empty and carpetless it dried out within a day. Unfortunately it came in through a slightly different part of the ceiling about half a foot closer to the server rack.
I’d love to get up on to the roof to try and figure out where this water is getting in. I’m told the fix is “new roof”, but I wonder whether it’s feasible to give the water an escape route that doesn’t pass through one of the most expensive rooms on the campus. Not a fix, certainly, but a workaround until we can move the servers (which should happen before the end of this year).
Three – this is nuts
Finally (at least I hope) is this, which happened about a week after flood #2:
We saw him on the camera we set up after the first flood. We raced down there and eventually managed to coax it out from its temporary home directly under the server rack before it could eat through anything. More squirrels have since found their way into the building, but none have yet braved the server room. For fear of drowning, I suspect.
Although management doesn’t particularly care about incident management and response reports, I find the whole process fascinating, so I wrote up an incident report for the squirrel invasion. It was quite entertaining to type out, to say the least. So many puns.
TL;DR: Try disabling “SNMP Status” for the port on the Windows print server in Print Management. No guarantee this will work everywhere but it appears to have worked for me. Give it a go, let me know!
Printer says… what, exactly?
Feel free to skip the rest of this post. It’s just the sequence of events that took me to the supposed conclusion/solution.
I arrived at a customer site to find a stack of issues, but one in particular caused some confusion to my smooth-brained self. An HP OfficeJet Pro 8710 printer, which had worked fine up until the previous Wednesday, had a black screen with an error on it stating:
There is a problem with the printer. Turn the printer off, then on.
I of course tried a reboot. The printer switched on, booted up and appeared to be fine. But after 5 to 10 seconds it would make a clicking noise, the screen would flash blue, then go black with the same error. I hate printers.
It looked like there was some white text on the blue screen so I recorded it with the phone in slow motion and picked up an error code: b8bb2b3e
Googling this took me to one relevant result, a post on the HP forum with no replies. Great. I hate printers (I may say this several times) and I hate an error code with no official documentation anywhere on the internet. So, what is one to do in such a situation?
Well, poke about, push buttons, and hope you figure out a pattern, obviously!
My immediate thought was something PrintNightmare related; however, as the post on the HP forum was made in April I moved that away from the forefront of my mind for now and started looking elsewhere. As luck would have it, I noticed that the error occurred just as the Wireless light finished flashing (because yes, this printer is connected to a domain network wirelessly. Literally the definition of evil.)
I plugged in a network cable that I pinched from a nearby desktop and the printer stopped crashing. Great, progress! This building had a new wireless network installed a couple of months ago so I began poking at that, but nothing had changed in the last couple of weeks. Still assuming this was wireless related, I moved across to the print server to switch the printer’s port over to the new IP address the device had picked up on the wired network and… pop! Blue screen with b8bb2b3e again. My mind flipped back to PrintNightmare patch issues or something up with the server.

I didn’t really have a direction of travel from this point, as Event Viewer had nothing of interest in it as far as I could see, but I decided to send the printer a test print just before it connected. I’ve seen printer issues before where the device will process an existing queue of work and then crash out or error, indicating that the core technology (network, server, printer) is functioning but some feature is causing a problem – and that’s exactly what I saw here. The device connected to the wired network, printed the test page, then immediately crashed out again. Historically I’ve fixed this (rare) occurrence by removing and re-adding the printer on the print server, but I decided to poke around some more and try to nail down which setting was causing this (if, indeed, any at all).
Luckily it didn’t take too many attempts to switch off SNMP Status Enabled, reboot the printer, and not see any crashes. The printer is still working a few hours later (back on the wireless… grr. We’ll run some cable into this particular office soon) and I have since checked the Windows Update installation dates – the printer stopped working right after updates were installed on the server. The server was up to date prior to this month’s (July 2021) patch release, so it does look like something in the most recent set of updates caused this. And yes, I have ensured the server is not vulnerable to PrintNightmare by checking for those registry keys, so it’s not that.
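If you’d rather audit this from a console than click through Print Management, here’s a rough, read-only sketch that lists each TCP/IP printer port on the server and whether SNMP status checking is enabled. It uses the third-party Python wmi package and assumes the standard Win32_TCPIPPrinterPort WMI class exposes what you need – treat it as a starting point, not a guaranteed fix.

```python
# Rough sketch: list TCP/IP printer ports and their SNMP status setting.
# Requires the third-party 'wmi' package (pip install wmi) and needs to be
# run on the print server itself with sufficient rights.
import wmi

c = wmi.WMI()

for port in c.Win32_TCPIPPrinterPort():
    # SNMPEnabled should correspond to the "SNMP Status Enabled" tick box
    # on the port's configuration page in Print Management.
    print(port.Name, port.HostAddress, "SNMPEnabled =", port.SNMPEnabled)
```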
If I uncover the exact cause I will update this post.
I just read the truesec analysis of the Kaseya VSA 0-day that hit the news earlier in the month. I love reading articles like this, but this one in particular I had to highlight.
The authentication… “bypass”… utilised as a first step: D’oh! How did something like that even get into production? The linked article has more details, but essentially, if all authentication checks fail (when querying this particular file, not generally), instead of saying “Nope, you are not authenticated!” it says “Oh, you didn’t supply a password we can verify? OK, let’s give you authenticated status anyway👍”.
Logic failures like this generally don’t happen in an ideal world because they’re blindingly obvious, so allow me to speculate for the rest of this paragraph. I can only assume a developer temporarily set this up to diagnose a bug or test a feature and simply forgot to flip it back to “fail by default”. This is why peer review is important, though the rush to get things out to production works against it. It’s too easy to miss this kind of thing in the modern world. It shouldn’t be, but it is. There should be no blame on any individual here – I suspect a process or procedure needs to be looked at, or a team needs to be better resourced.
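To make the “fail by default” point concrete, here’s a deliberately simplified sketch of the two shapes this logic can take. It is not Kaseya’s actual code (and the toy hashing is purely for illustration) – it only shows the difference between fail-open and fail-closed authentication.

```python
# Illustrative only – not Kaseya's code, just the shape of the bug.
import hashlib
import hmac

def hash_pw(password: str) -> str:
    # Toy hashing for the example; a real system would use a proper KDF.
    return hashlib.sha256(password.encode()).hexdigest()

KNOWN_HASHES = {hash_pw("correct horse battery staple")}

def is_authenticated_fail_open(supplied: str) -> bool:
    for known in KNOWN_HASHES:
        if hmac.compare_digest(hash_pw(supplied), known):
            return True
    return True   # <-- the bug: "couldn't verify you" is treated as "trusted"

def is_authenticated_fail_closed(supplied: str) -> bool:
    for known in KNOWN_HASHES:
        if hmac.compare_digest(hash_pw(supplied), known):
            return True
    return False  # default deny: nothing matched, so you're not coming in

print(is_authenticated_fail_open("wrong password"))    # True  – oops
print(is_authenticated_fail_closed("wrong password"))  # False
```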
Just noticed this, and a quick Google shows it has been picked up already so this isn’t new, but the wp-statistics plugin (v13.0.8 for sure, but likely other versions too) seems to be logging information into a “wp-statistics.log” file in the root directory of the site it is installed on. On a site with the plugin enabled you can therefore access that file – and in some cases read the IP addresses of the site’s visitors – simply by requesting /wp-statistics.log at the site root.
You can block external access to it in the .htaccess file via:
<Files "wp-statistics.log"> Require all denied </Files>
A quick Google dork will show up a fair number of affected sites, including some… potentially embarrassing ones.
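If you want to check whether a site you look after is exposing the file, a single request is enough. A minimal sketch follows, assuming the log sits at the web root as described above; the path and the status-code handling are my assumptions, not something from the plugin’s documentation.

```python
# Quick check for an exposed wp-statistics.log (path assumed from the
# behaviour described above). Requires the 'requests' package.
import sys
import requests

site = sys.argv[1].rstrip("/")   # e.g. https://example.com
resp = requests.get(f"{site}/wp-statistics.log", timeout=10)

if resp.status_code == 200 and resp.text.strip():
    print(f"Exposed: {site}/wp-statistics.log ({len(resp.text)} bytes)")
else:
    print(f"Doesn't look readable (HTTP {resp.status_code})")
```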
I’ll move on from GLPI eventually and start working on some interesting technical stuff, but today is not that day.
We ditched GLPI after we got hit by an accidental SQLi from HaveIBeenPwned – in short, version 9.4.5 is vulnerable to an SQL Injection flaw. You can exploit it by sending it an email (say, to email@example.com) and once the email gets automatically turned into a ticket and assigned, the SQL will be executed. This affected us because the obscenely simple execution string was included in the header of the haveibeenpwned email notification.
I’ve just been poking around GLPI again (we have kept it around for non-end-user stuff, isolated and kept out of reach) and noticed that there was a “telemetry” scheduled task in the list of Automatic Actions which got me curious.
GLPI have decided to publish some of their telemetry data, which is nice of them. But it shows that there’s still a significant number of users running 9.4.5 and older.
Of the installs that report telemetry in the last year (and only those installs on 9.2 and above do this), 14,313 are on a version at or below 9.4.5, whilst 26,985 are on 9.4.6 and above. Over 34% of GLPI installs are potentially* vulnerable to this painfully simple exploit but over 12% of installs absolutely are still vulnerable as they’re on 9.4.5 exactly.
*Potentially, but I suspect only 9.4.5 is vulnerable – they fixed it by accident in 9.4.6 here which looks like a response to an issue that appeared in 9.4.5.
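For what it’s worth, the 34% figure is just the first bucket over the total of the two reported by the telemetry above:

```python
at_or_below_945 = 14_313   # installs reporting 9.4.5 or older
at_or_above_946 = 26_985   # installs reporting 9.4.6 or newer

share = at_or_below_945 / (at_or_below_945 + at_or_above_946)
print(f"{share:.1%}")      # 34.7%
```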
We learned a lesson with the GLPI issue – keep your software up to date. Though to be fair to us, it was up to date according to their website at the time. There was a newer version available (9.4.6) but that wasn’t advertised anywhere.
I hope these out of date installs get updated. We know there’s a lot of malicious activity out there, but at the same time… accidents can happen.
I love how ice looks when it forms on something smooth and flat like glass. If I didn’t have to scrape it all off in a hurry to get to work I’d stare at it for at least two minutes. What? It’s freezing out there…
If you’ve worked in IT for more than a year you probably have some crazy tales to tell. I certainly do – with nearly 15 years in the field I have seen insanity and hilarity – more than I could ever remember. So I thought to myself, why not write it all down somewhere?
This first tale comes from back in my early days.
As a PFY, I was trusted with little more than basic desktop repairs and printer toner replacements – a fairly common slice of life for many IT bods. I was relatively fresh-faced, a few scars but nothing major. Eager to learn and eager to please, I was often the first to raise my hand and take any challenging job, despite the vast gaps in my knowledge. Our team was small – three techs (two general helpdesk end user support PFYs, one mobile device repairs) with one very isolationist “Network Guy” (who would now be called a sysadmin) and one “Database Guy” (although their only qualification was knowing what “SQL” stood for).
It came as quite a surprise when, early on a sunny Monday, we were told that the Network Guy was leaving. And he wasn’t being replaced.
It quickly became apparent that the entirety of his duties was to fall on myself and PFY#2, who had been working at this place for a bit less time than me. Network Guy’s last day rolled around without much issue, nor much communication from him – in fact at one point I asked for his help with a network issue and he told me “I don’t care, I’m leaving!” So it came as a bit of a surprise (and equal amounts of relief) when he called me and PFY#2 up to his office to give us his handover document.
His handover document consisted of a single sheet of A4 with handwritten notes about a few things barely qualifying as useful. A few IPs and other miscellaneous details about servers and switches, the odd issue he knew about but hadn’t fixed, and maybe a password or two.
We received the document at the door to his office (we weren’t tolerable enough to go inside on this occasion?) and were quickly shooed off back to our regular helpdesk support duties.
That evening, he left and we never saw him again.
The following weeks and months were absolutely insane. I can’t recall much about what happened during this time. Myself and PFY#2 managed to keep on top of most of the helpdesk support calls and make a start on untangling the network. We quickly found that most switches had essentially been unboxed and plugged in without any config changes, servers were unpatched and had uptime into the hundreds of days (you could tell when the last power cut had been by looking at the uptime) and we were getting very close to the limit on resources – maxed CPU and/or RAM, HDDs filling up. Group Policy was a mess, roaming profiles were reaching into the tens of gigabytes with nothing preventing their growth… it was a mess.
To top it off, out the back of Network Guy’s office was another small closet containing almost all the servers we had. Neither of us had ever been back there, and when we did go in we found dust, cobwebs and equipment that seemed to be switched on, but we had absolutely no idea what it did. Helpfully, the single sheet of A4 told us some server names and serial numbers.
We were fueled by Red Bull and the long (long) days began to blur into one massive learning experience. To this day I have never learned so much so quickly as I did back then as we fought to keep the place running, the users happy(ish) and continue to learn as much as we could.
There was one event, though, that utterly stumped us.
Cheerily and awesomely smashing the helpdesk as we were, we suddenly had calls coming in about email being offline. After a quick check we realised that, yep, email was down. We couldn’t RDP into the Exchange (2003) server either – something was wrong, clearly. Off to the dusty old Network Guy’s office we go.
We walk in, grab the single 4:3 CRT monitor in the room, stretch the cables across the room from one of the power extension cables with a spare socket to one of the nearby tables, and plug the crusty old VGA cable into the back of the absolute beast of an Exchange server. I mean, this thing was huge. An old tower, black, solid steel everything. No idea why it was up in this first-floor room when all the other servers were on the ground floor, but… whatever.
Flicking the monitor on, we quickly saw everyone’s favourite screen. Yep, it’s blue, and it signifies death. The BSOD.
We panicked a little, probably downed another Red Bull each, then got to work trying to bring this thing back online.
Remember: we were literally flying by the seat of our pants here, and had been for four months at this point. We had no idea what we were doing.
First things first – switch it off and back on again. We held the power button down, heard a massive CLUNK, the fans spun down and the screen went black. Take a breath. Switch it on again, wait for the BIOS, wait for Windows to start booting, wait some more… BSOD.
We try again, switch it off and back on. This time, we hear a horrible grinding noise as the machine spins up. We get to Windows trying to boot and everything freezes – not even a BSOD. Off it goes once more, switch it on, grinding noise, BIOS doesn’t even finish loading.
Backups! Backups? There are backups, right? One of our jobs is to take the tapes out of the drive and swap them with the next numbered batch in the fire safe, surely we could restore this? But… Network Guy did the backups. Network Guy didn’t elect to write anything down about the backups! We don’t know anything about them! An oversight of the highest order!
We know the boss has the phone number of Network Guy, but we need to try fixing this ourselves first. We don’t want him shouting down the phone at us like the one and only time we called him for help before this…
More Red Bull, more diagnosing. On the odd occasion we can get Windows to try booting, and sometimes we can get to the login screen using Safe Mode, but no matter how quick we are, we can never get logged in, and even this doesn’t last forever. Eventually the server just stops trying to load Windows and we’re presented with some error about not finding a HDD. The grinding at this point is still going on and we’re faced with our fear that the grinding isn’t a fan, but the single HDD that all of our email is stored on.
There’s nothing else for it. We’ve gotta call up Network Guy and ask his advice. Neither of us want to do this though – he wasn’t helpful to us when he worked here, and especially not when he was leaving.
Is there nothing we can do?
I remember the moment – we were stood either side of this huge hulking great server with no more options (that we were aware of, at any rate). Our eyes meet, and without saying a word we both think the same thing at the same time.
PFY#2: “Shall we?”
Me: “I dunno… I mean it’s not working so maybe?”
PFY#2: “I think we should.”
Me: “Okay. Let’s do it.”
PFY#2: “Go on then, you can try first.”
Me: “No way, you do it.”
We look down at this ancient black server, lined with solid steel frame, sides and front, touched with the occasional bit of cheap plastic and the odd faded sticker.
PFY#2 raises his foot, and kicks the bastard right in the side, smack bang in the middle.
The grinding noise changes in pitch audibly. It’s still there, still buzzing away in that audio range that you think you can put up with but slowly sends you insane without you realising it. PFY#2 reaches down and holds the power button in to switch it off. He switches it back on.
BIOS loads up, boots fully, screen goes black.
Windows loads. And loads. We stare. And it continues to load. The login window appears after what must have been at least 25 minutes. We’re in shock. No BSOD. No lock up. Buzzing? Yeah that’s still there, but the server has booted. What the f-?
We rush to a nearby office and get the first user we find to open up Outlook. It connects. Emails from the upstream start flooding in to their mailbox.
We check our own, same thing – it’s working.
We’re buzzing. Our blood is filled with adrenaline, caffeine, sugar, and whatever the hell else they put in Red Bull, but it’s also filled with joy. We fixed a server by kicking it.
After spreading the word that email is back, we get right back to our helpdesk calls, of which dozens have appeared since our exchange issue surfaced.
Not long after this we did eventually employ a sysadmin, who I work with to this day, but this Exchange server didn’t get replaced immediately. I can’t remember exactly when it was retired, but it whirred on for a good year or more after this. We tried very hard not to touch it – it wasn’t perfect, and we definitely had at least one more very confusing issue with it, which I’m sure I’ll write about at some point, but the beast chugged on and kept our email flowing until it was eventually replaced by a younger, sexier model.
Some say that if you can find your way into Network Guy’s old (and long repurposed) office, and if then you can manage to make your way into the back room, you can, on quiet days when email traffic is up, still hear the buzzing of that once-failed-but-then-recovered hard drive.
This is how I came to learn about and respect percussive maintenance.
The recent bad experience with GLPI we had at work was the final nail in the coffin and, after patching the issue, I quickly began looking for alternative ticket management systems. We have wanted a new helpdesk for a long time – the support we provide has evolved over the years since GLPI was first introduced and with the covid-19 pandemic this support has seen yet another shift in the way in which we work. Not only are we doing things differently now, some unrecognised or unrealised issues have surfaced which we all wish to resolve or automate away.
We struggled through the worst of the lockdown but quickly identified some limitations with our existing way of working, namely that whilst our helpdesk did fine enough when it came to tracking tickets, it should do more for us. We spent a lot of time on the management of tickets and attempting to contact people just to get them to perform simple tasks. Why do we have to fight our helpdesk to achieve a goal, and why can’t we have a system that would let us run these simple tasks (read: scripts) ourselves in the background, invisibly to the end user?
I did some reading and some thinking post-GLPI-exploit and realised that we are essentially an MSP. We provide support for departments within a school, each of which has different objectives, priorities, demands and tools. Plus, we support several primary schools on top of that, and they are their own beasts entirely with unique networks, hardware and software, let alone processes and requirements.
As we emerge from total lockdown to a lesser version, we are also going to need to do our normal jobs but much more efficiently. With a “remote first” approach to minimise risk to our end users (both staff and students, but also guardians and members of the public) I quickly decided to look into RMM tools instead of basic ticketing systems.
Given that we also had issues in other areas (namely monitoring and alerting, in that what we have is barebones and actually broke a month ago) I was looking out for a solution that would kill as many birds with as few stones as possible.
The requirements were that it had:
- A ticketing system
- Monitoring and alerting
- Patching, installing and scripting capabilities
- Easy remote support
- A good price (hey, we’re a school and don’t pass the cost on to the end user, we don’t have a lot of money!)
I found a list of RMM tools and their features on the /r/MSP subreddit and went through each one, checking videos, documentation and feature lists, working out which would obviously not work, which might work, and which would absolutely work. I narrowed the options down to four potentials. In the end, only two of them had a “per technician” pricing model – the “per monitored device/agent” pricing model would end up costing us tens of thousands – so it came down to a war between those two: Atera and SyncroMSP.
To be honest they were a close match. I preferred the look and feel of Atera, although SyncroMSP was more feature-rich. We didn’t really need all the features SyncroMSP boasted though, and Atera was cheaper (we’re on the cheapest plan), so we ended up going with that.
Although we haven’t rolled out the agent to every device yet (we’re still mostly closed, and until we have the agent installed we have no way of deploying software to machines not on our network) we have started using it heavily. So far, zero problems and we are all liking it. It has already enabled us to pre-empt some problems that would have become tickets, solving them before they ever affect an end user or get reported. I’m looking forward to diving into it more; I’m especially excited about the recently announced Chocolatey support, which seems to work wonderfully.
It’s early days yet, hopefully we can become much more efficient and provide a better service. Only time will tell!
I should say before we get started, the fault for this lies entirely with GLPI, I place no blame at the feet of haveibeenpwned.com or Troy Hunt for this issue. It’s all good fun! Concerning? Oh, for sure. You can’t help but laugh though. Obligatory XKCD.
On GLPI 9.4.5, a call created (via the standard interface, email, etc.) containing the basic SQL injection string ';-- " will be logged normally with no abnormal behaviour. However, if a technician assigns themselves to that call via the quick “Assign to me” button, the SQL will be executed. In the case of the example string given above, all existing calls, open or closed, will be updated to have their descriptions deleted and replaced with whatever text appears before the malicious string. You can of course modify this to perform other SQL queries.
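I haven’t dug through GLPI’s code to find the exact query, but the observed behaviour matches the classic shape of the problem: the ticket description gets concatenated straight into an UPDATE statement, so the quote in the payload terminates the string literal early and the -- comments out the WHERE clause. Here’s a minimal, self-contained reconstruction – it uses SQLite purely for illustration, and the table and query are made up; GLPI itself runs on MySQL and its real query will differ.

```python
# Illustrative reconstruction only – not GLPI's actual schema or query.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, content TEXT)")
cur.executemany("INSERT INTO tickets (content) VALUES (?)",
                [("Printer is on fire",), ("Please reset my password",)])

# A new ticket whose description ends with the payload.
description = "Breach report follows ';-- \""

# Naive string building: the quote closes the string literal early and
# '--' comments out the WHERE clause, so EVERY ticket gets overwritten
# with whatever text came before the payload.
cur.executescript(
    f"UPDATE tickets SET content = '{description}' WHERE id = 3")

print(cur.execute("SELECT id, content FROM tickets").fetchall())
# [(1, 'Breach report follows '), (2, 'Breach report follows ')]

# The boring fix: bind the value instead of formatting it into the query,
# and the payload is stored as harmless text.
cur.execute("UPDATE tickets SET content = ? WHERE id = ?", (description, 1))
```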
This is fixed in 9.4.6; however, at the time of writing the GLPI download page still listed 9.4.5 as the latest update available – you had to go to the GitHub releases page to find 9.4.6. [2020-06-03 update: 9.4.6 is now available directly from the GLPI download page.]
9.4.6 was released before I found this exploit, however the GLPI website still showed 9.4.5 as the latest version. As far as we were concerned, we were on the latest version. Credit goes to whoever submitted it first; at the time I had no knowledge of this already being known and resolved. Here’s a video showcasing the issue:
The Long Version – how I found it, or how haveibeenpwned pwned our helpdesk
We use GLPI as our technical support ticketing system at work. There are better solutions out there and we’re investigating others, however GLPI has served us well since 2009ish. It is a web based, self hosted PHP/MySQL application.
haveibeenpwned.com is a website that catalogues and monitors data dumps for email addresses. It collects leaked or stolen databases, analyses them, pulls out any email addresses, and can be searched by anyone for free to see if your email address has been included in a breach. You can also subscribe to receive alerts if your email address, or an email address on a domain that you own, is included in any future breaches.
haveibeenpwned.com pwned GLPI
Around late April we upgraded GLPI from an ancient version to the latest – 9.4.5. All was well until we received an email from haveibeenpwned to our helpdesk support address, which automatically got logged as a support ticket. This email alerted us to some compromised accounts on our domain which were included in the latest Wishbone data dump.
I rushed to get the HIBP report generated – to see whose data on our domain had been compromised – by clicking a link in this email-turned-support-ticket. We got the report in a second email, which created a second ticket. I grabbed the data, deleted the second ticket (as we still had the original open) and perused the data. After doing the necessary work alerting the affected users to the breach of their data I went back to the original HIBP ticket and, realising I hadn’t assigned it to myself, did so and promptly solved it. All is well, time to move on?
Not quite. I and the other techs quickly noticed that every single ticket description had been deleted and replaced with partial header data from the HIBP email.
This immediately stunk of some kind of SQL injection flaw and my mind raced as to what the cause was. I had a suspicion I knew… Unfortunately we were in the middle of business hours and, due to Covid-19, fully remote – we needed a working helpdesk, and I don’t have the privilege of working on potential security issues in the day job. We restored from a backup taken the previous evening (not too much data was lost, thankfully) and carried on with our day supporting our users.
Understanding the flaw
As soon as work ended, I grabbed an Ubuntu .iso and built me a webserver VM. I had a feeling I knew what the cause of this SQLi was (check the header of the email shown above – you don’t need long to figure out where the ‘malicious’ code is!) but wasn’t sure how it got executed – the email was parsed correctly and tickets weren’t affected when the email came in; it wasn’t until around the time I deleted the second ticket and closed the first call that problems arose.
After building the VM with PHP and MySQL, I hopped onto the GLPI website and grabbed the latest version from their site, which is shown as 9.4.5.
After setting it up and adding some test calls, I forwarded our original HIBP email to a temporary account I linked this test GLPI install to. Once the email was pulled in I went through the same steps as I had done earlier in the day:
- Generate the report (well, not via the link in the email this time – I just forwarded the original email to my test email account, creating a second ticket in my install of GLPI)
- Delete the second ticket
- Assign the first ticket to myself
- Close it
I checked the test tickets I’d loaded in there beforehand and, lo and behold, they had all been wiped and replaced with the same content from the HIBP email!
I restored the VM to an earlier snapshot and went through the process again, pausing to check the other tickets at each step. I quickly discovered that the issue only occurs when you assign yourself to the ticket using the handy “Associate myself” button.
Making it malicious
The email data already wipes the content of all tickets, but as it stands it leaves a lot of junk data behind. I wanted to minimise the data required to exploit the flaw yet retain the same behaviour.
Another restore of GLPI followed, with more tests trying to determine the minimum amount of data needed to trigger the flaw. I spent some time cutting down the email from HIBP and quickly found that its opening lines were indeed the culprit – I managed to shrink the exploit down to six characters: ';-- " (the space and double-quote at the end appear to be required, though this could do with more testing). This achieves the same kind of malicious behaviour, in this case deleting the description of every ticket in the database. If you log the malicious call with this string as the title – or leave the title field blank, in which case GLPI automatically copies the description, i.e. our malicious string, into the title – then the titles on all the other calls get wiped too. If you do include a non-malicious title in the malicious ticket, the original titles on the other calls are not modified.
Success! This is a pretty severe issue, and although it does require some user interaction you can easily hide the exploit in an innocent-looking support call. GLPI supports HTML emails, which get rendered (almost) normally within the interface, so simply hiding the text in an attribute or the <head> or something will keep it invisible to the tech. You’ve just gotta wait for them to assign the call to themselves.
In the end, this isn’t a zero-click flaw but it is easily hidden. If you hide the exploit and it doesn’t work out the first time (a tech doesn’t assign it to themselves) you can easily try again with another ticket until it works. Odds are the techs aren’t going to read through the raw HTML of each ticket looking for problems.
Reporting it – late to the party
I hopped over to GLPI’s GitHub page to check for an existing issue, and to log my own if one didn’t exist, when what do I see but 9.4.6! I check the changelog and find this:
Well, darn. I downloaded and installed the update and can confirm the issue no longer exists. Congrats to whoever spotted it first! Edit 2020-06-04: Twitter user @thetaphi took a look at this and found that it was spotted and/or accidentally fixed by a developer whilst fixing a separate issue.
As it is already solved I don’t really want to dig through the code and find the offending line or develop the exploit further. Edit 2020-06-04: some people have taken a look after @troyhunt tweeted about this issue. It is interesting (concerning?) that something this simple got through to release, especially when you consider the way to initiate the exploit is by assigning yourself the call. Why does the call description get parsed at all here?
Either way – if you’re running GLPI, make sure you’re on the latest release. Or look for alternative software.