1 00:00:00,000 --> 00:00:10,667 *rc3 preroll music* 2 00:00:12,235 --> 00:00:17,520 Herald: So for the next talk, I have Jo Van Bulck, and Fritz Alder from the 3 00:00:17,520 --> 00:00:24,640 University of Leuven in Belgium, and David Oswald, professor for cyber security in 4 00:00:24,640 --> 00:00:29,840 Birmingham. They are here to talk about the trusted execution environment. You 5 00:00:29,840 --> 00:00:36,320 probably know it from Intel and so on, and you should probably not trust it all the 6 00:00:36,320 --> 00:00:42,160 way because it's software and it has its flaws. And so they're talking about 7 00:00:42,160 --> 00:00:47,680 ramming enclave gates, which is always good, a systematic vulnerability 8 00:00:47,680 --> 00:00:52,080 assessment of TEE shielding runtimes. Please go on with your talk. 9 00:00:52,080 --> 00:00:58,690 Jo van Bulck: Hi, everyone. Welcome to our talk. So I'm Jo, from the imec-DistriNet 10 00:00:58,690 --> 00:01:02,640 research group at KU Leuven. And today joining me are Fritz, also from 11 00:01:02,640 --> 00:01:06,800 Leuven and David from the University of Birmingham. And we have this very exciting 12 00:01:06,800 --> 00:01:11,440 topic to talk about, ramming enclave gates. But before we dive into that, I 13 00:01:11,440 --> 00:01:16,400 think most of you will not know what enclaves are, let alone what these TEEs are. 14 00:01:16,400 --> 00:01:23,520 So let me first start with some analogy. So enclaves are essentially a sort of a 15 00:01:23,520 --> 00:01:29,520 secure fortress in the processor, in the CPU. And so it's an encrypted memory 16 00:01:29,520 --> 00:01:36,960 region that is exclusively accessible from the inside.
And what we know from the long 17 00:01:36,960 --> 00:01:41,560 history of fortress attacks and defenses, of course, is that when you cannot take a 18 00:01:41,560 --> 00:01:46,560 fortress because the walls are high and strong, you typically aim for the gates, 19 00:01:46,560 --> 00:01:51,280 right? That's the weakest point in any fortress defense. And that's exactly 20 00:01:51,280 --> 00:01:57,440 the idea of this research. So it turns out to apply to enclaves as well. And we have 21 00:01:57,440 --> 00:02:01,520 been ramming the enclave gates. We have been attacking the input/output interface 22 00:02:01,520 --> 00:02:07,600 of the enclave. So a very simple idea, but very drastic consequences, I dare to say. 23 00:02:07,600 --> 00:02:14,640 So this is sort of the summary of our research. With over 40 interface 24 00:02:14,640 --> 00:02:20,480 sanitization vulnerabilities that we found in over 8 widely used open source enclave 25 00:02:20,480 --> 00:02:27,040 projects. So we will go a bit into detail over that in the rest of the slides. Also, 26 00:02:27,040 --> 00:02:32,400 a nice thing to say here is that this resulted in two academic papers to date, 27 00:02:32,400 --> 00:02:38,880 over 7 CVEs and altogether quite some responsible disclosure, lengthy embargo 28 00:02:38,880 --> 00:02:46,095 periods. David Oswald: OK, so, uh, I guess we 29 00:02:46,095 --> 00:02:55,197 should talk about why we need such enclave fortresses anyway. So if you look at a 30 00:02:55,197 --> 00:03:00,230 traditional kind of like operating system or computer architecture, you have a very 31 00:03:00,230 --> 00:03:06,131 large trusted computing base.
So, for instance, on the laptop that you most 32 00:03:06,131 --> 00:03:12,265 likely use to watch this talk, you trust the kernel, you trust maybe a 33 00:03:12,265 --> 00:03:16,909 hypervisor if you have one, and the whole hardware underneath the system: the CPU, 34 00:03:16,909 --> 00:03:23,116 memory, maybe a hard drive, a trusted platform module and the like. So actually 35 00:03:23,116 --> 00:03:28,830 the problem here is, with such a large TCB, trusted computing base, you can also have 36 00:03:28,830 --> 00:03:35,521 vulnerabilities basically everywhere, and also malware hiding in all these parts. So 37 00:03:35,521 --> 00:03:41,951 the idea of this enclaved execution, as we find it, for instance, in Intel SGX, which 38 00:03:41,951 --> 00:03:48,406 is built into most recent Intel processors, is that you take most of the 39 00:03:48,406 --> 00:03:54,078 software stack between an actual application, here the enclave app, and the 40 00:03:54,078 --> 00:04:01,005 actual CPU out of the TCB. So now you only really trust the CPU and, of course, you 41 00:04:01,005 --> 00:04:05,148 trust your own code, but you don't have to trust the OS anymore. And SGX, for 42 00:04:05,148 --> 00:04:10,049 instance, promises to protect against an attacker who has achieved root in the 43 00:04:10,049 --> 00:04:14,694 operating system, and even, depending on who you ask, against, for instance, a 44 00:04:14,694 --> 00:04:20,864 malicious cloud provider. So imagine you run your application in the cloud: then 45 00:04:20,864 --> 00:04:26,724 you can still run your code in a trusted way with hardware-level isolation. And you 46 00:04:26,724 --> 00:04:30,754 have attestation and so on. And you no longer really have to trust even the 47 00:04:30,754 --> 00:04:40,503 administrator.
So the problem is, of course, that attack surface remains, so 48 00:04:40,503 --> 00:04:47,378 previous attacks, and some of them, I think, will also be presented at this remote 49 00:04:47,378 --> 00:04:52,395 Congress this year, have targeted vulnerabilities in the microarchitecture 50 00:04:52,395 --> 00:04:58,589 of the CPU. So you are hacking basically at the hardware level. So you had Foreshadow, 51 00:04:58,589 --> 00:05:05,711 you had microarchitectural data sampling, Spectre and LVI and the like. But what 52 00:05:05,711 --> 00:05:10,182 less attention has been paid to, and what we'll talk about more in this presentation, 53 00:05:10,182 --> 00:05:17,028 is the software level inside the enclave, which I hinted at, that there is some 54 00:05:17,028 --> 00:05:22,360 software that you trust. But now we'll look in more detail into what actually is 55 00:05:22,360 --> 00:05:30,300 in such an enclave. Now from the software side. So can an attacker exploit 56 00:05:30,300 --> 00:05:34,304 any classical software vulnerabilities in the enclave? 57 00:05:35,520 --> 00:05:40,880 Jo: Yes David, that's quite an interesting approach, right? Let's aim for the 58 00:05:40,880 --> 00:05:45,200 software. So we have to understand what is the software landscape out there for these 59 00:05:45,200 --> 00:05:49,760 SGX enclaves and TEEs in general. So that's what we did. We started with an 60 00:05:49,760 --> 00:05:53,760 analysis and you see some screenshots here. This is actually a growing open 61 00:05:53,760 --> 00:05:58,960 source ecosystem. Many, many of these runtimes, library operating systems, SDKs. 62 00:05:58,960 --> 00:06:03,760 And before we dive into the details, I want to dwell on what is the 63 00:06:03,760 --> 00:06:09,760 common factor that all of them share, right? What is kind of the idea of these 64 00:06:09,760 --> 00:06:17,040 enclave development environments?
So here, what any TEE, trusted execution 65 00:06:17,040 --> 00:06:22,400 environment gives you is this notion of a secure enclave oasis in a hostile 66 00:06:22,400 --> 00:06:27,200 environment. And you can do secure computations in the green box while the 67 00:06:27,200 --> 00:06:33,440 outside world is burning. As with any defense mechanism, as I said earlier, the 68 00:06:33,440 --> 00:06:37,680 devil is in the details and typically at the gate, right? So how do you mediate 69 00:06:37,680 --> 00:06:42,880 between that untrusted world where the desert is on fire, and the secure oasis in 70 00:06:42,880 --> 00:06:48,480 the enclave? And the intuition here is that you need some sort of intermediary 71 00:06:48,480 --> 00:06:53,040 software layer, what we call a shielding runtime. So it kind of makes a secure 72 00:06:53,040 --> 00:06:57,760 bridge to go from the untrusted world to the enclave and back. And that's what we 73 00:06:57,760 --> 00:07:03,680 are interested in. To see, what kind of security checks you need to do there. So 74 00:07:03,680 --> 00:07:07,680 it's quite a beautiful picture you have on the right, the fertile enclave and on the 75 00:07:07,680 --> 00:07:13,680 left the hostile desert. And we make this secure bridge in between. And what we are 76 00:07:13,680 --> 00:07:19,520 interested in is what if it goes wrong? What if your bridge itself is flawed? So 77 00:07:19,520 --> 00:07:25,600 to answer that question, we look at that yellow box and we ask what kind of 78 00:07:25,600 --> 00:07:30,400 sanitization, what kind of security checks do you need to apply when you go from the 79 00:07:30,400 --> 00:07:35,360 outside to the inside and back from the inside to the outside. And one of the key 80 00:07:35,360 --> 00:07:38,960 contributions that we have built up in the past two years of this research, I think, 81 00:07:38,960 --> 00:07:45,920 is that that yellow box can be subdivided into 2 smaller subsequent layers. 
And the 82 00:07:45,920 --> 00:07:51,440 first one is this ABI, application binary interface, very low level CPU state. And 83 00:07:51,440 --> 00:07:54,640 the second one is what we call API, application programming interface. So 84 00:07:54,640 --> 00:07:58,160 that's the kind of state that is already visible at the programming-language level. In the 85 00:07:58,160 --> 00:08:02,400 remainder of the presentation, we will kind of guide you through some relevant 86 00:08:02,400 --> 00:08:06,080 vulnerabilities on both these layers to give you an understanding of what this 87 00:08:06,080 --> 00:08:11,760 means. So first, Fritz will guide you to the exciting low level landscape of the 88 00:08:11,760 --> 00:08:15,440 ABI. Fritz: Yeah, exactly. And Jo, you just 89 00:08:15,440 --> 00:08:21,840 said it's the CPU state and it's the application binary interface. But let's 90 00:08:21,840 --> 00:08:27,200 take a look at what this means, actually. So it means basically that the attacker 91 00:08:27,200 --> 00:08:39,348 controls the CPU register contents and that... On every enclave entry and every 92 00:08:39,348 --> 00:08:46,480 enclave exit, we need to perform some tasks. So that the enclave and the 93 00:08:46,480 --> 00:08:56,560 trusted runtime have some, like, well-initialized CPU state and the compiler can 94 00:08:56,560 --> 00:09:03,360 work with the calling conventions that it expects. So that's basically the key 95 00:09:03,360 --> 00:09:09,120 part. We need to initialize the CPU registers when entering the enclave and 96 00:09:09,120 --> 00:09:15,520 scrub them when exiting the enclave. So we can't just take anything 97 00:09:15,520 --> 00:09:20,960 that the attacker gives us as a given. We have to initialize it to something proper. 98 00:09:20,960 --> 00:09:30,320 And we looked at multiple TEE runtimes and multiple TEEs and we found a lot of 99 00:09:30,320 --> 00:09:37,840 vulnerabilities in this ABI layer.
And one key insight of this analysis is basically 100 00:09:37,840 --> 00:09:45,120 that a lot of these vulnerabilities happen on complex instruction set processors, so 101 00:09:45,120 --> 00:09:51,760 on CISC processors and basically on the Intel SGX TEE. We also looked at some RISC 102 00:09:51,760 --> 00:09:57,840 processors and of course, it's not representative, but it's like immediately 103 00:09:57,840 --> 00:10:06,000 visible that the complex x86 ABI seems to have a way larger attack 104 00:10:06,000 --> 00:10:13,760 surface than the simpler RISC designs. So let's take a look at one example of this 105 00:10:13,760 --> 00:10:20,080 more complex design. So, for example, there are the x86 string instructions that 106 00:10:20,080 --> 00:10:26,800 are controlled by the direction flag. So there's a special x86 rep instruction that 107 00:10:26,800 --> 00:10:33,200 basically allows you to perform streamed memory operations. So if you do a memset 108 00:10:33,200 --> 00:10:40,960 on a buffer, this will be compiled to the rep string operation instruction. And the 109 00:10:40,960 --> 00:10:50,720 idea here is basically that the buffer is walked from left to right and overwritten 110 00:10:50,720 --> 00:10:56,880 by memset. But this direction flag also allows you to go through it from right to 111 00:10:56,880 --> 00:11:03,200 left. So backwards. Let's not think about why this was a good idea or why this is 112 00:11:03,200 --> 00:11:08,720 needed. But definitely it is possible to just set the direction flag to one and run 113 00:11:08,720 --> 00:11:16,000 through this buffer backwards. And what we found out is that the System-V ABI actually says 114 00:11:16,000 --> 00:11:21,120 that this must be clear or set to forward on function entry and return. 115 00:11:21,120 --> 00:11:26,880 And that compilers expect this to happen. So let's take a look at this when we do 116 00:11:26,880 --> 00:11:33,840 this in our enclave.
So in our enclave, when we, in our trusted application, 117 00:11:33,840 --> 00:11:39,680 perform this memset on our buffer, on normal entry with the normal direction 118 00:11:39,680 --> 00:11:45,040 flag this just means that we walk this buffer from front to back. So you can see 119 00:11:45,040 --> 00:11:51,680 here it just runs correctly from front to back. But now, if the attacker enters the 120 00:11:51,680 --> 00:11:58,880 enclave with the direction flag set to 1, so set to run backwards, this now means 121 00:11:58,880 --> 00:12:05,840 that it starts from the start of our buffer, so from where the pointer points right now, and you 122 00:12:05,840 --> 00:12:10,640 can now see it actually runs backwards. So that's a problem. And that's definitely 123 00:12:10,640 --> 00:12:16,193 something that we don't want in our trusted applications because, well, as you 124 00:12:16,193 --> 00:12:22,880 can imagine, it allows you to overwrite keys that lie in the memory locations that you 125 00:12:22,880 --> 00:12:27,280 can now walk backwards into. It allows you to read out things; that's definitely not 126 00:12:27,280 --> 00:12:32,960 something that is wanted. And when we reported this, this actually got a nice 127 00:12:32,960 --> 00:12:38,960 CVE assigned with the base score High, as you can see here on the next slide. And 128 00:12:38,960 --> 00:12:46,800 while you may say, OK, well, that's one instance. And you just have to think of 129 00:12:46,800 --> 00:12:54,400 all the flags to sanitize and all the flags to check. But wait, of course, 130 00:12:54,400 --> 00:13:02,960 there's always more, right? So as we found out, there's actually the floating point 131 00:13:02,960 --> 00:13:07,440 unit, which comes with a like, whole lot of other registers and a whole lot of 132 00:13:07,440 --> 00:13:17,040 other things to exploit. And I will spare you all the details.
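To make the direction-flag issue concrete, here is a minimal Python model of the `rep stosb` primitive behind memset (a simulation, not real machine code; the memory layout and names are purely illustrative):

```python
def rep_stosb(memory, rdi, al, rcx, df=0):
    """Model of the x86 `rep stosb` primitive behind memset: write
    `rcx` copies of the byte `al` starting at index `rdi`.  With the
    direction flag DF=0 the destination pointer increments (forward
    fill); with DF=1 it decrements, clobbering the bytes *before*
    the buffer the enclave meant to clear."""
    step = -1 if df else 1
    for _ in range(rcx):
        memory[rdi] = al
        rdi += step
    return memory

# Illustrative layout: a 4-byte secret sits directly below the buffer.
mem = list(b"KEY!....")                        # bytes 0-3 secret, 4-7 buffer
clean = rep_stosb(mem.copy(), 4, 0, 4, df=0)   # forward: clears only the buffer
evil = rep_stosb(mem.copy(), 4, 0, 4, df=1)    # backward: wipes the secret
```

This is why the shielding runtime has to execute `cld` (clear DF) on every enclave entry instead of trusting whatever flag state the attacker left behind.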
But just for this 133 00:13:17,040 --> 00:13:25,704 presentation, just know that there is an older x87 FPU and a newer SSE unit that does 134 00:13:25,704 --> 00:13:31,920 vector floating-point operations. So there's the FPU control word and the MXCSR 135 00:13:31,920 --> 00:13:39,849 register for these newer instructions. And this x87 FPU is older, but it's still used, 136 00:13:39,849 --> 00:13:45,680 for example, for extended precision, like long double variables. So old and new 137 00:13:45,680 --> 00:13:49,120 doesn't really apply here because both are still relevant. And that's kind of the 138 00:13:49,120 --> 00:13:58,160 thing with x86 and x87 here. That old, archaic things that you could say are 139 00:13:58,160 --> 00:14:03,280 outdated are still relevant or are still used nowadays. And again, if you look at 140 00:14:03,280 --> 00:14:09,200 the System-V ABI now, we saw that these control bits are callee-saved. So they are 141 00:14:09,200 --> 00:14:13,680 preserved across function calls. And the idea here, which to some degree holds 142 00:14:13,680 --> 00:14:22,400 merit, is that this is some global state that you can set and that is 143 00:14:22,400 --> 00:14:27,680 preserved within one application. So one application can set some global state and 144 00:14:27,680 --> 00:14:35,280 keep the state across all its usage. But the problem, as you can see here, is: 145 00:14:35,280 --> 00:14:39,760 our enclave is basically one application, and we don't want our 146 00:14:39,760 --> 00:14:44,480 attacker to have control over the global state within our trusted application, 147 00:14:44,480 --> 00:14:52,502 right? So what happens if FPU settings are preserved across calls? Well, 148 00:14:52,502 --> 00:14:57,760 for a normal user, let's say we just do some calculation inside the enclave. Like 149 00:14:57,760 --> 00:15:03,280 2.1 times 3.4, which just nicely calculates to a 7.14, a long double.
150 00:15:03,280 --> 00:15:09,680 That's nice, right? But what happens if the attacker now enters the enclave with 151 00:15:09,680 --> 00:15:15,680 some corrupt precision and rounding modes for the FPU? Well, then we actually get 152 00:15:15,680 --> 00:15:21,520 another result. So we get distorted results with a lower precision and a 153 00:15:21,520 --> 00:15:26,400 different rounding mode. So actually it's rounding down here, whenever it exceeds 154 00:15:26,400 --> 00:15:31,280 the precision. And this is something we don't want, right? So this is something 155 00:15:31,280 --> 00:15:38,240 where the developer expects a certain precision or long double precision, but 156 00:15:38,240 --> 00:15:43,840 the attacker could actually just reduce it to a very short precision. And we reported 157 00:15:43,840 --> 00:15:49,760 this and we actually found this issue also in Microsoft OpenEnclave. That's why it's 158 00:15:49,760 --> 00:15:55,600 marked as not exploitable here. But what we found interesting is that the Intel SGX 159 00:15:55,600 --> 00:16:01,200 SDK, which was vulnerable, patched this with some xrstor instruction, which 160 00:16:01,200 --> 00:16:10,400 completely restores the extended state to a known value, while OpenEnclave only 161 00:16:10,400 --> 00:16:16,320 restored the specific register that was affected, using the ldmxcsr instruction. And 162 00:16:16,320 --> 00:16:19,600 so let's just skip over the next few slides here, because I just want to give 163 00:16:19,600 --> 00:16:27,120 you the idea that this was not enough. So we found out that even if you restored 164 00:16:27,120 --> 00:16:32,640 this specific register, there's still another data register that you can just 165 00:16:32,640 --> 00:16:40,000 mark as in use before entering the enclave and with which the attacker can make 166 00:16:40,000 --> 00:16:45,600 any floating-point calculation result in a NaN, a not-a-number.
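The effect of attacker-chosen precision and rounding can be mimicked with Python's `decimal` contexts, which here play the role of the x87 control word (an analogy only; real x87 precision control acts on binary floats, and the function name is made up):

```python
from decimal import Decimal, localcontext, ROUND_DOWN

def enclave_calc(x, y, prec=28, rounding=None):
    """The multiplication as the enclave developer wrote it.  `prec`
    and `rounding` stand in for the FPU control word, which the
    attacker gets to choose on enclave entry (illustrative model)."""
    with localcontext() as ctx:
        ctx.prec = prec                    # attacker-degraded precision
        if rounding is not None:
            ctx.rounding = rounding        # attacker-chosen rounding mode
        return ctx.multiply(Decimal(x), Decimal(y))

honest = enclave_calc("2.1", "3.4")              # full precision: 7.14
degraded = enclave_calc("2.1", "3.4", prec=2)    # silently truncated result
```

The enclave source code is identical in both runs; only the caller-controlled "control word" differs, which is exactly why it must be sanitized on entry.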
And this is silent, so 167 00:16:45,600 --> 00:16:50,080 this is not programming language specific, this is not developer specific. This is a 168 00:16:50,080 --> 00:16:55,840 silent ABI issue that the calculations are just not a number. So we also reported 169 00:16:55,840 --> 00:17:03,600 this. And now, thankfully, all enclave runtimes use this full xrstor instruction 170 00:17:03,600 --> 00:17:09,600 to fully restore this extended state. So it took two CVEs, but now luckily, they 171 00:17:09,600 --> 00:17:15,760 all perform this nice full restore. So I don't want to go into the full details of 172 00:17:15,760 --> 00:17:21,280 our use cases now or of our case studies that we did. So let me just give you 173 00:17:21,280 --> 00:17:29,440 the ideas of these case studies. So we looked at these issues and wanted to look 174 00:17:29,440 --> 00:17:36,800 into whether they are just a theoretical difficulty or whether they are actually bad. And we found that we 175 00:17:36,800 --> 00:17:41,680 can use overflows as a side channel to deduce secrets. So, for example, the 176 00:17:41,680 --> 00:17:49,120 attacker could use the MXCSR register to unmask exceptions that are then triggered inside the 177 00:17:49,120 --> 00:17:58,400 enclave by some input-dependent multiplication. And we found out 178 00:17:58,400 --> 00:18:03,040 that this side channel, if you have some input-dependent multiplication in the enclave, can 179 00:18:03,040 --> 00:18:11,920 actually be used to perform a binary search on this input space. And 180 00:18:11,920 --> 00:18:16,880 we can actually retrieve this multiplication secret with a deterministic 181 00:18:16,880 --> 00:18:23,920 number of steps. So even though we just have a single mask bit we flip, we can 182 00:18:23,920 --> 00:18:31,760 actually retrieve a secret with deterministic steps. And just for the, just 183 00:18:31,760 --> 00:18:36,560 so that you know, there's more you can do. We can also do machine learning in the 184 00:18:36,560 --> 00:18:44,080 enclave.
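That deterministic binary search can be sketched in a few lines, with the unmasked-exception trap modelled as a simple comparison oracle (the secret, bounds and names are all illustrative):

```python
SECRET = 1337  # hidden enclave multiplier (simulation only)

def exception_fires(threshold):
    """One enclave invocation: the attacker has unmasked FP exceptions
    in MXCSR and picks an input so the input-dependent multiplication
    overflows (traps) iff the secret is at least `threshold`.  Modelled
    as a comparison rather than real floating-point arithmetic."""
    return SECRET >= threshold

def recover_secret(hi=2**16):
    """Deterministic binary search over the input space, counting how
    many enclave invocations (oracle queries) are needed."""
    lo, steps = 0, 0                 # invariant: lo <= SECRET < hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        steps += 1
        if exception_fires(mid):
            lo = mid                 # secret is at least mid
        else:
            hi = mid                 # secret is below mid
    return lo, steps
```

For a 16-bit search space this always needs exactly 16 queries, matching the talk's point that the secret falls to a deterministic number of steps.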
So Jo said it nicely, you can run it inside the TEE, inside the cloud. And 185 00:18:44,080 --> 00:18:47,760 that's great for machine learning, right? So let's do a handwritten digit 186 00:18:47,760 --> 00:18:55,200 recognition. And if you look at just the model that we look at, we just have two 187 00:18:55,200 --> 00:19:00,560 users where one user pushes some machine learning model and the other user 188 00:19:00,560 --> 00:19:05,520 pushes some input and everything is protected with enclaves, right? 189 00:19:05,520 --> 00:19:10,960 Everything is secure. But we actually found out that we can poison these FPU 190 00:19:10,960 --> 00:19:18,320 registers and degrade the performance of this machine learning down from all digits 191 00:19:18,320 --> 00:19:24,160 being predicted correctly to just eight percent of digits predicted correctly. And 192 00:19:24,160 --> 00:19:31,600 actually it was just predicting the same number for all digits. And this basically made 193 00:19:31,600 --> 00:19:37,520 this machine learning model useless, right? There's more we did, so we can also 194 00:19:37,520 --> 00:19:42,320 attack Blender, with slight image differences between rendered Blender 195 00:19:42,320 --> 00:19:48,720 images. But this is just for you to see that it's a small but tricky thing, 196 00:19:48,720 --> 00:19:56,480 and to indicate that things can go wrong very fast on the ABI level once you play around 197 00:19:56,480 --> 00:20:02,560 with it. So this is about the CPU state. And now we will talk more about the 198 00:20:02,560 --> 00:20:06,400 application programming interface that I think more of you will be comfortable 199 00:20:06,400 --> 00:20:09,440 with. David: Yeah, we take, uh, thank you, 200 00:20:09,440 --> 00:20:14,160 Fritz. We take a quite simple example.
So let's assume that we actually load a 201 00:20:14,160 --> 00:20:18,560 standard Unix binary into such an enclave, and there are frameworks that can do that, 202 00:20:18,560 --> 00:20:24,960 such as Graphene or so. And what I want to illustrate with that example is that it's 203 00:20:24,960 --> 00:20:29,680 actually very important to check where pointers come from. Because the enclave 204 00:20:29,680 --> 00:20:34,686 kind of partitions memory into untrusted memory and enclave memory and they live in 205 00:20:34,686 --> 00:20:40,800 a shared address space. So the problem here is as follows. Let's assume we have 206 00:20:40,800 --> 00:20:47,120 an echo binary that just prints an input. And we give it as an argument a string and 207 00:20:47,120 --> 00:20:52,720 that normally, when everything is fine, points to some string, let's say hello 208 00:20:52,720 --> 00:20:58,480 world, which is located in the untrusted memory. So if everything runs as it 209 00:20:58,480 --> 00:21:03,040 should, this enclave will run, will get the pointer to untrusted memory and will 210 00:21:03,040 --> 00:21:08,800 just print that string. But the problem is now actually the enclave has access also 211 00:21:08,800 --> 00:21:15,520 to its own trusted memory. So if you don't check this pointer and the attacker passes 212 00:21:15,520 --> 00:21:20,640 a pointer to a secret that might live in enclave memory, what will happen? Well the 213 00:21:20,640 --> 00:21:25,200 enclave will fetch it from there and will just print it. So suddenly you have turned 214 00:21:25,200 --> 00:21:32,080 this kind of like into a memory disclosure vulnerability. And we can see 215 00:21:32,080 --> 00:21:35,840 that in action here for the framework named Graphene that I mentioned. So we 216 00:21:35,840 --> 00:21:40,640 have a very simple hello world binary and we run it with a couple of command line 217 00:21:40,640 --> 00:21:45,440 arguments.
And now on the untrusted side, we actually change a memory address to 218 00:21:45,440 --> 00:21:50,080 point into enclave memory. And as you can see, normally, it should print here test, 219 00:21:50,080 --> 00:21:55,120 but actually it prints a super secret enclave string that lived inside 220 00:21:55,120 --> 00:22:00,960 the memory space of the enclave. So these kinds of vulnerabilities are quite 221 00:22:00,960 --> 00:22:05,600 well known from user-to-kernel research and from other instances. And they're 222 00:22:05,600 --> 00:22:11,600 called confused deputy. So the deputy, the enclave, kind of has the privilege to read 223 00:22:11,600 --> 00:22:17,280 enclave memory and suddenly does something which it was not supposed to do, because it 224 00:22:17,280 --> 00:22:22,000 didn't really check where the memory belongs. So I 225 00:22:22,000 --> 00:22:27,600 think this vulnerability, uh, seems to be quite trivial to solve. You simply 226 00:22:27,600 --> 00:22:31,680 check all the time where, uh, where pointers come from. But as you will see, 227 00:22:31,680 --> 00:22:37,920 you know, it's often not quite that easy. Jo: Yes, David, that's quite insightful, 228 00:22:37,920 --> 00:22:41,840 that we should check all of the pointers. So that's what we did. We checked all of 229 00:22:41,840 --> 00:22:46,320 the pointer checks and we noticed that one of these runtimes has a very interesting, kind of roundabout 230 00:22:46,320 --> 00:22:49,760 way to check these things. Of course, the code is high quality. They checked all 231 00:22:49,760 --> 00:22:53,360 of the pointers, but you have to do something special for strings. We're 232 00:22:53,360 --> 00:22:57,840 talking here the C programming language. So strings are null-terminated: 233 00:22:57,840 --> 00:23:02,880 they end with a zero byte, and you can use a function such as strlen to compute 234 00:23:02,880 --> 00:23:05,920 the length of this string.
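A minimal sketch of the pointer sanitization a shielding runtime has to do before dereferencing attacker-supplied pointers; the enclave layout constants and function names are made up for this example:

```python
ENCLAVE_BASE, ENCLAVE_SIZE = 0x10_0000, 0x4_0000  # illustrative layout

def outside_enclave(ptr, size):
    """The whole range [ptr, ptr+size) must lie strictly outside
    enclave memory; a range straddling the boundary is also rejected."""
    if ptr < 0 or size < 0:
        return False
    end = ptr + size
    return end <= ENCLAVE_BASE or ptr >= ENCLAVE_BASE + ENCLAVE_SIZE

def echo(ptr, size, memory):
    """Enclave entry point: only touches the attacker pointer after the
    range check; skipping it turns the enclave into a confused deputy
    that happily leaks its own trusted memory."""
    if not outside_enclave(ptr, size):
        raise ValueError("pointer (partially) inside enclave: rejected")
    return memory[ptr:ptr + size]
```

With the check in place, the "pointer to the secret inside the enclave" from the Graphene demo above would be rejected instead of printed.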
And let's see how they check whether a string lies 235 00:23:05,920 --> 00:23:10,880 completely outside of enclave memory. So the first step is you compute the length of the 236 00:23:10,880 --> 00:23:15,600 string, it's ten, and then you check whether the string from start to end lives 237 00:23:15,600 --> 00:23:19,280 completely outside of the enclave. That sounds legitimate. Then you accept the 238 00:23:19,280 --> 00:23:23,760 string. So this works beautifully. Let's see, however, how it behaves when 239 00:23:23,760 --> 00:23:27,440 we pass a malicious pointer. So we are not going to pass a string hello world outside of 240 00:23:27,440 --> 00:23:34,160 the enclave, but we pass a string secret, one that lies within the enclave. So the first 241 00:23:34,160 --> 00:23:38,320 step will be that the enclave starts computing the length of that string that 242 00:23:38,320 --> 00:23:42,960 lies within the enclave. That sounds already fishy, but then luckily everything 243 00:23:42,960 --> 00:23:46,800 comes out OK, because then it will detect that this actually should never have been done 244 00:23:46,800 --> 00:23:50,880 and that this string lies inside the enclave. So it will reject the call, 245 00:23:50,880 --> 00:23:56,080 the call into the enclave. So that's fine. But some of you who know side 246 00:23:56,080 --> 00:24:00,160 channels know that this is exciting, because the enclave did some computation 247 00:24:00,160 --> 00:24:04,080 it was never supposed to do. And the length of that computation depends on the 248 00:24:04,080 --> 00:24:10,480 amount of non-zero bytes within the enclave. So what we have here is a side 249 00:24:10,480 --> 00:24:16,080 channel where the enclave will always return false. But the time it takes to 250 00:24:16,080 --> 00:24:21,600 return false depends on the amount of non-zero bytes inside that secret enclave 251 00:24:21,600 --> 00:24:26,640 memory block. So that's what we found.
We are excited and we said, OK, it's a simple 252 00:24:26,640 --> 00:24:31,920 timing channel. Let's go with that. So we did that and you can see a graph here and 253 00:24:31,920 --> 00:24:36,480 it turns out it's not as easy as it seems. So I can tell you that the blue one is for 254 00:24:36,480 --> 00:24:39,840 a string of length one, and the other one is for a string of length two. But there is no 255 00:24:39,840 --> 00:24:43,760 way you can see that from that graph, because these processors are 256 00:24:43,760 --> 00:24:47,920 lightning fast, so that one single increment instruction completely 257 00:24:47,920 --> 00:24:52,560 dissolves into the pipeline. You will not see that by measuring execution time. 258 00:24:52,560 --> 00:24:59,120 So we need something different. And we have smart papers in the literature; 259 00:24:59,120 --> 00:25:03,920 one of the very common attacks on SGX is also something that Intel describes here. 260 00:25:03,920 --> 00:25:09,520 You can see which memory pages, 4 kB memory blocks, are being accessed while the 261 00:25:09,520 --> 00:25:14,080 enclave executes, because you control the operating system and the paging machinery. 262 00:25:14,880 --> 00:25:19,680 So that's what we tried to do. We thought this is a nice channel and we were there 263 00:25:19,680 --> 00:25:24,480 scratching our heads, looking at that code: a very simple for loop that fits entirely 264 00:25:24,480 --> 00:25:29,040 within one page and a very short string that fits entirely within one page. So 265 00:25:29,040 --> 00:25:33,920 just having page-granular memory access information is not going to help us here, because 266 00:25:34,560 --> 00:25:39,440 both the code and the data fit on a single page. So this is essentially what 267 00:25:39,440 --> 00:25:44,320 we call the resolution of the side channel. This is not accurate enough. So 268 00:25:44,320 --> 00:25:51,040 we need a better technique.
And well, here we have been working on quite an exciting 269 00:25:51,040 --> 00:25:55,120 framework. It uses interrupts and it's called SGX-Step. So it's a completely 270 00:25:55,120 --> 00:26:01,280 open source framework on GitHub. And what it allows you to do essentially is to 271 00:26:01,280 --> 00:26:05,200 execute an enclave one step at a time, hence the name. So it allows you to 272 00:26:05,200 --> 00:26:09,040 interleave the execution of the enclave with attacker code after every single 273 00:26:09,040 --> 00:26:12,640 instruction. And the way we pull it off is highly technical. We have this Linux 274 00:26:12,640 --> 00:26:18,480 kernel driver and a little library in user space, but that's 275 00:26:18,480 --> 00:26:23,200 a bit out of scope. The matter is that we can interrupt an enclave after every 276 00:26:23,200 --> 00:26:27,538 single instruction, and then let's see what we can do with that. So. What we 277 00:26:27,538 --> 00:26:33,720 essentially can do here is to execute the enclave, with all these strlen increment 278 00:26:33,720 --> 00:26:38,918 instructions, one at a time, and after every interrupt, we can simply check 279 00:26:38,918 --> 00:26:45,066 whether the enclave accessed the page where our target string resides. Another way 280 00:26:45,066 --> 00:26:50,683 to think about it is that we have the execution of the enclave and we can break 281 00:26:50,683 --> 00:26:56,998 that up into individual steps and then just count the steps, and hence we have 282 00:26:56,998 --> 00:27:03,440 a deterministic timing channel. So in other words, we have an oracle that tells you where all 283 00:27:03,440 --> 00:27:08,824 zero bytes are in the enclave. You may wonder if that's actually useful. It turns 284 00:27:08,824 --> 00:27:12,737 out that, I mean, some people who might be more into exploitation already 285 00:27:12,737 --> 00:27:17,760 know that it's good to know whether a zero is somewhere in memory or not.
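The step-counting oracle can be simulated in a few lines: the vulnerable routine below does strlen-style work before rejecting the pointer, and counting its loop iterations, which interrupt-driven single-stepping makes possible, reveals where the first zero byte lies (a model of the attack, not actual SGX-Step code):

```python
def enclave_check_string(memory, ptr):
    """The flawed check order described in the talk: the shielding
    runtime first runs a strlen-style loop over the attacker pointer
    and only *then* rejects pointers into the enclave.  Returns the
    verdict plus the number of loop iterations, the quantity an
    attacker can observe by single-stepping with timer interrupts."""
    steps = 0
    while memory[ptr + steps] != 0:
        steps += 1
    return False, steps  # always rejected, but after secret-dependent work

def zero_byte_oracle(memory, ptr):
    """Attacker view: the deterministic interrupt count leaks the
    offset of the first zero byte inside enclave memory."""
    _, steps = enclave_check_string(memory, ptr)
    return ptr + steps
```

The verdict itself is useless (always false); the information is entirely in how many steps it took to produce it.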
And we will 286 00:27:17,760 --> 00:27:23,537 now do one example where we break AES-NI, which is the hardware acceleration 287 00:27:23,537 --> 00:27:29,000 for AES in Intel processors. So firstly, that actually operates only on registers, 288 00:27:29,000 --> 00:27:34,130 and Jo just said you can only run that oracle on memory. But there is 289 00:27:34,130 --> 00:27:38,832 another trick that comes into play here. So whenever the enclave is interrupted, it 290 00:27:38,832 --> 00:27:44,080 will store its current register state somewhere in memory, in the so-called SSA frame. So we 291 00:27:44,080 --> 00:27:50,425 can actually interrupt it and thereby force it to write its 292 00:27:50,425 --> 00:27:56,840 register state to memory. And then we can run the zero-byte oracle on this 293 00:27:56,840 --> 00:28:02,722 SSA memory, and what we figure out is where a zero is, or if there's any zero in 294 00:28:02,722 --> 00:28:08,747 the state. So I don't want to go into the gory details of AES. But what we 295 00:28:08,747 --> 00:28:15,835 basically do is we find whenever there's a zero in the state before the 296 00:28:15,835 --> 00:28:21,850 last round of AES, and then that zero will go through the S-box, will be XORed with a key 297 00:28:21,850 --> 00:28:27,520 byte, and that will give us a ciphertext byte. But we actually know the ciphertext 298 00:28:27,520 --> 00:28:33,601 byte, so we can go backwards. So we can compute from the 299 00:28:33,601 --> 00:28:39,763 zero up to here, and from the ciphertext back to this XOR. And that way we can compute directly one 300 00:28:39,763 --> 00:28:45,840 key byte. So we repeat that whole thing 16 times, until we have found a zero in every 301 00:28:45,840 --> 00:28:51,460 byte of the state before the last round. And that way we get the whole final round 302 00:28:51,460 --> 00:28:56,294 key.
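The last-round inversion described above comes down to a single XOR. A minimal sketch, assuming the standard AES final round (SubBytes then AddRoundKey, with the ShiftRows byte permutation omitted for clarity): if the zero-byte oracle says a state byte entering the last round is 0x00, the attacker recovers the matching round-key byte from the known ciphertext byte.

```python
# AES final round per byte: c = SBox(s) ^ k.  If the oracle reveals
# s == 0x00, then k = c ^ SBox(0x00).  The AES S-box maps 0x00 -> 0x63.

SBOX_ZERO = 0x63

def recover_key_byte(ciphertext_byte: int) -> int:
    """Invert the final round for a byte whose pre-SubBytes state was 0."""
    return ciphertext_byte ^ SBOX_ZERO

# Hypothetical example: if the true last-round key byte were 0xAB, the
# observed ciphertext byte would be SBox(0) ^ 0xAB, and one XOR undoes it.
true_key_byte = 0xAB
observed = SBOX_ZERO ^ true_key_byte
assert recover_key_byte(observed) == true_key_byte
```

Repeating this for all 16 byte positions (waiting each time for the oracle to report a zero in that position) yields the full last-round key, from which the AES key schedule can be run backwards.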
And for those that know AES: if you have one round key, you have the whole key. 303 00:28:56,294 --> 00:29:00,654 So you can get the original key, you can go backwards. So it sounds 304 00:29:00,654 --> 00:29:05,988 complicated, but it's actually a very fast attack when you see it running. So here is 305 00:29:05,988 --> 00:29:11,473 a demo of this attack, and as you can see, within a couple of seconds and maybe 306 00:29:11,473 --> 00:29:16,342 five hundred twenty invocations of AES, we get the full key. That's 307 00:29:16,342 --> 00:29:21,401 actually quite impressive, especially because, yeah, one of the 308 00:29:21,401 --> 00:29:26,268 points of AES-NI in essence is that you don't put anything in memory, but it is the 309 00:29:26,268 --> 00:29:33,062 interaction with SGX which kind of allows you to put stuff into 310 00:29:33,062 --> 00:29:41,372 memory. So I want to wrap up here. Um, we have found various other attacks. Yeah. 311 00:29:41,372 --> 00:29:47,838 So, um, both in research code and in production code, such as the Intel SDK and 312 00:29:47,838 --> 00:29:52,708 the Microsoft SDK. And they basically go across the whole range of 313 00:29:52,708 --> 00:29:57,700 vulnerabilities that we have often already seen from user-to-kernel research. But there 314 00:29:57,700 --> 00:30:02,680 are also some, uh, some interesting new kinds of vulnerabilities due to 315 00:30:02,680 --> 00:30:08,240 some of the aspects we explained. There was also a problem with ocall returns, 316 00:30:08,240 --> 00:30:13,770 when the enclave calls into untrusted code; that is used when you want to, for 317 00:30:13,770 --> 00:30:18,740 instance, emulate system calls and so on. And if you return some kind of 318 00:30:18,740 --> 00:30:24,839 wrong result there, you could again go out of bounds. And these were actually 319 00:30:24,839 --> 00:30:30,697 quite, quite widespread.
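The ocall-return problem mentioned above can be sketched as follows. This is a hypothetical model, not code from any real SDK: the enclave emulates read() by asking untrusted code to fill a buffer and report how many bytes it wrote, and it must not trust that host-reported length.

```python
# Sketch of an unsanitized ocall return value: a malicious OS claims it
# wrote more bytes than the buffer holds, so later copies based on that
# length would run out of bounds.  All names and sizes are illustrative.

BUF_SIZE = 64

def untrusted_read_ocall(want: int) -> tuple[bytes, int]:
    """Malicious host: returns a short buffer but lies about the length."""
    return b"A" * 16, 4096   # claims 4096 bytes were written

def enclave_read() -> bytes:
    data, claimed_len = untrusted_read_ocall(BUF_SIZE)
    # Sanitization step: clamp/reject host-controlled return values
    # before using them for any indexing or copying.
    if claimed_len > BUF_SIZE or claimed_len > len(data):
        raise ValueError("ocall returned an out-of-range length")
    return data[:claimed_len]

try:
    enclave_read()
    caught = False
except ValueError:
    caught = True
assert caught   # the bogus length is rejected, not used
```

Without that check, the enclave would happily treat 4096 as a valid byte count, which is exactly the out-of-bounds pattern described in the talk.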
And then finally, we also found some issues with padding, 320 00:30:30,697 --> 00:30:36,115 with leakage in the padding. I don't want to go into details. I think we have, uh, 321 00:30:36,115 --> 00:30:40,880 learned a lesson here that we also know from the real world, and that is: 322 00:30:40,880 --> 00:30:47,105 it's important to wash your hands. So it's also important to sanitize state, to 323 00:30:47,105 --> 00:30:54,213 check pointers, and so on, no? So that is kind of one of the take-away messages: 324 00:30:54,213 --> 00:30:58,585 really, to build an enclave securely, yes, you need to fix all the 325 00:30:58,585 --> 00:31:03,445 hardware issues, but you also need to write safe code. And for enclaves, that 326 00:31:03,445 --> 00:31:09,674 means you have to do proper API and ABI sanitization. And that's quite a difficult 327 00:31:09,674 --> 00:31:15,718 task, actually, as we've seen, I think, in this presentation. There's quite a 328 00:31:15,718 --> 00:31:21,066 large attack surface due to the attack model, especially of Intel SGX, where 329 00:31:21,066 --> 00:31:25,781 you can interrupt after every instruction and so on. And I think, from a research 330 00:31:25,781 --> 00:31:31,888 perspective, there's really a need for a more principled approach than just continuing to find bugs, if you 331 00:31:31,888 --> 00:31:38,010 want. Maybe we can learn something from the user-to-kernel analogy which I 332 00:31:38,010 --> 00:31:43,734 invoked, I think, a couple of times, so we can learn what an enclave 333 00:31:43,734 --> 00:31:48,650 should do, uh, from what we know about what a kernel should do. But there 334 00:31:48,650 --> 00:31:54,239 are quite important differences also that need to be taken into account. So I think, as 335 00:31:54,239 --> 00:31:59,670 we said, all our code is open source.
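The pointer-sanitization lesson above can be made concrete with a small sketch. The addresses and the enclave range here are invented for illustration, and a real enclave would perform this check in C inside the trusted runtime; the shape of the check is what matters.

```python
# Sketch: before dereferencing an attacker-supplied pointer, an enclave
# must verify that the whole [ptr, ptr + size) range lies outside its
# own memory, with overflow-safe arithmetic.  Values are hypothetical.

ENCLAVE_BASE = 0x7F0000000000
ENCLAVE_SIZE = 0x2000000          # pretend 32 MiB enclave
ADDR_MAX     = 2 ** 64            # 64-bit address space

def is_outside_enclave(ptr: int, size: int) -> bool:
    end = ptr + size
    if end > ADDR_MAX:            # reject integer wrap-around
        return False
    # Safe only if the buffer ends before the enclave or starts after it.
    return end <= ENCLAVE_BASE or ptr >= ENCLAVE_BASE + ENCLAVE_SIZE

assert is_outside_enclave(0x1000, 0x100)                   # plain user buffer
assert not is_outside_enclave(ENCLAVE_BASE + 0x10, 0x100)  # inside enclave
assert not is_outside_enclave(ENCLAVE_BASE - 0x10, 0x100)  # straddles the edge
assert not is_outside_enclave(ADDR_MAX - 8, 0x100)         # wraps around
```

Forgetting the straddle case or the wrap-around case is exactly the kind of interface-sanitization bug the talk is about.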
So you can find that on the GitHub links 336 00:31:59,670 --> 00:32:07,016 below, and you can, of course, also ask questions after you have watched this 337 00:32:07,016 --> 00:32:15,077 talk. So thank you very much. Herald: Hello, so we're back again, here for the questions. Hello, 338 00:32:15,077 --> 00:32:21,680 good to see you live. Um, we have no questions yet, so you can put your questions in the 339 00:32:21,680 --> 00:32:28,200 chat below if you have any. And in the meantime, let me open this 340 00:32:28,200 --> 00:32:36,751 up, so I'll ask you some questions. How did you come upon this topic, and how did you 341 00:32:36,751 --> 00:32:43,484 meet? Jo: Uh, well, that's actually interesting. I think this has been 342 00:32:43,484 --> 00:32:50,158 building up over the years. Um, so I think some of the 343 00:32:50,158 --> 00:32:56,691 vulnerabilities from our initial paper I actually started to 344 00:32:56,691 --> 00:33:01,763 collect in my master's thesis, and we didn't really see the big picture until I 345 00:33:01,763 --> 00:33:06,774 met David and his colleagues from Birmingham at an event in London, a nice 346 00:33:06,774 --> 00:33:11,326 conference. And then we started to collaborate on this, and we went to look at 347 00:33:11,326 --> 00:33:14,955 it a bit more systematically. So we started with this whole list of vulnerabilities, 348 00:33:14,955 --> 00:33:19,880 and then, with David, we kind of made it into a more systematic analysis. 349 00:33:19,880 --> 00:33:26,362 And that was sort of a Pandora's box, I dare to say: from that moment on, we saw the 350 00:33:26,362 --> 00:33:32,003 same kinds of errors being repeated. And then also Fritz, who recently joined 351 00:33:32,003 --> 00:33:36,237 our team in Leuven, started working together with us on the 352 00:33:36,237 --> 00:33:40,520 low-level CPU state, and that's a Pandora's box in itself.
I would say, 353 00:33:40,520 --> 00:33:46,506 especially, one of the lessons, as we said, is that Intel SGX is extremely complex, 354 00:33:46,506 --> 00:33:51,233 and it turns out that almost all of that complexity, I would say, can 355 00:33:51,233 --> 00:33:55,904 potentially be abused by adversaries. So it's more like a fractal within a fractal, 356 00:33:55,904 --> 00:34:01,831 where you're opening a box and you're getting more and more questions out of 357 00:34:01,831 --> 00:34:08,731 it, in a way, I think. Yes, I think it's fair to say this research is not the 358 00:34:08,731 --> 00:34:13,573 final answer to this, but it's an attempt to give a systematic way of 359 00:34:13,573 --> 00:34:19,068 looking at what will probably be a never-ending stream of findings. Herald: So there is a 360 00:34:19,068 --> 00:34:26,034 question from the Internet: are there any other circumstances where the CPU 361 00:34:26,034 --> 00:34:33,188 is writing its registers into memory, or is this exclusive 362 00:34:33,188 --> 00:34:44,160 to SGX? Jo: So, I'm not sure I fully understand the question, but, well, I think 363 00:34:44,160 --> 00:34:49,280 the point is that this attack, of course, depends on 364 00:34:50,000 --> 00:34:54,720 having a memory disclosure about the register content, and we abuse the interrupt 365 00:34:54,720 --> 00:34:58,960 mechanism to kind of forcibly write the register content into memory. 366 00:35:00,000 --> 00:35:05,040 So that part is definitely SGX-specific, um. However, I would say one of the 367 00:35:05,040 --> 00:35:08,960 lessons from the past five years of research is that often these things 368 00:35:08,960 --> 00:35:13,200 generalize beyond SGX, and at least the general concept, let's say the 369 00:35:13,200 --> 00:35:18,880 insight, that CPU registers end up in memory one way or another, sooner or later, does.
370 00:35:18,880 --> 00:35:23,040 I think that also applies to operating systems: if you somehow can force an 371 00:35:23,040 --> 00:35:26,080 operating system to context-switch between applications, then it also has to 372 00:35:27,200 --> 00:35:32,160 store registers temporarily in memory. So if you would have something similar to what we 373 00:35:32,160 --> 00:35:37,200 have here in an operating system kernel, you could potentially mount a similar attack. 374 00:35:37,760 --> 00:35:43,680 But maybe David wants to say something about operating systems there as well. David: No, 375 00:35:43,680 --> 00:35:48,240 no, not really. I think, like, one thing that helps with SGX is that you have 376 00:35:48,240 --> 00:35:53,200 very precise control, as Jo explained, with the interrupts and stuff, because 377 00:35:53,200 --> 00:35:58,080 you are root outside the enclave. So you can single-step 378 00:35:58,080 --> 00:36:03,280 essentially the whole enclave, whereas, um, interrupting the operating 379 00:36:03,280 --> 00:36:08,320 system repeatedly at exactly the point you want, or some other process, 380 00:36:09,120 --> 00:36:13,760 tends to be probably harder, just by design. But of course, on a context 381 00:36:13,760 --> 00:36:19,360 switch it has to save its register set somewhere, and then it will end up 382 00:36:19,360 --> 00:36:25,840 in memory, in some situations, probably not as controlled as it is for 383 00:36:25,840 --> 00:36:34,480 SGX. Herald: So there is the question: what about other CPU architectures, other than 384 00:36:34,480 --> 00:36:41,840 Intel, did you test those? Jo: So maybe I can go into this. Well, Intel SGX, 385 00:36:41,840 --> 00:36:48,160 that's the largest one, with the largest software base and the most runtimes, 386 00:36:48,160 --> 00:36:53,440 so that is also what we could look at, right?
But there are, of course, some others. We 387 00:36:53,440 --> 00:37:01,040 have our own TEE that we developed some years ago; it's called Sancus. And 388 00:37:01,040 --> 00:37:05,440 of course, for this there are similar issues, right? So you always need the 389 00:37:05,440 --> 00:37:14,880 software layer to interact, to enter into the enclave. And I think 390 00:37:14,880 --> 00:37:20,880 David, in earlier work, also found issues in other TEEs. So it's not just Intel 391 00:37:20,880 --> 00:37:27,120 and Intel-related projects that mess up there, of course. But what we 392 00:37:27,120 --> 00:37:34,000 definitely found is that it's easier to think of all the corner cases for simpler designs, 393 00:37:34,000 --> 00:37:38,080 like RISC-V or simpler RISC designs, than for this complex Intel SGX 394 00:37:39,360 --> 00:37:43,840 architecture, right? So right now there is not that much besides Intel SGX, 395 00:37:43,840 --> 00:37:48,880 so they have the advantage and the disadvantage of being the first widely 396 00:37:48,880 --> 00:37:56,000 deployed, let's say. And, um, but I think as soon as others start to grow 397 00:37:56,000 --> 00:38:00,960 and simpler designs start to be more common, I think we will see that 398 00:38:00,960 --> 00:38:05,649 it's easier to fix all these edge cases for simpler designs. Herald: OK, so what is a 399 00:38:05,649 --> 00:38:18,966 reasonable alternative to a TEE? Jo: Do you want to take that, or should 400 00:38:18,966 --> 00:38:27,215 I? Uh, well, we can probably both give our perspectives. So I 401 00:38:27,215 --> 00:38:31,842 think, well, the question to start with, of course, is: do we need an 402 00:38:31,842 --> 00:38:34,992 alternative, or do we need to find more systematic ways to sanitize 403 00:38:34,992 --> 00:38:39,212 these shielding runtimes?
That's, I think, one part of the answer here: we don't have to 404 00:38:39,212 --> 00:38:43,240 necessarily throw TEEs away because we have problems with them; we can also look 405 00:38:43,240 --> 00:38:46,990 at how to solve those problems. But apart from that, there is some exciting 406 00:38:46,990 --> 00:38:52,124 research. OK, maybe David also wants to say a bit more about, for instance, 407 00:38:52,124 --> 00:38:57,305 capabilities, but that's in a way not so different from TEEs, necessarily. 408 00:38:57,305 --> 00:39:00,864 But when you have hardware support for capabilities, like the CHERI 409 00:39:00,864 --> 00:39:04,646 research computer, which essentially associates metadata with a pointer, 410 00:39:04,646 --> 00:39:09,692 metadata like permission checks, then, at least for some classes of the 411 00:39:09,692 --> 00:39:14,837 issues we talked about, the pointer-poisoning attacks, you could natively 412 00:39:14,837 --> 00:39:20,651 catch those with such hardware support. But that's a very high-level idea. Maybe David wants 413 00:39:20,651 --> 00:39:26,080 to say something. David: Yeah. So I think, like, the alternative to a TEE: whenever you 414 00:39:26,080 --> 00:39:31,640 want to partition your system into parts, which is, I think, a good idea, and 415 00:39:31,640 --> 00:39:37,523 everybody is now doing that, also in how we build online services and stuff, 416 00:39:37,523 --> 00:39:44,276 then TEEs are one system that we have become quite used to, from mobile 417 00:39:44,276 --> 00:39:48,976 phones, or maybe even from something like a banking card or so, 418 00:39:48,976 --> 00:39:52,729 which is sort of like a protected environment for a very simple job. But the 419 00:39:52,729 --> 00:39:57,501 problem then starts when you throw a lot of functionality into the TEE. As we saw, 420 00:39:57,501 --> 00:40:03,318 the trusted code base becomes more and more complex, and you get traditional bugs.
421 00:40:03,318 --> 00:40:08,059 So I'm saying, like, yeah, it's really a question whether you need an alternative or a 422 00:40:08,059 --> 00:40:11,794 better way of approaching how you partition software. And as Jo mentioned, 423 00:40:11,794 --> 00:40:16,406 there are some other things you can do architecturally, so you can change, or 424 00:40:16,406 --> 00:40:21,386 extend, the way we build architectures with capabilities and 425 00:40:21,386 --> 00:40:25,955 then start to isolate components, for instance within one software project; say, 426 00:40:25,955 --> 00:40:30,299 in your web server you isolate the stack or something like this. And also, 427 00:40:30,299 --> 00:40:37,526 thanks to the people noticing the secret password here. It's obviously only there for 428 00:40:37,526 --> 00:40:45,854 decoration purposes, to give people something to watch. Herald: So, but it's not 429 00:40:45,854 --> 00:40:54,612 fundamentally broken, is it? Jo: Yeah, not SGX. I mean, there are so many of them, I 430 00:40:54,612 --> 00:41:02,261 think, like, you cannot say "fundamentally broken" for... Herald: But the question I had was 431 00:41:02,261 --> 00:41:08,342 specifically about SGX at that point, because Signal uses it, the MobileCoin 432 00:41:08,342 --> 00:41:15,680 cryptocurrency uses it, and so on and so forth. Is that fundamentally broken, or 433 00:41:15,680 --> 00:41:24,428 would you rather say...? Jo: So, I guess it depends on what you call fundamental, right? 434 00:41:24,428 --> 00:41:29,915 So in the past, we have also worked on what I would call full 435 00:41:29,915 --> 00:41:35,107 breaches of enclaves, but they have been fixed, and it's actually quite a beautiful 436 00:41:35,107 --> 00:41:40,909 instance of how research can have short-term industry impact.
So you find a 437 00:41:40,909 --> 00:41:45,924 vulnerability, then the vendor has to devise a fix; those are often not 438 00:41:45,924 --> 00:41:50,014 immediately available, and there are often workarounds to the problem. And then, later, 439 00:41:50,014 --> 00:41:54,432 because we are, of course, talking about hardware, you need new 440 00:41:54,432 --> 00:41:58,668 processors to really get a fundamental fix for the problem, and until then you have 441 00:41:58,668 --> 00:42:04,655 temporary workarounds. So I would say, for instance, for a company like Signal using it: 442 00:42:04,655 --> 00:42:10,059 it does not give you security by default. You need to think about 443 00:42:10,059 --> 00:42:14,108 the software, that's what we focused on in this talk, and you also need to think about 444 00:42:14,108 --> 00:42:20,394 all of the hardware and microcode patches on the processors, to take care of all the 445 00:42:20,394 --> 00:42:26,470 known vulnerabilities. And then, of course, the question always remains: are 446 00:42:26,470 --> 00:42:30,824 there vulnerabilities that we don't know of yet, as with any secure system, I guess. But 447 00:42:30,824 --> 00:42:36,676 maybe David also wants to say something about some of his latest work there; 448 00:42:36,676 --> 00:42:42,504 that's a bit interesting. David: Yeah. So I think my answer to this 449 00:42:42,504 --> 00:42:48,083 question would be: it depends on your threat model, really. So some people 450 00:42:48,083 --> 00:42:54,045 use SGX as a way to kind of, like, remove the trust in the cloud provider. So you 451 00:42:54,045 --> 00:42:59,511 say, like Signal does:
I move all this functionality that is hosted 452 00:42:59,511 --> 00:43:04,655 maybe on some cloud provider into an enclave, and then I don't have to 453 00:43:04,655 --> 00:43:10,673 trust the cloud provider anymore, because there's also some form of protection 454 00:43:10,673 --> 00:43:15,760 against physical access. But recently we actually published another attack, 455 00:43:15,760 --> 00:43:22,130 which shows that if you have hardware access to an SGX processor, you can inject 456 00:43:22,130 --> 00:43:28,138 faults into the processor by playing with the undervolting interface, with 457 00:43:28,138 --> 00:43:33,155 hardware. So you really solder to the main board, to a couple 458 00:43:33,155 --> 00:43:38,442 of wires on the bus to the voltage regulator, and then you can do voltage 459 00:43:38,442 --> 00:43:43,824 glitching, as some people might know from other embedded contexts. And that way 460 00:43:43,824 --> 00:43:48,680 you can then flip bits, essentially, in the enclave, and, of course, 461 00:43:48,680 --> 00:43:54,589 um, kind of, like, inject all kinds of evil effects that can then be used further 462 00:43:54,589 --> 00:43:59,612 to get keys out, or maybe hijack control flow or something. So it depends on your 463 00:43:59,612 --> 00:44:04,802 threat model. I wouldn't say that SGX is completely pointless. It's, I think, 464 00:44:04,802 --> 00:44:10,200 better than not having it at all. But you definitely cannot have, like, 465 00:44:10,200 --> 00:44:15,314 complete protection against somebody who has physical access to your server. Herald: So I 466 00:44:15,314 --> 00:44:20,880 have to close this talk, it's a bummer, and I would have loved to ask all the questions that 467 00:44:20,880 --> 00:44:26,100 flew in. But one very, very fast answer, please: what is that with the password in 468 00:44:26,100 --> 00:44:30,627 your background? David: I explained it. It's, of course, just a joke.
So I'll 469 00:44:30,627 --> 00:44:35,609 say it again, because some people seem to have taken it seriously: it was such an 470 00:44:35,609 --> 00:44:40,438 empty whiteboard, so I put a password there. Unfortunately, it's not fully 471 00:44:40,438 --> 00:44:46,234 visible on the screen. Herald: OK. So, thank you, Jo van Bulck, Fritz Alder and David 472 00:44:46,234 --> 00:45:00,100 Oswald, for that nice talk. And now we make the transition to 473 00:45:00,100 --> 00:45:03,840 the next show. 474 00:45:03,840 --> 00:45:34,000 Subtitles created by c3subtitles.de in the year 2021. Join, and help us!