Aug 16, 2018
Ron Keidar

Game of Zones

How it all began

 

I still have an old Apple IIe, fully functional. In those days, before the Internet, SW ran “bare-metal”: it could access any HW register, ROM and RAM. Programs even overwrote their own code, either as a bug or as a weird feature.

Later, with the advent of multi-process operating systems, virtual memory was introduced to prevent one process from overwriting another, and a kernel was put in charge of mapping physical memory to the various processes through a Memory Management Unit (MMU).

 

And now

 

Fast-forward to the age of Systems on a Chip (SoC) and Networks on a Chip (NoC): many memory masters and memory slaves now reside on the same chip, sharing the same memory. At this stage, memory sharing and isolation must be much broader, typically adding extra hierarchies of memory separation and sharing across multiple CPUs, GPUs, DSPs, DMAs and peripherals. Now an MMU is needed on every master; a master without an MMU could cross domains and violate the entire separation scheme. And while each kernel manages its local processes, a system-wide manager needs to form coherent system domains.
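To make the idea of system-wide domains concrete, here is a minimal C sketch; the domain names, masters and the policy itself are illustrative assumptions, not any particular SoC. A hypothetical system-wide manager would program the per-master MMUs/SMMUs from a table like this one.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative memory domains on a shared-memory SoC. */
enum domain {
    DOMAIN_KERNEL  = 1u << 0,   /* OS kernel memory                     */
    DOMAIN_TEE     = 1u << 1,   /* trusted execution environment        */
    DOMAIN_CONTENT = 1u << 2,   /* content protection (decrypted video) */
    DOMAIN_SHARED  = 1u << 3,   /* general shared buffers               */
};

enum master { MASTER_CPU, MASTER_GPU, MASTER_DSP, MASTER_DMA, MASTER_COUNT };

/* System-wide policy: which domains each bus master is granted. */
static const uint32_t master_grants[MASTER_COUNT] = {
    [MASTER_CPU] = DOMAIN_KERNEL | DOMAIN_TEE | DOMAIN_SHARED,
    [MASTER_GPU] = DOMAIN_CONTENT | DOMAIN_SHARED,
    [MASTER_DSP] = DOMAIN_CONTENT | DOMAIN_SHARED,
    [MASTER_DMA] = DOMAIN_SHARED,
};

/* A master without an MMU cannot be bound by such a table at all,
 * which is exactly why every master needs one. */
static bool access_allowed(enum master m, enum domain d)
{
    return (master_grants[m] & d) != 0;
}

int main(void)
{
    printf("GPU -> TEE domain allowed? %s\n",
           access_allowed(MASTER_GPU, DOMAIN_TEE) ? "yes" : "no");
    return 0;
}
```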

 

 

A typical solution for this is the Trusted Execution Environment (TEE)

That means the TEE needs another level of protection, above the rest. In many cases the TEE protects its storage and secure registers on the memory slave side, meaning that a special signal on the bus is required to identify TEE transactions.

 

 

Memory Protection Units (MPUs) are added on the slave side, so the TEE does not depend on every MMU in the system being properly configured.
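A rough sketch of that slave-side check, assuming a TrustZone-style “secure” attribute on each bus transaction; the region map and names are made up for illustration, not a real device map:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One MPU region as seen by the memory slave. */
struct mpu_region {
    uint64_t base;
    uint64_t size;
    bool     secure_only;   /* pass only transactions flagged as secure */
};

/* Example region map: TEE storage plus ordinary shared DRAM. */
static const struct mpu_region regions[] = {
    { 0x80000000u, 0x00100000u, true  },   /* TEE secure storage */
    { 0x80100000u, 0x0FF00000u, false },   /* shared DRAM        */
};

/* The slave enforces this regardless of how any master's MMU is set up. */
static bool mpu_check(uint64_t addr, bool secure_transaction)
{
    for (size_t i = 0; i < sizeof regions / sizeof regions[0]; i++) {
        const struct mpu_region *r = &regions[i];
        if (addr >= r->base && addr < r->base + r->size)
            return r->secure_only ? secure_transaction : true;
    }
    return false;   /* no matching region: reject the access */
}
```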

Following the TEE came the hypervisor, which elegantly supports complex use cases: isolation of virtual compute resources on the one hand, and sharing of system domains across compute engines on the other. The hypervisor achieves this with an additional address translation layer:
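As a rough illustration of that extra layer (a toy model, not real page-table code): the guest kernel’s stage-1 tables map virtual addresses to intermediate physical addresses, and the hypervisor’s stage-2 tables map those to real physical addresses.

```c
#include <stdint.h>

#define PAGE_SHIFT 12u
#define NPAGES     16u          /* toy address space: 16 pages */

/* Stage 1 is owned by the guest kernel, stage 2 by the hypervisor. */
static uint64_t stage1[NPAGES]; /* virtual page      -> intermediate physical page */
static uint64_t stage2[NPAGES]; /* intermediate page -> real physical page         */

/* Every guest access is translated twice before it reaches memory. */
static uint64_t translate(uint64_t va)
{
    uint64_t offset   = va & ((1u << PAGE_SHIFT) - 1u);
    uint64_t ipa_page = stage1[(va >> PAGE_SHIFT) % NPAGES];
    uint64_t pa_page  = stage2[ipa_page % NPAGES];
    return (pa_page << PAGE_SHIFT) | offset;
}
```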

 

 

So now we have many memory zones. Some belong to different cores and some are shared in some way across various cores (e.g. the Content Protection Zone holds decrypted video, shared between crypto engines, video decoders, audio decoders, the display core, and potentially the DSP and GPU).

 

So, what’s wrong?

 

In the setup described above, the attack surface is huge: many compute engines, DMA machines, external buses, debug ports, etc. are all connected to a shared address space. Furthermore, all the MMUs’ and MPUs’ translation tables need to be maintained coherently across memory allocations, sustain state through local and global hibernations and reset events, and support kernels, the hypervisor and the TEE – without errors! It usually takes many engineers covering many disciplines to get it right.

Regarding testing, it is hard to get good coverage of all use cases at the system level, and the test team rarely verifies that all invalid access attempts actually fail. Such penetration-test cases are often not part of the system and regression tests.
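A negative test of that kind could look like the sketch below; the two helpers are hypothetical test hooks, not a real framework, and the address is an example:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks: start a DMA from a non-secure master, and ask
 * whether the bus/MPU reported an abort for the last transaction. */
bool dma_write(uint64_t dst, const void *src, uint32_t len);
bool bus_abort_seen(void);

/* The point of a penetration test: the access MUST fail; silently
 * succeeding is exactly the bug we are looking for. */
static void test_nonsecure_dma_into_tee_region(void)
{
    uint8_t junk[64] = { 0 };
    (void)dma_write(0x80000000u /* example TEE region base */, junk, sizeof junk);
    assert(bus_abort_seen());
}
```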

All of the above are reasons why the security of a shared address space is hard to evaluate and assure!

 

Got it, it is complex to protect memory - but there is more

 

Last year we heard for the first time about a completely different set of attacks – side-channel attacks on the cache – or, by their theatrical new names: Spectre and Meltdown.

These attacks allow one process to attack the kernel or another process by abusing the traces that some CPU optimizations leave in the cache. There is plenty of material about Spectre and Meltdown on the internet, so let’s keep our focus here. To summarize, the key points of this article so far are:

  • Any sort of shared memory with Memory Access Control is a potential security risk
    • MMUs, MPUs, caches and DMAs are all cores that add security risk
  • Complexity is a security risk in itself
    • Many compute cores, debug modes and bus interfaces
    • Many hibernation, sleep and reset modes
    • Dependency on many development teams
    • Tapeout deadlines, multiple cores

 

Keep it Simple 

 

Are you looking for a silver bullet to address all the above - and keep your design secure and simple at the same time?

An Embedded (!!) and Isolated (!!) Root-of-Trust Engine is the answer!

  • Embedded – because an external chip (e.g. a TPM) is great at protecting itself, but cannot protect the SoC from within, nor share keys, storage keys, internal state, debug, internal control lines, etc.
  • Isolated – with a completely separate address space, independent of all the other cores, MMUs, MPUs and the hypervisor, and not exposed to any attacks on the main address space
  • Root-of-Trust Engine – all keys are sourced from HW, and all operations on the keys are done in HW, isolated from the rest of the system
    • The rest of the system can only request operations with the keys; it cannot actually access them (see the sketch after this list)
    • Only the owner of a key can use it
    • A key can only be used for the operations allowed by its policy
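A minimal host-side sketch of what such a key-handle interface could look like; the names and signatures are assumptions for illustration, not the actual Inside Secure API. The key point is that only an opaque handle and a policy ever cross into the shared address space.

```c
#include <stddef.h>
#include <stdint.h>

/* The host never sees key material, only an opaque handle. */
typedef uint32_t rot_key_handle;

/* Usage policy bound to a key when it is created. */
enum rot_key_policy {
    ROT_USE_ENCRYPT = 1u << 0,
    ROT_USE_DECRYPT = 1u << 1,
    ROT_USE_SIGN    = 1u << 2,
};

/* Derive a key inside the engine from the hardware root; only a handle,
 * bound to the calling owner and to a policy, is returned. */
int rot_derive_key(const char *label, uint32_t policy, rot_key_handle *out);

/* Request an operation: the engine checks owner and policy, runs the
 * cipher internally, and returns only the result. */
int rot_aes_encrypt(rot_key_handle key,
                    const uint8_t *in, uint8_t *out, size_t len);
```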

 

Inside Secure Root-of-Trust and Programmable Root-of-Trust engines

 

Inside Secure offers two mature and certified solutions that address the issues above while keeping your design simple and low-cost, and allowing you to bring it to market fast. The Root-of-Trust engine offers secure boot and many other security services (see the article “Embedded Security Step by Step”). In addition, it can offer FIPS 140-2 certification, fault-injection protection and side-channel protection.

The Programmable Root-of-Trust engine incorporates a RISC-V core inside the Root-of-Trust. That allows the customer to run code inside the secure perimeter, away from the shared memory space and all the risks discussed above. System architects can also extend the core’s API by adding more commands.
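A minimal sketch of how firmware on that embedded RISC-V core might expose a new command; the command ID, mailbox layout and handler names are assumptions for illustration, not the actual product interface:

```c
#include <stdint.h>

#define CMD_CUSTOM_ATTEST 0x80u   /* vendor-defined command ID (example) */

/* Simple mailbox message shared with the host side. */
struct mailbox_msg {
    uint32_t cmd;
    uint32_t len;
    uint8_t  payload[256];
};

/* Runs entirely inside the secure perimeter: it can use keys and secure
 * RAM that the main SoC address space cannot reach. */
static int handle_custom_attest(struct mailbox_msg *msg)
{
    /* ... build and sign an attestation report into msg->payload ... */
    msg->len = 0;
    return 0;
}

/* Dispatcher the system architect extends with new command handlers. */
static int dispatch(struct mailbox_msg *msg)
{
    switch (msg->cmd) {
    case CMD_CUSTOM_ATTEST:
        return handle_custom_attest(msg);
    default:
        return -1;   /* unknown command */
    }
}
```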

 

 

One example could be a TLS client to enable remote management from the cloud; another could be programmability to support Evita-Full requirements for automotive; many customers demand a SW-upgradeable core; and so on ...

Hint: in the diagram above we also show a key provisioning system, which is a separate solution from Inside Secure – and there is a lot more. Stay tuned!

 

 

Summary: Security is a SoC Commitment! 

 

Inside Secure is your trusted partner to advise and assist on implementing a security strategy for your design.

  • HSM
  • Secure element
  • Secure processor
  • Enclave
  • Vault IP
  • EIP-130
  • EIP-133
  • TrustZone