The NSA Information Sheet on Memory Safety: Main Things to Consider

On November 10, 2022, the National Security Agency published an information sheet on memory safety. Its aim is to raise awareness of memory safety issues and give developers actionable tips on how to address them. Since cybersecurity threats grow more pressing every year, this CSI guide is worth your attention if you want to safeguard your software applications from malicious actors.


What is memory safety and why is it so important?

Memory safety is a property of certain programming languages that prevents developers from unintentionally introducing memory-related bugs. It comes from the way these languages manage memory, which is why they are typically considered safe. Examples of safe languages named by the NSA include C#, Go, Ruby, Java, Swift, and Rust.

So what makes them safe? These languages manage memory automatically, which means developers do not have to write extra code to add memory protections. Safe languages come with built-in memory management and therefore provide solid protection out of the box.

Unsafe languages, on the other hand, give developers a great deal of freedom in memory management. They rely on software engineers to take care of memory safety themselves and thus leave room for unintentional mistakes. Examples of unsafe languages are C and C++.
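
To make this concrete, here is a minimal C++ sketch (the function and the input string are invented for illustration): the compiler happily accepts code that overruns a fixed-size buffer and relies on the caller to release memory by hand.

  #include <cstring>
  #include <iostream>

  // A hypothetical helper written the "unsafe" way: the language trusts
  // the programmer to get buffer sizes and lifetimes right.
  char* copy_name(const char* input) {
      char* buffer = new char[8];   // fixed-size allocation
      std::strcpy(buffer, input);   // no length check: writes past the buffer if input is longer than 7 characters
      return buffer;                // the caller must remember to delete[] it
  }

  int main() {
      char* name = copy_name("a string that is clearly longer than eight bytes");
      std::cout << name << "\n";
      // Two problems the compiler never complained about:
      // 1. copy_name already wrote past the end of its buffer (a buffer overflow);
      // 2. if we forget the delete[] below, the memory simply leaks.
      delete[] name;
      return 0;
  }

A memory-safe language would size the string automatically and reclaim it once it is no longer used, so neither mistake can happen by accident.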

Note, though, that even safe languages do not guarantee 100% protection from errors, as an application sometimes has to perform unsafe operations to accomplish a specific task. We’ll talk more about such cases below, but for now, remember that safe languages are preferable. The NSA does not expect you to rewrite whole legacy apps written in unsafe languages – you’ll just have to pay extra attention to their security.

The dangers of poor memory management

Even though memory safety may not sound like a big deal, it is a major issue in the world of cybersecurity. Microsoft, for example, reported that between 2006 and 2018, about 70% of its vulnerabilities were caused by memory safety issues. Google stated that roughly the same share of vulnerabilities found in Chrome had the same root cause – memory safety. Developers therefore need to be aware of both memory safety bugs and the problems that poor memory management can bring.

The most common memory safety bugs

When memory is managed poorly, multiple security and memory safety issues may arise, and it’s important to know about them in advance. Otherwise, it will be difficult to track down the source of a problem and prevent or mitigate it.

These memory safety bugs include the following (two of them are shown in the code sketch after the list):

  • Buffer overflow: happens when the amount of data written exceeds the storage capacity of the memory buffer;
  • Poorly managed memory allocation: may lead to unintentional memory release;
  • Logic errors: the program tries to use memory that has already been freed (use-after-free);
  • Race condition: happens when two threads try to access a shared variable at the same time;
  • Out-of-bounds reads: the software reads memory before or after the valid contents of a buffer.
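
For illustration, here is a deliberately broken C++ sketch of two of the bugs above (all names are made up): it keeps using a buffer after freeing it and reads past the end of an array. Both compile without errors, which is exactly why they are so easy to ship.

  #include <iostream>
  #include <vector>

  int main() {
      // Use-after-free: the pointer still "works" after delete, but the memory
      // no longer belongs to us, so reading it is undefined behavior.
      int* settings = new int[4]{1, 2, 3, 4};
      delete[] settings;
      std::cout << settings[0] << "\n";   // logic error: memory already freed

      // Out-of-bounds read: the index goes past the last valid element,
      // so the program reads whatever happens to sit after the buffer.
      std::vector<int> scores = {10, 20, 30};
      std::cout << scores[5] << "\n";     // valid indexes are 0..2

      return 0;
  }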

Of course, these are not all the bugs that improper memory management can cause, but they are the most common ones. As for the consequences:

  • Threat actors can feed unusual inputs into the software;
  • Threat actors can gain access to sensitive information;
  • Threat actors can execute unauthorized code;
  • Threat actors can crash the software, for example by using the “fuzzing” technique.

To sum up, memory management issues can let threat actors access software and make it behave as they wish. To prevent this, you need to be aware of preventive measures, including the ones recommended by the NSA.

Recommendations on secure and proper memory management 

Below we’ll look at several best practices for secure memory management, including the ones recommended by the NSA. Keep in mind, though, that memory safety is not the only source of potential vulnerabilities, and you still need all-around security testing to keep your software well protected.

The use of safe languages and their fine-tuning

Remember we said that using safe languages does not guarantee 100% security? This is because the software sometimes has to perform unsafe memory management tasks. To address that, pay close attention to such unsafe operations, as they may be the primary source of vulnerabilities. That way, if something happens, you will at least know where to look first.

As for existing software written in an unsafe language, you don’t have to rewrite it in a safe one. Just make sure your developers understand the code and the possible mistakes that can be made. An understanding of memory safety basics will help a lot in avoiding bugs.
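
As a hedged sketch of what that understanding can look like in practice for C++ code, the standard library already offers types that take over most of the lifetime and bounds bookkeeping; the struct and function names below are invented for the example.

  #include <memory>
  #include <string>
  #include <vector>

  // Standard types instead of raw new/delete and fixed char buffers:
  // they release memory automatically and keep track of their own size.
  struct UserRecord {
      std::string name;          // grows as needed, no manual buffer sizing
      std::vector<int> scores;   // knows its own length
  };

  std::unique_ptr<UserRecord> makeRecord(const std::string& name) {
      auto record = std::make_unique<UserRecord>();   // freed automatically when its owner goes away
      record->name = name;
      return record;                                  // ownership is transferred, not a copied raw pointer
  }

  int main() {
      auto record = makeRecord("Alice");
      record->scores.push_back(42);
      // record->scores.at(10) would throw an exception instead of silently
      // reading past the end of the vector.
      return 0;
  }   // no delete needed: unique_ptr and the containers clean up here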

Application security testing

Application security testing is a set of practices aimed at detecting vulnerabilities in an application and helping developers fix them. There are various types of application security testing:

  • SAST: static application security testing checks for vulnerabilities in the app’s source code (while it’s at rest);
  • DAST: dynamic application security testing runs various types of attacks against the app while it is running;
  • Application penetration testing: the app is tested against the most recent and most powerful cyber attacks;
  • IAST: interactive application security testing simulates various scenarios of user activity and searches for known vulnerabilities during that interaction;
  • MAST: mobile application security testing, focused specifically on mobile applications;
  • SCA: software composition analysis, which analyzes the origin of the third-party libraries and components the app uses.

While all testing types are important, the NSA pays special attention to SAST and DAST. The agency names these as the testing types that can help identify memory use issues and thus make software more memory-safe regardless of the language it is written in.
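
As a small, hedged illustration of how these checks complement each other on the developer’s side, the snippet below contains an off-by-one bug; the compiler flags shown in the comment (-Wall, -Wextra, -fsanitize=address) are standard GCC/Clang options, and the file name is just an example. Compiler warnings act as a lightweight static check, while AddressSanitizer catches the faulty read when the program actually runs.

  // demo.cpp: a bug that may slip past static warnings but that a runtime checker will catch.
  // Build and run, for example, with:
  //   g++ -Wall -Wextra -fsanitize=address -g demo.cpp -o demo && ./demo
  // AddressSanitizer then reports a heap-buffer-overflow at the faulty line.
  #include <iostream>

  int main() {
      int* data = new int[3]{1, 2, 3};
      int sum = 0;
      for (int i = 0; i <= 3; ++i) {   // off-by-one: valid indexes are 0..2
          sum += data[i];              // the i == 3 read goes past the allocation
      }
      std::cout << sum << "\n";
      delete[] data;
      return 0;
  }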

Expert Opinion

When discussing memory-safe and unsafe languages, what immediately comes to my mind is Ken Thompson’s ACM Turing Award lecture “Reflections on Trusting Trust”. The moral of the paper is that you can never fully trust code that you did not create yourself. Note that this paper dates back to 1984, and even then a well-installed microbug was almost impossible to detect. Now imagine how hard it can be to detect a microbug installed with the latest hacking techniques.

What I’m trying to say is that even if your program is written in a memory-safe language, you can never be 100% sure about its security. Hence, never underestimate the risk of an attack or a vulnerability, and don’t rely solely on the programming language to handle your memory’s security.

Co-Founder at SoftTeco

Alex Kutsko

Expert Opinion

It’s interesting to observe the evolution of memory management through the evolution of programming languages. First, we had very early languages (if we can call them that – Basic, Pascal, Fortran) that were more or less safe. That was because they had very strict memory management, and about the only vulnerability you could face was going out of bounds. On the other hand, we also had Assembler back then, and that language basically consists of memory management. But everyone understood that even the simplest text form for entering a name and surname would take too much time to build in Assembler.

Then we got C and C++, which allowed much more in terms of memory management. The first rule everyone had to learn was: “make sure the number of take-offs (new) equals the number of landings (delete)”. There is a really good book by John Viega that illustrates the point that safe code is 20% logic and 80% organizing a safe space around it.
Today we have “Swift and co”, i.e. languages that are considered highly safe and that, in terms of memory management, are reminiscent of those old languages. Going outside an array’s bounds has once again become a hard thing to do.

Of course, it’s much easier to write safe code now than it used to be. The language and the system itself now take responsibility for many things, freeing developers from doing so. A modern developer is focused more on solving business tasks than on building a safe environment for the code to run in. That is what I would call evolution and progress.

CEO at SoftTeco

Alexey Shevchik

