NASA’s 10 Rules for Developing Safety-Critical Code
Good Morning! Today we’re talking about NASA's 10 Rules for Developing Safety-Critical Code and how relevant these principles remain in modern software development. We’ll also look at Google's unveiling of its powerful new Axion Arm-based chips, designed to boost cloud computing performance and efficiency, and Google's expansion of the Gemma AI family with CodeGemma and RecurrentGemma, specialized models that promise to enhance developer productivity and AI research.
NASA’s 10 Rules for Developing Safety-Critical Code
On Monday, many of us enjoyed the total solar eclipse. Here’s an actual photo of me watching it. And as expected, NASA livestreamed the entire event. But nearly two decades ago, NASA published a paper on something slightly different: software development.
Gerard J. Holzmann of the NASA/JPL Lab for Reliable Software created the Power of 10 rules in his paper titled The Power of Ten – Rules for Developing Safety Critical Code. The goal? To eliminate certain C coding practices that make code difficult to review or statically analyze. I wonder if they still hold up today (keep in mind, they were written specifically for C, though they can be generalized to any programming language).
The Rules
1. Restrict all code to very simple control flow constructs – do not use goto statements, setjmp or longjmp constructs, and direct or indirect recursion.
2. All loops must have a fixed upper bound.
3. Do not use dynamic memory allocation after initialization.
4. No function should be longer than what can be printed on a single sheet of paper (no more than about 60 lines of code per function).
5. The assertion density of the code should average a minimum of two assertions per function.
6. Data objects must be declared at the smallest possible level of scope.
7. The return value of non-void functions must be checked by each calling function, and the validity of parameters must be checked inside each function.
8. The use of the preprocessor must be limited to the inclusion of header files and simple macro definitions.
9. The use of pointers should be restricted. Specifically, no more than one level of dereferencing is allowed.
10. All code must be compiled, from the first day of development, with all compiler warnings enabled at the compiler’s most pedantic setting.
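Many of these rules translate directly to other languages. Here’s a minimal Python sketch (not from the paper; the sensor-reading scenario and all names are hypothetical) showing how a few of the language-agnostic rules might look in practice: fixed loop bounds, assertion density, smallest scope, and checked return values with validated parameters.

```python
# Toy illustration (not from the paper): a few of the language-agnostic
# rules applied in Python. The sensor-reading scenario and the names here
# are hypothetical.

MAX_READINGS = 1000  # Rule 2: every loop gets a fixed upper bound.


def average_reading(readings):
    # Rule 7: validate parameters on entry.
    assert readings is not None
    assert 0 < len(readings) <= MAX_READINGS

    total = 0.0
    for i in range(min(len(readings), MAX_READINGS)):  # bounded loop
        value = readings[i]  # Rule 6: declared at the smallest possible scope.
        assert -1000.0 <= value <= 1000.0  # Rule 5: assertions catch bad data.
        total += value

    result = total / len(readings)
    assert -1000.0 <= result <= 1000.0
    return result


# Rule 7 again: callers must check what they get back before using it.
avg = average_reading([12.5, 13.1, 12.9])
assert avg is not None
print(f"Average reading: {avg:.2f}")
```

The C-specific rules, like the limits on the preprocessor and pointer dereferencing, obviously don’t carry over, but the spirit of keeping code simple enough to review and analyze does.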
There’s more information about the 10 rules and the rationale behind them in Gerard’s 2006 paper. But are they still relevant today? I say: absolutely. Some rules may be too restrictive for your everyday software application, like rule #1 about recursion (a valuable tool for certain types of algorithms) and rule #3 about dynamic memory allocation after initialization (which is common in many applications today). But these rules remain highly relevant in safety-critical systems like aerospace, medical devices, and nuclear power plants, and their underlying principles (simplicity, reliability, and maintainability) are universally applicable. But what do you think?
Are NASA's 10 rules for developing safety-critical code still relevant today, 18 years after they were published?
Read the full research paper here or here, and read how these coding standards can be applied to JavaScript here.
Google Unveils Powerful New Axion Arm Chips for the Cloud
Google just made a big announcement: it has unveiled its first custom-designed Arm-based processors, called Axion. These new chips are built to power data centers and cloud computing workloads.
Arm-based CPUs have been gaining a lot of ground in cloud infrastructure in recent years. Amazon, Microsoft, and others have all developed their own Arm server chips. This trend is driven by Arm's ability to deliver great performance while being more power-efficient than traditional x86 processors.
The Axion processors are built using Arm's latest and most powerful Neoverse V2 technology. Google claims Axion can outperform the fastest Arm-based cloud instances available today by up to 30%. Compared to current x86 chips, Axion offers:
Up to 50% better performance
Up to 60% better energy efficiency
Google has already started running some of its own services like YouTube Ads, Spanner database, and Google Earth on Arm-based servers. Now with Axion, they plan to expand the use of their custom Arm chips across Google Cloud Platform.
Axion is designed to work seamlessly with the broader Arm software ecosystem. Google has contributed to industry standards to ensure easy integration and deployment of Arm-native applications and tools.
Companies like Elastic, Datadog, and CrowdStrike have expressed excitement about testing Axion and seeing the performance and efficiency benefits it can bring to their cloud-hosted applications.
Read More Here
Google Expands Gemma AI Family with CodeGemma and RecurrentGemma
Google has announced two new additions to its Gemma family of lightweight, state-of-the-art open AI models - CodeGemma and RecurrentGemma. These new models expand the capabilities of the Gemma platform and provide specialized tools for developers and researchers.
CodeGemma is designed to bring powerful code completion and generation capabilities to developers. It's built on the foundation of the original Gemma models, and comes in three different versions. There's a 7 billion parameter pretrained model for general code tasks, a 7 billion parameter instruction-tuned model for code chat and following instructions, and a 2 billion parameter pretrained model for fast local code completion. CodeGemma excels at intelligent code completion, generating entire code blocks, and supporting multiple programming languages like Python, JavaScript and Java.
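If you want to kick the tires, here’s a minimal sketch of local code completion with the 2B checkpoint through the Hugging Face transformers library. The model id google/codegemma-2b is the checkpoint name published on Hugging Face (access requires accepting the model license), and the prompt is just an example.

```python
# Minimal sketch: local code completion with CodeGemma via Hugging Face
# transformers. Assumes the published checkpoint "google/codegemma-2b"
# and that its license has been accepted on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the model to continue a partial function definition.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```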
The other new model is called RecurrentGemma. This one uses a distinct architecture that leverages recurrent neural networks and local attention. This allows it to be more memory efficient, using less GPU/TPU memory to generate long text samples. RecurrentGemma also achieves higher throughput, generating more tokens per second, especially for longer sequences. This makes it an attractive option for AI researchers.
Both CodeGemma and RecurrentGemma are available now on platforms like Kaggle, Hugging Face and Vertex AI. Developers and researchers are encouraged to try them out and provide feedback as Google continues expanding the powerful Gemma model family.
Read More Here
🔥 More Notes
Synopsys hopes to mitigate upstream risks in software supply chains with new SCA tool
Denodo Partners with Google Cloud on the Future of Enterprise Innovation with New Data Virtualization and Generative AI Integration
Elon Musk: AI will be smarter than any human around the end of next year
YouTube Spotlight
Predicting Eclipses: The Three-Body Problem
Nearly 3,000 years ago, ancient Babylonians began one of the longest-running science experiments in history. The goal: to predict eclipses. This singular aim has driven innovation across the history of science and mathematics, from the Saros cycle to Greek geometry to Newton’s calculus to the three-body problem. Today, eclipse prediction is a precise science; NASA scientists predict eclipses hundreds of years into the future.
Was this forwarded to you? Sign Up Here