The simpler a security design is, the better. Simple systems are easier to understand, test, and verify, which leaves fewer places for flaws to hide.
This principle supports the idea of a Trusted Computing Base (TCB): the small, trusted part of the system that is responsible for enforcing security. A smaller TCB means fewer places where something can go wrong.
By default, users or programs should not be allowed to access things unless they are explicitly given permission. This means access is denied unless it has been explicitly allowed, sometimes called a "default deny" or allow-list approach.
This is safer than trying to detect and block bad behavior, because attackers constantly change their tactics. Some modern tools, such as antivirus software, break this rule: they try to detect and block known threats instead of allowing only what is known to be safe.
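To make the idea concrete, here is a minimal Python sketch of a default-deny check. The permission table, user names, and resource names are made up purely for illustration.

```python
# Minimal sketch of a fail-safe (default-deny) access check.
# The permission table and names below are hypothetical.

ALLOWED = {
    ("alice", "report.txt"): {"read"},
    ("bob", "report.txt"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Anything not explicitly listed is denied -- the safe default.
    return action in ALLOWED.get((user, resource), set())

print(is_allowed("alice", "report.txt", "read"))   # True: explicitly granted
print(is_allowed("alice", "report.txt", "write"))  # False: not listed, so denied
```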
Every time someone or something tries to access a resource (like a file or system), the system should check whether that access is allowed under the current security rules. No shortcuts!
Even if the user was allowed last time, the system should check again. This avoids granting unauthorized access because of a stale, cached decision.
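A small Python sketch of the idea, with a hypothetical in-memory permission store: because the check runs on every access, a revoked permission takes effect immediately instead of riding on an old answer.

```python
# Minimal sketch of complete mediation: every access goes through the
# same check, even if the same caller was allowed a moment ago.

permissions = {("alice", "payroll.db"): {"read"}}   # hypothetical policy store

def read_resource(user: str, resource: str) -> str:
    # Re-check the current policy on every call; never reuse an old decision.
    if "read" not in permissions.get((user, resource), set()):
        raise PermissionError(f"{user} may not read {resource}")
    return f"contents of {resource}"

print(read_resource("alice", "payroll.db"))  # allowed right now

permissions.clear()                          # policy changes: access revoked
try:
    read_resource("alice", "payroll.db")     # checked again, so it fails
except PermissionError as e:
    print(e)
```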
Security should not depend on hiding how the system works. Instead, it should rely only on keeping keys or passwords secret.
This way, engineers, auditors, and others can check the system for errors or flaws. Hiding how something works (known as "security by obscurity") is risky, because outsiders can often figure it out and insiders already know it.
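As a rough illustration, the sketch below uses Python's standard hmac module: the algorithm (HMAC-SHA256) is public and well studied, and the security rests only on the secrecy of the key. The key value shown is a placeholder.

```python
# Open design in practice: a public, well-reviewed algorithm plus a secret key.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-randomly-generated-key"   # the only secret

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing leaks when checking the tag
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer 100 to account 42")
print(verify(b"transfer 100 to account 42", tag))   # True
print(verify(b"transfer 999 to account 42", tag))   # False: message was altered
```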
Some actions should require approval from more than one person, or more than one separate permission, before they go through.
For example, in banking, large transactions often require approval from more than one person. This adds extra safety but can also slow things down. Still, it's useful for critical operations where mistakes or misuse would be costly.
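A toy Python sketch of the banking example follows; the threshold, the number of required approvals, and the names are assumptions made purely for illustration.

```python
# Minimal sketch of separation of privilege: a large transfer only executes
# once two different approvers have signed off.

APPROVAL_THRESHOLD = 10_000   # illustrative cutoff for "large" transfers
REQUIRED_APPROVALS = 2

def execute_transfer(amount: int, approvers: set[str]) -> str:
    if amount >= APPROVAL_THRESHOLD and len(approvers) < REQUIRED_APPROVALS:
        return "rejected: needs approval from at least two different people"
    return f"transferred {amount}"

print(execute_transfer(500, {"alice"}))            # small: no second approval needed
print(execute_transfer(50_000, {"alice"}))         # rejected
print(execute_transfer(50_000, {"alice", "bob"}))  # allowed
```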
People or programs should only have the minimum permissions they need to do their jobs. If a user just needs to read a file, they shouldn't be allowed to delete or edit it. This limits the damage if something goes wrong, such as a program being infected by malware. Secure designs often split big systems into small parts, each with limited powers, to follow this rule.
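One small way this shows up in code, sketched in Python with a placeholder file name: a task that only needs to read a file opens it read-only, so even a buggy or compromised code path cannot modify the data.

```python
# Minimal sketch of least privilege: request only the access the task needs.

def count_lines(path: str) -> int:
    # "r" grants read access only -- the minimum this task requires.
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

with open("report.txt", "w", encoding="utf-8") as f:   # set up demo data
    f.write("line 1\nline 2\n")

print(count_lines("report.txt"))   # 2
# Any attempt to write through the read-only handle inside count_lines would
# raise io.UnsupportedOperation instead of silently corrupting the file.
```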
Sharing system parts (like memory or processors) between users or programs can be risky. A shared part could leak information from one user to another, or give one user a way to interfere with another.
So it's better to avoid shared mechanisms unless they are really necessary. In extreme cases, completely separate ("air-gapped") systems are used to keep things safe.
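A tiny Python sketch of the idea, with illustrative class and variable names: giving each session its own private state, instead of one shared buffer, removes a channel through which data could leak between users.

```python
# Minimal sketch of least common mechanism: no shared scratch space.

class Session:
    def __init__(self, user: str):
        self.user = user
        self._scratch: list[str] = []   # private to this session, never shared

    def note(self, item: str) -> None:
        self._scratch.append(item)

alice, bob = Session("alice"), Session("bob")
alice.note("password reset token")
print(bob._scratch)   # [] -- nothing leaks from alice's session to bob's
```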
Security should be easy and natural for users. If people find a system too hard or confusing, they’ll make mistakes or try to avoid using it. When the protection tools match how users think, security improves. This idea led to a whole field called Human Factors in Security.
Saltzer and Schroeder also mentioned two more principles. These are useful, but as the authors themselves noted, they apply only imperfectly to computer systems.
Good security means it’s too expensive or time-consuming for attackers to break in. For example, if cracking a password would take 10 years, most hackers won’t bother. However, this is harder to measure for things like insider threats or software bugs.
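A back-of-the-envelope estimate in Python shows how this kind of reasoning works; the attacker's guess rate is an assumption, since real hardware varies enormously.

```python
# Rough work-factor estimate for brute-forcing a random password.

alphabet = 95                 # printable ASCII characters
length = 10                   # password length
guesses_per_second = 1e10     # assumed attacker speed

search_space = alphabet ** length
seconds = search_space / guesses_per_second / 2   # expect success about halfway through
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years on average")          # roughly 95 years for these numbers
```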
Sometimes, instead of stopping an attack, it helps to keep good records so you know when and how something went wrong. Security logs help detect and respond to problems quickly. But relying only on logs instead of prevention can be risky, depending on the situation.
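A minimal Python sketch using the standard logging module: every denied access attempt is recorded with who, what, and when. The log file name, format, and resource names are illustrative.

```python
# Minimal sketch of compromise recording: keep an audit trail of access attempts.
import logging

logging.basicConfig(filename="security.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def guarded_read(user: str, resource: str, allowed: set[str]):
    if user not in allowed:
        logging.warning("DENIED read of %s by %s", resource, user)
        return None
    logging.info("read of %s by %s", resource, user)
    return f"contents of {resource}"

guarded_read("mallory", "payroll.db", allowed={"alice"})   # leaves an audit trail
```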
Saltzer and Schroeder’s principles are like the golden rules of secure system design. While technology has changed a lot since 1975, these ideas still help guide engineers, developers, and security professionals in building safer systems. Whether you're designing software, managing networks, or just curious about how digital security works — these principles are a great place to start.