Security without Obscurity: A Guide to Confidentiality, Authentication, and Integrity

It is also worth asking what is being authenticated: remote systems, transactions, or people. The concept in network security of knowing that the remote system is a particular program or piece of hardware is called remote attestation. This is usually attempted by hiding an encryption key in some tamper-proof part of the system, but it is vulnerable to all kinds of disclosure and side-channel attacks, especially if the owner of the remote system is the adversary.
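
To make the idea concrete, here is a minimal sketch (not a real attestation protocol) of the underlying challenge-response pattern: the verifier sends a fresh nonce and the device proves knowledge of a key baked into its tamper-resistant hardware. The key and names are invented for illustration; the text's point is that if that key can be extracted, the whole scheme collapses.

```python
# Toy illustration of the attestation idea: the verifier challenges the
# remote device with a fresh nonce, and the device answers with an HMAC
# computed using a key hidden in tamper-resistant hardware.  If an adversary
# can extract that key (disclosure or side-channel attacks), they can forge
# the response -- exactly the weakness described above.
import hmac, hashlib, os

EMBEDDED_KEY = b"key-baked-into-the-device"   # hypothetical embedded secret

def device_respond(nonce: bytes) -> bytes:
    """What the (honest) remote device computes."""
    return hmac.new(EMBEDDED_KEY, nonce, hashlib.sha256).digest()

def verifier_check(nonce: bytes, response: bytes) -> bool:
    """The verifier knows the same key and checks the response."""
    expected = hmac.new(EMBEDDED_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)                  # fresh challenge prevents replay
assert verifier_check(nonce, device_respond(nonce))
```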

The most successful example seems to be the satellite television industry, where they embed cryptographic and software secrets in an inexpensive smart card with restricted availability, and change them frequently enough that the resources required to reverse engineer each new card exceed the cost of the data it is protecting. The obvious crack is to simply remove the checking code, but then you trigger another check that looks at the code for the first check, and so on.

The sorts of non-cryptographic self-checks they ask the card to perform, such as computing a CRC checksum over certain memory locations, are similar to protections against reverse engineering, where a program computes a checksum over its own code to detect modifications to itself.
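
A minimal sketch of such a self-check, assuming the expected checksum is recorded at install time (in practice the recorded value must live outside the region being summed, or be patched in afterwards):

```python
# Minimal sketch of a self-check: compute a CRC32 over this program's own
# bytes and compare it against a value recorded when the code was installed.
# Real schemes chain such checks ("a check that checks the checker"), but the
# idea is the same: detect modification of the code itself.
import sys, zlib

EXPECTED_CRC = 0x12345678   # placeholder; recorded outside the summed region

def current_crc() -> int:
    with open(sys.argv[0], "rb") as f:     # read our own source/binary
        return zlib.crc32(f.read()) & 0xFFFFFFFF

if current_crc() != EXPECTED_CRC:
    raise SystemExit("self-check failed: code has been modified")
```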

Ideally, all services would be impossible to abuse. Since this is difficult or impossible, we often restrict access to them to limit the potential pool of adversaries. You will want your employees to be able to do things an anonymous Internet user cannot. Thus, many adversaries want to escalate their privileges to those of some more powerful user, possibly you.

Generally, privilege escalation attacks refer to techniques that require some level of access above that of an anonymous remote system, but grant an even higher level of access, bypassing access controls. They come in horizontal (one user becomes another user) and vertical (a normal user becomes root or Administrator) varieties.

These include locks. I like Medeco, but none are perfect.

In discretionary access control (DAC), the owner of an object decides who may access it: they can choose to let other people write or read it, and so on. This is how file permissions on classic Unix and Windows work. In mandatory access control (MAC), a system-wide policy makes that decision, regardless of the owner's wishes. Often they are combined, where the access request has to pass both tests, meaning that the effective permission set is the intersection of the MAC and DAC permissions. Another way of looking at this configuration is that MAC sets the maximum permissions that a user can give away via DAC. In role-based access control (RBAC), permissions are assigned to roles, and one switches roles to change permission sets.
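
A minimal sketch of the intersection rule, with invented users and permission sets: a request succeeds only if both the MAC policy and the DAC ACL allow it.

```python
# Sketch of "effective permissions are the intersection of MAC and DAC":
# MAC is the ceiling set by system-wide policy; DAC is what the object's
# owner chose to grant.  A request succeeds only if both allow it.
MAC_POLICY = {"alice": {"read"}, "bob": {"read", "write"}}     # hypothetical policy
DAC_ACL    = {"alice": {"read", "write"}, "bob": {"read"}}     # owner-granted

def effective(user: str) -> set:
    return MAC_POLICY.get(user, set()) & DAC_ACL.get(user, set())

def allowed(user: str, operation: str) -> bool:
    return operation in effective(user)

print(allowed("alice", "write"))   # False: DAC grants write, but MAC caps alice at read
print(allowed("bob", "write"))     # False: MAC allows write, but the owner did not
```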

This prevents you from accidentally running malware. Unix emulates this with pseudo-users and sudo. Note that it may not be possible to prevent a user from giving his own access away; as a trivial example, on most operating systems a user can grant shell access with his permissions by creating a listening socket that forwards commands to a shell (often via netcat).

There are many applications which have tried to allow some users to perform some functions, but not others. For example, network-based authorization may depend on a number of factors (in descending order of value). There are other factors involved in authorization decisions, but these are just examples. In a well-designed system these primitive functions would be rather complete, not just the few we have here.


Further, there should be some easy way to compose these tests to achieve the desired access control. Systems which do not support this kind of composition are necessarily incomplete, and cannot express all desired combinations of sets.

Anything whitelisted can always communicate with us, no matter what. In the context of IPs and firewalls, this allows us to blacklist people trying to exploit us using UDP attacks, which are easily forged, while keeping our default gateway and root DNS servers, which we really do want to be able to communicate with, even if forged attacks appear to come from them.

In the context of domains, for example in a web proxy filter, we may whitelist a domain such as example.com. In these cases, it seems you want a more complex access control mechanism to capture the complexity of the sets you are describing. And remember, blacklisting is always playing catch-up. By having an allow list and a deny list, we have four sets of objects defined: things on neither list, things only on the allow list, things only on the deny list, and things on both. The truth table for this is shown in the sketch after this paragraph (D means default, O means open, X means denied).
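
A minimal sketch of that truth table, with whitelist-takes-precedence semantics as described above; the addresses are placeholders from the documentation ranges:

```python
# Sketch of the four outcomes when both an allow list (whitelist) and a deny
# list (blacklist) are in play, with the whitelist taking precedence
# (whitelisted peers can always talk to us):
#
#   on whitelist?  on blacklist?  result
#        no             no        D (default)
#        no             yes       X (denied)
#        yes            no        O (open)
#        yes            yes       O (open: whitelist wins)
WHITELIST = {"192.0.2.1", "192.0.2.53"}      # e.g. default gateway, DNS servers
BLACKLIST = {"203.0.113.7"}                  # observed attackers

def decide(peer: str) -> str:
    if peer in WHITELIST:
        return "open"
    if peer in BLACKLIST:
        return "denied"
    return "default"

print(decide("192.0.2.53"))    # open, even if forged attacks appear to come from it
print(decide("203.0.113.7"))   # denied
```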

Do you see what I mean? Now suppose we wish to allow in everyone except the naughty prime numbers; we would want to express that directly, as a single set expression (see the sketch after this paragraph). So far so good, right? Apache has no way to combine primitives, so it is unable to offer such access control. What we really want is a list of directives that express the set we wish very easily. Aside from the bad user interface of having numbers, netfilter has a number of problems when compared to pf that have always bothered me. My main complaint with pf is that it rearranges the order of your rules such that certain types all get processed before others.
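
A rough sketch of the kind of composition meant here, using ordinary set algebra over small integers standing in for addresses (the sets are invented for illustration):

```python
# Sketch of composing primitives with ordinary set algebra -- the kind of
# expressiveness the text says a fixed allow/deny pair cannot capture.
UNIVERSE = set(range(2, 50))
primes   = {n for n in UNIVERSE if all(n % d for d in range(2, n))}

# "Allow in everyone except the naughty prime numbers":
allowed = UNIVERSE - primes

# Arbitrary combinations stay easy to express, e.g. the same rule but with
# exceptions carved back out for 2 and 3:
also_allowed = (UNIVERSE - primes) | {2, 3}
```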

Still, it is my favorite language for explaining firewall rules. KeyNote, or something like it, is definitely the best authorization (trust management) framework I have found. If your program makes complicated access decisions, or you want it to be able to do so, you should check it out.

Apart from basic prevention steps, you should monitor your systems to help plan your security strategy and become aware of problems, security-related and otherwise. A good system administrator recognizes when something is wrong with his system. I used to have a computer in my bedroom, and could tell what it was doing by the way the disk sounded. Change management is the combination of pro-actively declaring and approving intended changes, and retroactively monitoring the system for changes, comparing them to the approved changes, and alerting on and escalating any unapproved changes.

Change management is based on the theory that unapproved changes are potentially bad, and is therefore related to anomaly detection. It is normally applied to files and databases. Any change to these parameters made on a given system but not in the central configuration file is considered an accident or an attack, so if you really want to make a change it has to be done in the centrally-managed (and ostensibly monitored) configuration file.
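
A minimal sketch of the retroactive half of change management, assuming a baseline of file hashes recorded when changes were approved; the watched paths and baseline filename are placeholders, and real tools such as Tripwire or AIDE do this far more thoroughly:

```python
# Sketch of retroactive change detection: hash a set of watched files and
# compare against a previously recorded baseline of approved state.
import hashlib, json, os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # example paths
BASELINE = "baseline.json"                          # hypothetical baseline file

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot() -> dict:
    return {p: digest(p) for p in WATCHED if os.path.exists(p)}

def unapproved_changes() -> list:
    with open(BASELINE) as f:
        baseline = json.load(f)
    current = snapshot()
    return [p for p in current if baseline.get(p) != current[p]]

for path in unapproved_changes():
    print("unapproved change:", path)
```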

You can also implement similar concepts by using a tool like rsync to manage the contents of part of the file system. Often homogeneous solutions are easier to administer. But there are cases where heterogeneity is easier, or where homogeneity is impossible.
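
A rough sketch of the rsync approach, assuming a central master copy; the host and paths are placeholders, and --dry-run makes it report drift without changing anything (drop it to enforce the managed state):

```python
# Sketch of using rsync to detect (or revert) drift from a centrally managed
# copy of part of the file system.  --dry-run/--itemize-changes make rsync
# report differences without touching anything.
import subprocess

result = subprocess.run(
    ["rsync", "--archive", "--checksum", "--delete",
     "--dry-run", "--itemize-changes",
     "config-master:/srv/managed/etc/", "/etc/"],   # placeholder source/target
    capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    print("would change:", line)
```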

See the principle of uniform fronts. It is absolutely vital that your systems have consistent timestamps. Consistency is more important than accuracy, because you will primarily be comparing logs between your own systems. There are a number of problems comparing timestamps with other systems, including time zones and the fact that their clocks may be skewed. My suggestion is to have one system at every physical location act as the NTP server for that location, so that if the network connection goes down, the site remains internally consistent.
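
A rough sketch of checking clock consistency across your hosts with a bare-bones SNTP query; the hostnames are placeholders for your per-site NTP servers:

```python
# Rough sketch of checking clock consistency: query each host's NTP service
# (SNTP, RFC 4330) and compare its clock against ours.
import socket, struct, time

NTP_EPOCH_OFFSET = 2208988800          # seconds between the 1900 and 1970 epochs

def ntp_time(host: str, timeout: float = 2.0) -> float:
    packet = b"\x1b" + 47 * b"\0"      # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (host, 123))
        data, _ = s.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0]   # transmit timestamp, seconds
    return transmit - NTP_EPOCH_OFFSET

for host in ["ntp1.example.com", "ntp2.example.com"]:    # placeholder hostnames
    try:
        print(host, "offset vs local clock:", ntp_time(host) - time.time(), "s")
    except OSError as exc:
        print(host, "unreachable:", exc)
```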

These per-site servers should all feed into one server for your administrative domain, and that server should connect with numerous external time servers. This also minimizes network traffic, and having a nearby server is almost always better for reducing jitter.

Logs deserve similar protection. The basic premise is that they form a hash chain, where each line includes a hash of the previous line. These logging systems can be linked together, where one periodically sends its latest hash to another, which brings the receiving system into the detection envelope.
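
A minimal sketch of such a hash-chained log; the entries are invented, and a real implementation would also sign or ship the running hash elsewhere:

```python
# Minimal sketch of a hash-chained log: each entry records a hash over the
# previous entry's hash plus the new line, so altering or deleting an earlier
# line breaks every hash that follows it.
import hashlib

def append(log: list, line: str) -> None:
    prev_hash = log[-1][1] if log else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + line).encode()).hexdigest()
    log.append((line, entry_hash))

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for line, entry_hash in log:
        if hashlib.sha256((prev_hash + line).encode()).hexdigest() != entry_hash:
            return False
        prev_hash = entry_hash
    return True

log = []
append(log, "2024-01-01T00:00:00 sshd: accepted key for alice")
append(log, "2024-01-01T00:05:00 sudo: alice ran /usr/bin/less /var/log/messages")
assert verify(log)
# Periodically sending log[-1][1] to another machine links that machine into
# the detection envelope described above.
```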

They can even cross-link, where they form a lattice, mutually supporting one another.

I spend a lot of time reading the same things over and over in security reports. What I want is something that will let me see the changes from day to day.

If an adversary overtly disables our system, we are aware that it has been disabled, and we can assume that something security-relevant occurred during that time.

But if through some oversight on our side we allow a system to stop monitoring something, we do not know whether anything occurred during that time. Therefore, we must be vigilant that our systems are always monitoring, to avoid that sort of ambiguity; in particular, we want to know if they are not reporting because of a misconfiguration or failure.

A related problem is email: one wants to allow benign email, and stop unsolicited bulk email. So I generalized IDS, anti-virus, and anti-spam as abuse detection.

Most intrusion detection systems categorize behavior, making it an instance of the classification problem. Generally, there are two kinds of intrusion detection systems, commonly called misuse detection and anomaly detection. Misuse detection involves products with signature databases which indicate bad behavior. By analogy, this is like a cop who is told to look for guys in black-and-white striped jumpsuits carrying burlap sacks with dollar signs printed on them.

This is how physical alarm sensors work; they detect the separation of two objects, or the breaking of a piece of glass, or some other specific thing. Anomaly detection, by contrast, builds a model of normal behavior and flags deviations from it. The first has more false negatives and fewer false positives than the second.

The first theoretically only finds security-relevant events, whereas the second theoretically notes any major changes. The first is great for vendors; they get to sell you a subscription to the signature database. In misuse detection, you need to have a good idea of what the adversary is after, or how they may operate.
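
A toy sketch contrasting the two approaches; the signatures, baseline, and threshold are invented for illustration:

```python
# Side-by-side sketch: misuse detection matches known-bad signatures (few
# false positives, misses anything unlisted), while anomaly detection flags
# deviation from a baseline of normal behaviour (catches novel events, but
# also flags harmless change).
SIGNATURES = ["/etc/passwd%00", "SELECT * FROM users WHERE", "<script>"]

def misuse_alert(request: str) -> bool:
    return any(sig in request for sig in SIGNATURES)

BASELINE_MEAN_REQ_PER_MIN = 40.0

def anomaly_alert(requests_per_minute: float, tolerance: float = 3.0) -> bool:
    return requests_per_minute > BASELINE_MEAN_REQ_PER_MIN * tolerance

print(misuse_alert("GET /index.html"))   # False: not in the signature list
print(anomaly_alert(500.0))              # True: far above the usual rate
```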

In this sense, misuse detection is a kind of enumerating badness, which means anything not specifically listed is allowed, and therefore it violates the principle of least privilege. Anomaly detection, by contrast, is an interesting research direction which draws inspiration from biological systems that distinguish self from non-self and destroy non-self objects. Most anti-virus software looks for certain signatures present in viruses.

Instead, they could look at what the virus is attempting to do by simulating its execution. Perhaps virtual machines may help to run a quarantined virus at nearly full speed.

Noted security expert Marcus Ranum once gave a talk on burglar alarms at USENIX Security, and it had a lesson that applies to computer security.

He said that when a customer of theirs had an alarm sensor that was disguised as a jewelry container or a gun cabinet, it was almost always sure to trick the burglar and trigger the alarm. Criminals, by and large, are opportunistic, and when something valuable is offered to them, they rarely look a gift horse in the mouth.

I also recall a sting operation where a law enforcement agency had a list of criminals they wanted to locate but who never seemed to be home. So a honey trap may well be the cheapest and most effective misuse detection mechanism you can employ.

One of the ways to detect spam is to have an email address which should never receive any email; if any email is received there, it is by definition from a spammer.

These are called spamtraps. Similarly, Unix systems may have user accounts with guessable passwords and no actual owners, so they should never see any legitimate logins. Any transaction on such an account is, by definition, fraudulent and a sign of a compromised system. One could even go further and define a profile of expected transactions, possibly pseudo-random, any deviation from which is considered very important to investigate. The advantages of these types of traps are an extremely low false-positive rate and their deterrent effect on potential adversaries who fear being caught and punished.
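
A minimal sketch of watching for logins to such trap accounts; the account names and log path are assumptions to adjust for your own systems:

```python
# Sketch of watching for logins to trap ("honeytoken") accounts that should
# never see legitimate use.
TRAP_ACCOUNTS = {"backup2", "test", "oracle8"}    # hypothetical decoy accounts
AUTH_LOG = "/var/log/auth.log"                    # Debian-style path; adjust

def trap_logins(path: str = AUTH_LOG) -> list:
    hits = []
    with open(path, errors="replace") as f:
        for line in f:
            if "Accepted" in line and any(
                    f" for {account} " in line for account in TRAP_ACCOUNTS):
                hits.append(line.rstrip())
    return hits

for hit in trap_logins():
    print("ALERT: login to trap account:", hit)
```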

One may wish to check that whatever trips such a trap has a controlling tty as well, so that root-owned scripts do not set it off. In fact, having a root-owned shell with no controlling tty may itself be an event worth logging.
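
A rough sketch of that check, flagging root-owned shells that ps reports as having no controlling tty; the list of shell names is an assumption:

```python
# Sketch of the check mentioned above: list processes, and flag root-owned
# shells that have no controlling tty (ps prints "?" for those).
import subprocess

SHELLS = {"sh", "bash", "dash", "zsh", "ksh", "csh"}

ps = subprocess.run(["ps", "-eo", "user,tty,comm"],
                    capture_output=True, text=True, check=True)

for line in ps.stdout.splitlines()[1:]:          # skip the header row
    fields = line.split(None, 2)
    if len(fields) != 3:
        continue
    user, tty, comm = fields
    if user == "root" and tty == "?" and comm.lstrip("-") in SHELLS:
        print("root shell with no controlling tty:", line)
```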

Terms for malware (trojan, rootkit, spyware, and so on) are not mutually exclusive; a given piece of malware may be a trojan which installs a rootkit and then spies on the user. If you find malware on your system, there are few good responses. Malware authors commonly test their creations against the most popular anti-virus products before releasing them; therefore, it may be wise to avoid the big names.