CISSP exam: How to pass on your first try and get a good score
Want to become a CISSP? Here’s everything you need to know: how difficult the exam is, tips for studying, what’s needed to obtain a passing score, and Q&A practice questions.
Everything you’ve heard about what it takes to pass the CISSP exam is true. It’s both disarmingly easy and bewilderingly difficult. It’s at once incredibly rewarding and pull-out-your-hair aggravating.
This article aims to demystify the process and help you prepare with tips for obtaining one of the most prestigious cybersecurity certifications in the field.
What is the CISSP?
CISSP stands for Certified Information Systems Security Professional. The credential was created in 1991 by (ISC)2 Inc., a nonprofit that is the caretaker and credentialing body for the CISSP.
According to (ISC)2, the certification is “an elite way to demonstrate your knowledge, advance your career and become a member of a community of cybersecurity leaders. It shows you have all it takes to design, engineer, implement, and run an information security program.”
What are the requirements for obtaining and maintaining a CISSP?
To qualify, you need at least five cumulative years of paid, full-time professional experience, including at least two years of work in the exam’s eight Common Body of Knowledge (CBK) domains.
Alternatively, you can have four years of experience, plus either a four-year college degree or an approved credential from the CISSP Prerequisite Pathway. You also have to agree to the (ISC)2 Code of Ethics and provide background information on things like felony convictions and involvement with hackers.
The second step is to pass the CISSP exam. If you fail the first time, you can retake it, though you have to pay each time. If you pass, you must obtain a written endorsement within nine months from someone who can attest to your professional experience and who is an active (ISC)2 credential holder in good standing.
The certification is valid for three years. Each year, you must earn and post at least 40 continuing professional education credits through educational activities, such as attending live events, online seminars, and other learning opportunities. There is also an annual maintenance fee.
Why get a CISSP?
Most current and would-be CISSPs say the primary reason they want a CISSP is to increase their marketability. Other motivations include filling in knowledge gaps, earning peer recognition, expanding one’s professional network, and contributing to the development and maturation of the cybersecurity profession.
One benefit of CISSP certification is that, in preparing for the exam, you’re going to learn a lot about subjects you didn’t know about before. Sure, some of this material is boring and impractical, but studying for the exam will give you a very strong knowledge base in topics like security architecture, risk management, business continuity, information assurance, and more — no matter how hard they seem at the time.
What’s the exam like?
The English-language exam is 100 to 150 questions. These comprise multiple-choice questions, as well as advanced innovative questions.
The English exam uses Computerized Adaptive Testing (CAT), in which an algorithm adjusts the difficulty of each successive question based on the candidate’s demonstrated ability. Candidates are given three hours to complete the exam.
The questions are weighted differently, adding up to 1,000 points. To pass the CISSP exam, you must obtain a minimum passing score of 700. You only receive a score of pass or fail.
If you fail the exam, (ISC)2 reveals some details of your performance. You will receive a ranking of the exam domains according to the percentage of questions you answered correctly. If you’re preparing to take the test a second or third time, one of the most important tips is to look at which domains you did poorly on and pay extra attention to those areas when studying.
CISSP Library (Video Training), 2nd Edition. The CISSP Complete Video Course contains 24 hours of training, divided into 9 lessons with 94 video sub-lessons. The videos consist of live trainer discussions, screencasts, animations, and live demos. The lessons review each exam objective, so you can use this course as a complete study tool for the CISSP exam.
What subjects does the exam cover?
The exam tests on topics from the eight CBK domains:
- Security and Risk Management
- Asset Security
- Security Engineering
- Communications and Network Security
- Identity and Access Management
- Security Assessment and Testing
- Security Operations
- Software Development Security
Tips for passing the CISSP exam
The exam is best characterized as an inch deep and a mile wide. With that in mind, how difficult is it to pass the CISSP exam? It is a matter of perspective.
Here are a few tips to consider when preparing for the big day:
- Don’t play favorites when studying. Some domains cover more material — and in greater depth — than others, but this can be deceiving. Many candidates score poorly because they over-prepare for the big domains and under-prepare for the small ones. It’s unlikely that the exam will present you with an equal distribution of questions across all eight domains. To achieve a passing score, the only safe bet is to study each domain thoroughly.
- But remember that the exam isn’t homogenous. Another common mistake is to adopt a uniform approach to learning the material. Some domains are fact-oriented. You either know the range of dynamic port numbers or you don’t. Others are more contextual and interpretative, focusing on cybersecurity standards, principles, or best practices.
- Mind the gaps. The first thing you should do is to review the main topics in each domain. This will reveal your strengths and weaknesses, helping you to identify and subsequently fill any gaps in knowledge.
- Practice questions are your friend. Take the plunge and buy at least one of the all-in-one books. As you read each chapter or domain, take the practice exams in the book and online. Plan to take at least two full-length practice tests before sitting for the exam. Considering that you’ll need to answer 70% of the real exam questions correctly, it’s advisable to reach a point where you can consistently nail at least 85% of a practice test. While you’ll never encounter a practice question on the actual exam, running through them will help drill the broader concepts into your head.
- Develop — and stick to — a training schedule. Just as if you were preparing to run a marathon, create a study schedule. It can be helpful to work backward from your exam date to ensure you’re allotting enough time to cover each domain. While you should stick to your training plan as closely as possible, it’s also important to be flexible. Don’t arbitrarily move on from one topic before you’re ready just because the schedule says so.
- Make time to review previously studied material. Decades of research have shown that cramming simply does not work. The brain retains information best when it’s been reviewed several times over a longer-term. Think about how many times you’ve met someone and forgotten their name within five seconds. Earning a CISSP passing score will require you to recall a lot more than that.
- Don’t underestimate basic logistics. It sounds cliché, but get plenty of sleep the night before. Eat before the test. Avoid selecting an exam location more than an hour away or an exam time close to rush hour. Find out whether the test computers at your location use Macs or PCs. If you’re uncomfortable with one, choose a location that uses your preferred machine — and, importantly, mouse.
Do I need to take one of the CISSP exam-cram classes?
A boot camp will give you a lot more confidence that you’re on the right track. The instructors can help you grasp complex topics, and you can band together with fellow students to form study groups. All of this helps you stay motivated and pass the CISSP exam. One such highly recommended course:
Take the Domain 1 and 2 CISSP certification boot camp: get 3 hours of video, downloadable slides, and practice questions.
What you’ll learn
- Prepare for the 2018 version of the Certified Information Systems Security Professional (CISSP) Certification Exam (next CISSP update is in 2021).
- Clear understanding of CISSP Domain 1 (Security and Risk Management).
- Clear understanding of CISSP Domain 2 (Asset Security).
- Understand IT Security and Cyber Security from a management level perspective.
- Where to start on your CISSP certification journey.
- Learn why you should get your CISSP certification and what it can give you.
Here are some of the best publications from Pearson IT Certification:
CISSP Cert Guide, 3rd Edition. Learn, prepare, and practice for CISSP exam success with this Cert Guide from Pearson IT Certification, a leader in IT certification learning.
CISSP Q&A Bank
QUESTION 1 – (Topic 1)
An employee ensures all cables are shielded, builds concrete walls that extend from the true floor to the true ceiling and installs a white noise generator. What attack is the employee trying to protect against?
A. Emanation Attacks
B. Social Engineering
C. Object reuse
D. Wiretapping

Answer: A

Explanation: Emanation attacks intercept the electrical signals that radiate from computing equipment. Countermeasures include shielded cabling, white noise generators, control zones, and TEMPEST equipment (a Faraday cage around the equipment).
The following answers were incorrect:
Social Engineering: Social engineering does not involve hardware. A person uses his or her social skills to trick someone into revealing information they should not disclose.
Object Reuse: This relates to the reuse of storage media. One must ensure that storage media has been sanitized properly before it is reused. This is very important when computer equipment is discarded or donated to a charity; ensure no sensitive data is left behind by degaussing the device or overwriting it multiple times.
Wiretapping: This consists of legally or illegally tapping into someone else’s phone line to eavesdrop on their communications.
The following reference was used to create this question:
Shon Harris AIO 4th Edition
CISSP QUESTION 2 – (Topic 1)
What is an error called that causes a system to be vulnerable because of the environment in which it is installed?
A. Configuration error
B. Environmental error
C. Access validation error
D. Exceptional condition handling error

Answer: B
Explanation: In an environmental error, the environment in which a system is installed somehow causes the system to be vulnerable. This may be due, for example, to an unexpected interaction between an application and the operating system or between two applications on the same host. A configuration error occurs when user-controllable settings in a system are set such that the system is vulnerable. In an access validation error, the system is vulnerable because the access control mechanism is faulty. In an exceptional condition handling error, the system somehow becomes vulnerable due to an exceptional condition that has arisen.
Source: DUPUIS, Clement, Access Control Systems, and Methodology CISSP Open Study Guide, version 10, march 2002 (page 106).
CISSP QUESTION 3 – (Topic 1)
Which of the following biometric devices has the lowest user acceptance level?
A. Retina Scan
B. Fingerprint scan
C. Hand geometry
D. Signature recognition

Answer: A
Explanation: According to the cited reference, of the given options, the retina scan has the lowest user acceptance level: the user must bring the eye close to the device, which is intrusive and not user friendly.
However, the retina scan is also the most precise, with roughly one error per ten million uses.
CISSP QUESTION 4 – (Topic 1)
When a biometric system is used, which error type deals with the possibility of GRANTING access to impostors who should be REJECTED?
A. Type I error
B. Type II error
C. Type III error
D. Crossover error

Answer: B
Explanation: When the biometric system accepts impostors who should have been rejected, it is called a Type II error, also known as the False Acceptance Rate (FAR).
Biometrics verifies an individual’s identity by analyzing a unique personal attribute or behavior, which is one of the most effective and accurate methods of verifying identification.
Biometrics is a very sophisticated technology; thus, it is much more expensive and complex than the other types of identity verification processes. A biometric system can make authentication decisions based on an individual’s behavior, as in signature dynamics, but these can change over time and possibly be forged.
Biometric systems that base authentication decisions on physical attributes (iris, retina, fingerprint) provide more accuracy, because physical attributes typically don’t change much, absent some disfiguring injury, and are harder to impersonate.
When a biometric system rejects an authorized individual, it is called a Type I error (False Rejection Rate (FRR) or False Reject Rate (FRR)).
When the system accepts impostors who should be rejected, it is called a Type II error (False Acceptance Rate (FAR) or False Accept Rate (FAR)). Type II errors are the most dangerous and thus the most important to avoid.
The goal is to obtain low numbers for both types of error. When comparing different biometric systems, many variables come into play, but one of the most important metrics is the crossover error rate (CER).
The accuracy of any biometric method is measured in terms of the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), both expressed as percentages. The FAR is the rate at which attempts by unauthorized users are incorrectly accepted as valid. The FRR is just the opposite: it measures the rate at which authorized users are denied access.
The FRR (Type I) and FAR (Type II) are inversely related: as one rate increases, the other decreases. The Crossover Error Rate (CER) is the point at which the FRR and the FAR have the same value, and it is often considered a good indicator of the overall accuracy of a biometric system; solutions with a lower CER are typically more accurate. The CER is also called the Equal Error Rate (EER); the two terms are synonymous.
The other answers are incorrect:
A Type I error, also called the False Rejection Rate, occurs when a valid user is rejected by the system. Type III error: there is no such error type in biometric systems.
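The FAR/FRR trade-off described above can be made concrete with a small script. This is an illustrative sketch, not exam material: the score lists and thresholds are made-up values, and a real system would sweep far more thresholds against real match-score distributions.

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    # FAR: fraction of impostor attempts accepted (score at or above threshold)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts rejected (score below threshold)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def crossover_error_rate(impostor_scores, genuine_scores, thresholds):
    # The CER/EER is (approximately) the threshold where FAR and FRR meet
    best = min(thresholds, key=lambda t: abs(
        far_frr(impostor_scores, genuine_scores, t)[0]
        - far_frr(impostor_scores, genuine_scores, t)[1]))
    return best, far_frr(impostor_scores, genuine_scores, best)
```

Raising the threshold lowers the FAR (fewer impostors accepted) but raises the FRR (more legitimate users rejected), which is exactly the inverse relationship the explanation describes.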
CISSP QUESTION 5 – (Topic 1)
A host-based IDS is resident on which of the following?
A. On each of the critical hosts
B. On decentralized hosts
C. On central hosts
D. On bastion hosts

Answer: A
Explanation: A host-based IDS is resident on a host and reviews the system and event logs in order to detect an attack on the host and to determine whether the attack was successful. All critical servers should have a Host-Based Intrusion Detection System (HIDS) installed. A network-based IDS cannot detect a pattern of attacks within encrypted traffic, but a HIDS may be able to detect such an attack after the traffic has been decrypted on the host. This is why critical servers should have both NIDS and HIDS.
A HIDS will monitor all or part of the dynamic behavior and state of a computer system. Much as a NIDS will dynamically inspect network packets, a HIDS might detect which program accesses which resources and ensure that (say) a word processor hasn’t suddenly and inexplicably started modifying the system password database. Similarly, a HIDS might look at the state of a system and its stored information, whether in RAM, in the file system, or elsewhere, and check that the contents appear as expected.
CISSP QUESTION 6 – (Topic 1)
Which of the following is NOT a security characteristic to consider when choosing a biometric identification system?
A. Data acquisition process
B. Cost
C. Enrollment process
D. Speed and user interface

Answer: B
Explanation: Cost is a factor when considering Biometrics but it is not a security characteristic. All the other answers are incorrect because they are security characteristics related to Biometrics.
The data acquisition process can cause a security concern because if the process is not fast and efficient it can discourage individuals from using the process.
The enrollment process can cause a security concern because the enrollment process has to be quick and efficient. This process captures data for authentication.
Speed and user interface can cause a security concern because this also impacts the users’ acceptance rate of biometrics. If they are not comfortable with the interface and speed they might sabotage the devices or otherwise attempt to circumvent them.
CISSP QUESTION 7 – (Topic 1)
Which of the following best describes an exploit?
A. An intentional hidden message or feature in an object such as a piece of software or a movie.
B. A chunk of data, or sequence of commands that take advantage of a bug, glitch or vulnerability in order to cause unintended or unanticipated behavior to occur on the computer
C. An anomalous condition where a process attempts to store data beyond the boundaries of a fixed-length buffer
D. A condition where a program (either an application or part of the operating system) stops performing its expected function and also stops responding to other parts of the system

Answer: B
Explanation: The following answers are incorrect:
An intentional hidden message or feature in an object such as a piece of software or a movie.
This is the definition of an “Easter Egg” which is code within code. A good example of this was a small flight simulator that was hidden within Microsoft Excel. If you know which cell to go to on your spreadsheet and the special code to type in that cell, you were able to run the flight simulator.
An anomalous condition where a process attempts to store data beyond the boundaries of a fixed-length buffer
This is the definition of a “Buffer Overflow”. Many pieces of exploit code may contain some buffer overflow code but considering all the choices presented this was not the best choice. It is one of the vulnerabilities that the exploit would take care of if no data input validation is taking place within the software that you are targeting.
A condition where a program (either an application or part of the operating system) stops performing its expected function and also stops responding to other parts of the system This is the definition of a “System Crash”. Such behavior might be the result of the exploit code being launched against the target.
CISSP QUESTION 8 – (Topic 1)
Which type of password token involves time synchronization?
A. Static password tokens
B. Synchronous dynamic password tokens
C. Asynchronous dynamic password tokens
D. Challenge-response tokens

Answer: B
Explanation: Synchronous dynamic password tokens generate a new unique password value at fixed time intervals, so the server and the token must be time-synchronized for the password to be accepted.
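As an illustration of the time-synchronization idea (not part of the exam answer), here is a minimal TOTP-style generator in the spirit of RFC 6238. The secret, time step, and digit count below are hypothetical parameters:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Token and server derive the same one-time code from the shared
    secret and the current time window, so their clocks must agree."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                 # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Any two calls that fall inside the same 30-second window produce the same code; if the token’s clock drifts outside the window the server accepts, authentication fails.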
CISSP QUESTION 9 – (Topic 1)
Technical controls such as encryption and access control can be built into the operating system, be software applications, or can be supplemental hardware/software units. Such controls, also known as logical controls, represent which pairing?
A. Preventive/Administrative Pairing
B. Preventive/Technical Pairing
C. Preventive/Physical Pairing
D. Detective/Technical Pairing

Answer: B
Explanation: Preventive/Technical controls are also known as logical controls and can be built into the operating system, be software applications, or be supplemental hardware/software units.
CISSP QUESTION 10 – (Topic 1)
This is a common security issue that is extremely hard to control in large environments. It occurs when a user has more computer rights, permissions, and access than what is required for the tasks the user needs to fulfill. What best describes this scenario?
A. Excessive Rights
B. Excessive Access
C. Excessive Permissions
D. Excessive Privileges

Answer: D
Explanation: Even though all four terms are very close to one another, the best choice is Excessive Privileges, which encompasses the other three choices presented.
CISSP QUESTION 11 – (Topic 1)
In Mandatory Access Control, sensitivity labels attached to objects contain what information?
A. The item’s classification
B. The item’s classification and category set
C. The item’s category
D. The item’s need to know

Answer: B
Explanation: The correct answer is the item’s classification and category set.
A sensitivity label must contain at least one classification and at least one category. Category set and compartment set are synonyms. It is common in some environments for a single item to belong to multiple categories; the list of all the categories to which an item belongs is called a compartment set or category set.
The following answers are incorrect:
The item’s classification: incorrect because a category set is also required.
The item’s category: incorrect because both the classification and the category set are required.
The item’s need to know: incorrect because there is no such thing; the need to know is indicated by the categories to which the object belongs.
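To show how a classification plus a category set drives a MAC decision, here is a hedged sketch. The level names and categories are hypothetical, and real MAC implementations are far more involved:

```python
from dataclasses import dataclass

# Hypothetical ordering of classifications, lowest to highest
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    classification: str        # exactly one classification...
    categories: frozenset      # ...plus a category (compartment) set

def dominates(subject: Label, obj: Label) -> bool:
    # Read access requires the subject's clearance to be at least the object's
    # classification AND the subject to hold every category on the object's label
    return (LEVELS[subject.classification] >= LEVELS[obj.classification]
            and obj.categories <= subject.categories)
```

Note that a high clearance alone is not enough: a TOP SECRET subject still cannot read a SECRET object labeled with a category the subject does not hold, which is how labels implement need to know.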
CISSP QUESTION 12 – (Topic 1)
Which of the following tools is less likely to be used by a hacker?
D. John the Ripper
Explanation: Tripwire is an integrity checking product, triggering alarms when important files (e.g. system or configuration files) are modified.
This is a tool that is not likely to be used by hackers, other than for studying its workings in order to circumvent it. Other programs are password-cracking programs and are likely to be used by security administrators as well as by hackers.
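The integrity-checking idea behind Tripwire can be sketched in a few lines: record a baseline of file hashes, then later flag any file whose hash no longer matches. This is a simplified illustration, not Tripwire’s actual implementation:

```python
import hashlib

def snapshot(paths):
    # Baseline: record a SHA-256 digest for each monitored file
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_changes(baseline):
    # Re-hash each file; any mismatch means the contents were modified
    changed = []
    for path, digest in baseline.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                changed.append(path)
    return changed
```

In practice the baseline itself must be stored out of an attacker’s reach (e.g. on read-only media), otherwise the attacker can simply re-baseline after modifying the files.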
CISSP QUESTION 13 – (Topic 1)
In biometric identification systems, the parts of the body conveniently available for identification are:
A. neck and mouth
B. hands, face, and eyes
C. feet and hair
D. voice and neck

Answer: B
Explanation: Today implementation of fast, accurate, reliable, and user-acceptable biometric identification systems are already underway. Because most identity authentication takes place when people are fully clothed (neck to feet and wrists), the parts of the body conveniently available for this purpose are hands, face, and eyes.
CISSP QUESTION 14 – (Topic 1)
Which access control model is also called Non-Discretionary Access Control (NDAC)?
A. Lattice-based access control
B. Mandatory access control
C. Role-based access control
D. Label-based access control

Answer: C
Explanation: RBAC is sometimes also called non-discretionary access control (NDAC) (as Ferraiolo says “to distinguish it from the policy-based specifics of MAC”). Another model that fits within the NDAC category is Rule-Based Access Control (RuBAC or RBAC). Most of the CISSP books use the same acronym for both models but NIST tends to use a lowercase “u” in between R and B to differentiate the two models.
You can certainly mimic MAC using RBAC but true MAC makes use of Labels that contains the sensitivity of the objects and the categories they belong to. No labels mean MAC is not being used.
One of the most fundamental data access control decisions an organization must make is the amount of control it will give system and data owners to specify the level of access users of that data will have. In every organization, there is a balancing point between the access controls enforced by the organization and system policy and the ability for information owners to determine who can have access based on specific business requirements. The process of translating that balance into a workable access control model can be defined by three general access frameworks:
Discretionary access control Mandatory access control Nondiscretionary access control
A role-based access control (RBAC) model bases access control authorizations on the roles (or functions) that the user is assigned within an organization. The determination of which roles have access to a resource can be governed by the owner of the data, as with DAC, or applied based on policy, as with MAC.
Access control decisions are based on job function, previously defined and governed by policy, and each role (job function) will have its own access capabilities. Objects associated with a role will inherit privileges assigned to that role. This is also true for groups of users, allowing administrators to simplify access control strategies by assigning users to
groups and groups to roles.
There are several approaches to RBAC. As with many system controls, there are variations on how they can be applied within a computer system.
There are four basic RBAC architectures:
1 Non-RBAC:
Non-RBAC is simply user-granted access to data or an application by traditional mappings, such as with ACLs. There are no formal “roles” associated with the mappings, other than any identified by the particular user.
2 Limited RBAC:
Limited RBAC is achieved when users are mapped to roles within a single application rather than through an organization-wide role structure. Users in a limited RBAC system are also able to access non-RBAC-based applications or data. For example, a user may be assigned to multiple roles within several applications and, in addition, have direct access to another application or system independent of his or her assigned role. The key attribute of limited RBAC is that the role for that user is defined within an application and not necessarily based on the user’s organizational job function.
3 Hybrid RBAC:
Hybrid RBAC introduces the use of a role that is applied to multiple applications or systems based on a user’s specific role within the organization. That role is then applied to applications or systems that subscribe to the organization’s role-based model. However, as the term “hybrid” suggests, there are instances where the subject may also be assigned to roles defined solely within specific applications, complementing (or, perhaps, contradicting) the larger, more encompassing organizational role used by other systems.
4 Full RBAC:
Full RBAC systems are controlled by roles defined by the organization’s policy and access control infrastructure and then applied to applications and systems across the enterprise. The applications, systems, and associated data apply permissions based on that enterprise definition, and not one defined by a specific application or system.
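The core RBAC idea, that users receive permissions only through roles, can be sketched as follows. The role and permission names are hypothetical examples, not from any standard:

```python
# Minimal role-based access check: users map to roles, roles map to permissions
ROLE_PERMISSIONS = {
    "hr_clerk":   {"personnel_db:read"},
    "hr_manager": {"personnel_db:read", "personnel_db:write"},
    "auditor":    {"personnel_db:read", "audit_log:read"},
}

USER_ROLES = {
    "alice": {"hr_manager"},
    "bob":   {"hr_clerk", "auditor"},
}

def is_authorized(user, permission):
    # The access decision is based on the user's roles, never the user directly,
    # so changing a person's job means reassigning roles, not editing ACLs
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

This indirection is what simplifies administration: granting a role a new permission updates every user who holds that role at once.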
Be careful not to try to make MAC and DAC opposites of each other — they are two different access control strategies with RBAC being a third strategy that was defined later to address some of the limitations of MAC and DAC.
The other answers are not correct because:
Mandatory access control is incorrect because, though it is by definition not discretionary, it is not called “non-discretionary access control.” MAC makes use of labels to indicate the sensitivity of the object, and it also makes use of categories to implement the need to know.
Label-based access control is incorrect because this is not a name for a type of access control but simply a bogus distractor.
Lattice-based access control is not adequate either. A lattice is a series of levels and a subject will be granted an upper and lower bound within the series of levels. These levels could be sensitivity levels or they could be confidentiality levels or they could be integrity levels.
CISSP QUESTION 15 – (Topic 1)
Which access control model provides upper and lower bounds of access capabilities for a subject?
A. Role-based access control
B. Lattice-based access control
C. Biba access control
D. Content-dependent access control

Answer: B
Explanation: In the lattice model, users are assigned security clearances and the data is classified. Access decisions are made based on the clearance of the user and the classification of the object. Lattice-based access control is an essential ingredient of formal security models such as Bell-LaPadula, Biba, Chinese Wall, etc.
The bounds concept comes from the formal definition of a lattice as a “partially ordered set for which every pair of elements has a greatest lower bound and a least upper bound.” To see the application, consider a file classified as “SECRET” and a user Joe with a security clearance of “TOP SECRET.” Under Bell-LaPadula, Joe’s least upper bound of access to the file is “READ” and his greatest lower bound is “NO WRITE” (the star property).
Role-based access control is incorrect. Under RBAC, the access is controlled by the permissions assigned to a role and the specific role assigned to the user.
Biba access control is incorrect. The Biba integrity model is based on a lattice structure but the context of the question disqualifies it as the best answer.
Content-dependent access control is incorrect. In content-dependent access control, the actual content of the information determines access as enforced by the arbiter.
CISSP QUESTION 16 – (Topic 1)
Which of the following is used to create and modify the structure of your tables and other objects in the database?
A. SQL Data Definition Language (DDL)
B. SQL Data Manipulation Language (DML)
C. SQL Data Relational Language (DRL)
D. SQL Data Identification Language (DIL)

Answer: A
Explanation: The SQL Data Definition Language (DDL) is used to create, modify, and delete views and relations (tables).

Data Definition Language
The Data Definition Language (DDL) is used to create and destroy databases and database objects. These commands will primarily be used by database administrators during the setup and removal phases of a database project. Let’s take a look at the structure and usage of four basic DDL commands:
Installing a database management system (DBMS) on a computer allows you to create and manage many independent databases. For example, you may want to maintain a database of customer contacts for your sales department and a personnel database for your HR department.
The CREATE command can be used to establish each of these databases on your platform. For example, the command:
CREATE DATABASE employees
creates an empty database named “employees” on your DBMS. After creating the database, your next step is to create tables that will contain data. (If this doesn’t make sense, you might want to read the article Microsoft Access Fundamentals for an overview of tables and databases.) Another variant of the CREATE command can be used for this purpose. The command:
CREATE TABLE personal_info (first_name char(20) not null, last_name char(20) not null, employee_id int not null)

creates a table named “personal_info” in the current database, with columns for each employee’s first name, last name, and employee ID.
Alternatively, users may want to limit the attributes that are retrieved from the database. For example, the Human Resources department may require a list of the last names of all employees in the company. The following SQL command would retrieve only that information:
SELECT last_name FROM personal_info
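The DDL and DML statements shown here can be exercised end-to-end with an embedded database; the sketch below uses Python’s built-in sqlite3 module with made-up sample rows:

```python
import sqlite3

# In-memory database: DDL creates the table, DML populates and queries it
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE personal_info (
    first_name  TEXT NOT NULL,
    last_name   TEXT NOT NULL,
    employee_id INTEGER NOT NULL,
    salary      INTEGER)""")
conn.executemany(
    "INSERT INTO personal_info VALUES (?, ?, ?, ?)",
    [("Bart", "Simpson", 12345, 45000),
     ("Lisa", "Simpson", 12346, 62000)])

# Project a single column, as in the last-name report
last_names = [row[0] for row in
              conn.execute("SELECT last_name FROM personal_info")]

# Filter rows with a WHERE clause
high_earners = conn.execute(
    "SELECT first_name FROM personal_info WHERE salary > 50000").fetchall()
```

SQLite accepts the char(20)/int column types used in this article’s examples as well; TEXT and INTEGER are simply its native names for the same ideas.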
Finally, the WHERE clause can be used to limit the records that are retrieved to those that meet specified criteria. The CEO might be interested in reviewing the personnel records of all highly paid employees. The following command retrieves all of the data contained within personal_info for records that have a salary value greater than $50,000:

SELECT * FROM personal_info WHERE salary > 50000

UPDATE
The UPDATE command can be used to modify the information contained within a table, either in bulk or individually. Each year, our company gives all employees a 3% cost-of-living increase in their salary. The following SQL command could be used to quickly apply this to all of the employees stored in the database:
UPDATE personal_info SET salary = salary * 1.03
On the other hand, our new employee Bart Simpson has demonstrated performance above and beyond the call of duty. The management wishes to recognize his stellar accomplishments with a $5,000 raise. The WHERE clause could be used to single out Bart for this raise:
UPDATE personal_info SET salary = salary + 5000
WHERE employee_id = 12345
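Both forms of UPDATE can be sketched with sqlite3 (sample rows invented; integer salaries become floats after multiplying by 1.03, so the results are rounded for display):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE personal_info (
    first_name CHAR(20) NOT NULL, last_name CHAR(20) NOT NULL,
    employee_id INT NOT NULL, salary INT)""")
conn.executemany("INSERT INTO personal_info VALUES (?, ?, ?, ?)",
                 [("Homer", "Simpson", 12344, 45000),
                  ("Bart", "Simpson", 12345, 55000)])

# Bulk change: apply the 3% cost-of-living raise to every row.
conn.execute("UPDATE personal_info SET salary = salary * 1.03")

# Targeted change: a further 5,000 raise for employee 12345 only.
conn.execute("UPDATE personal_info SET salary = salary + 5000 "
             "WHERE employee_id = 12345")

salaries = {row[0]: round(row[1]) for row in
            conn.execute("SELECT employee_id, salary FROM personal_info")}
print(salaries)  # {12344: 46350, 12345: 61650}
```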
DELETE
Finally, let’s take a look at the DELETE command. You’ll find that the syntax of this command is similar to that of the other DML commands. Unfortunately, our latest corporate earnings report didn’t quite meet expectations and poor Bart has been laid off. The DELETE command with a WHERE clause can be used to remove his record from the personal_info table:
DELETE FROM personal_info WHERE employee_id = 12345
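A sqlite3 sketch of the targeted DELETE (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personal_info (first_name CHAR(20) NOT NULL, "
             "last_name CHAR(20) NOT NULL, employee_id INT NOT NULL)")
conn.executemany("INSERT INTO personal_info VALUES (?, ?, ?)",
                 [("Homer", "Simpson", 12344),
                  ("Bart", "Simpson", 12345)])

# Remove Bart's record; the WHERE clause keeps everyone else intact.
conn.execute("DELETE FROM personal_info WHERE employee_id = 12345")

remaining = [row[0] for row in
             conn.execute("SELECT first_name FROM personal_info")]
print(remaining)  # ['Homer']
```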
JOIN Statements
Now that you’ve learned the basics of SQL, it’s time to move on to one of the most powerful concepts the language has to offer – the JOIN statement. Quite simply, these statements allow you to combine data in multiple tables to quickly and efficiently process large quantities of data. These statements are where the true power of a database resides.
We’ll first explore the use of a basic JOIN operation to combine data from two tables. In future installments, we’ll explore the use of outer and inner joins to achieve added power.
We’ll continue with our example using the PERSONAL_INFO table, but first, we’ll need to add an additional table to the mix. Let’s assume we have a table called DISCIPLINARY_ACTION that was created with the following statement:

CREATE TABLE disciplinary_action (action_id int not null, employee_id int not null, comments char(500))
This table contains the results of disciplinary actions on company employees. You’ll notice that it doesn’t contain any information about the employee other than the employee number. It’s then easy to imagine many scenarios where we might want to combine information from the DISCIPLINARY_ACTION and PERSONAL_INFO tables.
Assume we’ve been tasked with creating a report that lists the disciplinary actions taken against all employees with a salary greater than $40,000. The use of a JOIN operation in this case is quite straightforward. We can retrieve this information using the following command:
SELECT personal_info.first_name, personal_info.last_name, disciplinary_action.comments
FROM personal_info, disciplinary_action
WHERE personal_info.employee_id = disciplinary_action.employee_id AND personal_info.salary > 40000
As you can see, we simply specified the two tables that we wished to join in the FROM clause and then included a statement in the WHERE clause to limit the results to records that had matching employee IDs and met our criteria of a salary greater than $40,000.
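The same join can be reproduced with sqlite3; the rows below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personal_info (first_name CHAR(20) NOT NULL, "
             "last_name CHAR(20) NOT NULL, employee_id INT NOT NULL, salary INT)")
conn.execute("CREATE TABLE disciplinary_action (action_id INT NOT NULL, "
             "employee_id INT NOT NULL, comments CHAR(500))")
conn.executemany("INSERT INTO personal_info VALUES (?, ?, ?, ?)",
                 [("Homer", "Simpson", 12344, 35000),
                  ("Bart", "Simpson", 12345, 55000)])
conn.executemany("INSERT INTO disciplinary_action VALUES (?, ?, ?)",
                 [(1, 12344, "Asleep at workstation"),
                  (2, 12345, "Prank calls to Moe's")])

# Join the two tables on employee_id and filter on salary; only Bart
# earns more than 40,000, so only his disciplinary record is returned.
rows = conn.execute(
    "SELECT personal_info.first_name, personal_info.last_name, "
    "       disciplinary_action.comments "
    "FROM personal_info, disciplinary_action "
    "WHERE personal_info.employee_id = disciplinary_action.employee_id "
    "AND personal_info.salary > 40000").fetchall()
print(rows)
```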
Another term you must be familiar with as a security mechanism in databases is the VIEW.

What is a view?
In database theory, a view is a virtual or logical table composed of the result set of a query. Unlike ordinary tables (base tables) in a relational database, a view is not part of the physical schema: it is a dynamic, virtual table computed or collated from data in the database. Changing the data in a table alters the data shown in the view.
Only the definition of a view is stored permanently in the database; its result set is computed anew each time the view is queried, much like the temporary result of an ordinary query. Views can provide advantages over tables:
They can subset the data contained in a table
They can join and simplify multiple tables into a single virtual table
Views can act as aggregated tables, where aggregated data (sum, average etc.) are calculated and presented as part of the data
Views can hide the complexity of data, for example a view could appear as Sales2000 or Sales2001, transparently partitioning the actual underlying table
Views take very little space to store; only the definition is stored, not a copy of all the data they present
Depending on the SQL engine used, views can provide extra security by limiting the degree to which the underlying table or tables are exposed to the outside world
Just like functions (in programming) provide abstraction, views can be used to create abstraction. Also, just like functions, views can be nested, thus one view can aggregate data from other views. Without the use of views it would be much harder to normalise databases above second normal form. Views can make it easier to create lossless join decomposition.
Rows available through a view are not sorted. A view is a relational table, and the relational model states that a table is a set of rows. Since sets are not ordered by definition, the rows in a view are not ordered either. Therefore, an ORDER BY clause in the view definition is meaningless, and the SQL standard (SQL:2003) does not allow one in the subselect of a CREATE VIEW statement.
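The dynamic nature of a view — only its definition is stored, and its rows are recomputed on each query — can be demonstrated with sqlite3 (table layout simplified, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personal_info (last_name CHAR(20) NOT NULL, "
             "employee_id INT NOT NULL, salary INT)")
conn.execute("INSERT INTO personal_info VALUES ('Simpson', 12345, 55000)")

# Only the view's definition is stored; its rows are computed per query.
conn.execute("CREATE VIEW highly_paid AS "
             "SELECT last_name, salary FROM personal_info "
             "WHERE salary > 50000")

before = conn.execute("SELECT COUNT(*) FROM highly_paid").fetchone()[0]

# Changing the base table immediately changes what the view shows.
conn.execute("INSERT INTO personal_info VALUES ('Burns', 99999, 1000000)")
after = conn.execute("SELECT COUNT(*) FROM highly_paid").fetchone()[0]

print(before, after)  # 1 2
```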
QUESTION 17 – (Topic 1)
Which of the following testing methods examines the internal structure or workings of an application?
A. White-box testing
B. Parallel Test
C. Regression Testing
D. Pilot Testing
Explanation: White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its
functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
White-box testing can be applied at the unit, integration, and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It
can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements.
For your exam you should know the information below:
Alpha and Beta Testing – An alpha version is an early version of the application system submitted for internal testing. The alpha version may not contain all the features planned for the final version. Typically, software goes through two stages of testing before it is considered finished. The first stage, called alpha testing, is often performed only by users within the organization developing the software. The second stage, called beta testing, is a form of user acceptance testing that generally involves a limited number of external users. Beta testing is the last stage of testing and normally involves real-world exposure, such as sending the beta version of the product to independent beta test sites or offering it free to interested users.
Pilot Testing – A preliminary test that focuses on specific and predefined aspects of a system. It is not meant to replace other testing methods, but rather to provide a limited evaluation of the system. Proof-of-concept tests are early pilot tests, usually run on an interim platform and with only basic functionality.
White box testing – Assesses the effectiveness of a software program’s logic. Specifically, test data are used to determine procedural accuracy or the conditions of a program’s specific logic paths. However, testing all possible logic paths in a large information system is not feasible and would be cost-prohibitive, so white box testing is used on a selective basis only.
Black Box Testing – An integrity-based form of testing associated with testing components of an information system’s “functional” operating effectiveness without regard to any specific internal program structure. Applicable to integration and user acceptance testing.
Function/validation testing – Similar to system testing, but often used to test the functionality of the system against the detailed requirements, to ensure that the software that has been built is traceable to customer requirements.

Regression Testing – The process of rerunning a portion of a test scenario or test plan to ensure that changes or corrections have not introduced new errors. The data used in regression testing should be the same as the original data.
Parallel Testing – This is the process of feeding test data into two systems – the modified system and an alternative system and comparing the result.
Sociability Testing – The purpose of these tests is to confirm that a new or modified system can operate in its target environment without adversely impacting the existing system. This should cover not only the platform that will perform primary application processing and interface with other systems but also, in client-server and web development, changes to the desktop environment. Multiple applications may run on the user’s desktop, potentially simultaneously, so it is important to test the impact of installing new dynamic link libraries (DLLs), making operating system registry or configuration file modifications, and possibly using extra memory.
The following answers are incorrect:
Parallel Testing – This is the process of feeding test data into two systems – the modified system and an alternative system and comparing the result.
Regression Testing – The process of rerunning a portion of a test scenario or test plan to ensure that changes or corrections have not introduced new errors. The data used in regression testing should be the same as the original data.
Pilot Testing – A preliminary test that focuses on specific and predefined aspects of a system. It is not meant to replace other testing methods, but rather to provide a limited evaluation of the system. Proof-of-concept tests are early pilot tests, usually run on an interim platform and with only basic functionality.
QUESTION 18 – (Topic 1)
In which of the following types of Intrusion Detection System (IDS) are attributes that characterize an attack stored for reference?
A. signature-based IDS
B. statistical anomaly-based IDS
C. event-based IDS
D. inference-based IDS
Explanation: Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49
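To make the idea concrete, here is a toy sketch of signature matching, the core of a signature-based IDS; the signature names and patterns are invented and far simpler than real IDS rule languages:

```python
# Toy illustration: a signature-based IDS matches observed events against
# a database of stored attack attributes (signatures). The names and
# patterns below are invented for illustration only.
SIGNATURES = {
    "sql_injection": "' OR 1=1",
    "path_traversal": "../../",
}

def match_signatures(event: str) -> list:
    """Return the names of all stored signatures found in the event."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in event]

print(match_signatures("GET /index.php?id=' OR 1=1 --"))  # ['sql_injection']
print(match_signatures("GET /index.html"))                # []
```

Because matching depends entirely on the stored attribute database, a signature-based IDS cannot flag attacks it has no signature for — which is exactly the weakness that statistical anomaly-based systems try to address.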
QUESTION 19 – (Topic 1)
Who developed one of the first mathematical models of a multilevel-security computer system?
A. Diffie and Hellman.
B. Clark and Wilson.
C. Bell and LaPadula.
D. Gasser and Lipner.
Explanation: In 1973 Bell and LaPadula created the first mathematical model of a multi-level security system. The following answers are incorrect:
Diffie and Hellman. This is incorrect because Diffie and Hellman were involved with cryptography.
Clark and Wilson. This is incorrect because Bell and LaPadula was the first model; the Clark-Wilson model came later, in 1987.
Gasser and Lipner. This is incorrect, it is a distractor. Bell and LaPadula was the first model.
QUESTION 20 – (Topic 1)
Suppose you are a domain administrator and are choosing an employee to carry out backups. Which access control method do you think would be best for this scenario?
A. RBAC – Role-Based Access Control
B. MAC – Mandatory Access Control
C. DAC – Discretionary Access Control
D. RBAC – Rule-Based Access Control
Explanation: RBAC – Role-Based Access Control permissions would fit best for a backup job for the employee because the permissions correlate tightly with permissions granted to a backup operator.
A role-based access control (RBAC) model bases the access control authorizations on the roles (or functions) that the user is assigned within an organization. The determination of what roles have access to a resource can be governed by the owner of the data, as with DACs, or applied based on policy, as with MACs. Access control decisions are based on job function, previously defined and governed by policy, and each role (job function) will have its own access capabilities. Objects associated with a role will inherit privileges assigned to that role. This is also true for groups of users, allowing administrators to simplify access control strategies by assigning users to groups and groups to roles.
Specifically, in the Microsoft Windows world, there is a security group called “Backup Operators” in which you can place the users to carry out the duties. This way you could assign the backup privilege without the need to grant the Restore privilege. This would prevent errors or a malicious person from overwriting the current data with an old copy for example.
The following answers are incorrect:
- MAC – Mandatory Access Control: This isn’t the right answer. MAC bases access decisions on security labels and clearances rather than on job functions, so it is a poor fit for delegating a routine duty such as backups.
- DAC – Discretionary Access Control: This isn’t the correct answer because DAC relies on data owners/creators to determine who has access to information.
- RBAC – Rule-Based Access Control: If you got this wrong it may be because you didn’t read past the RBAC part. Be very careful to read the entire question and answers before proceeding.
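The role/permission indirection described above can be sketched in a few lines of Python; the role names, permissions, and users are invented for illustration:

```python
# Toy RBAC sketch: permissions attach to roles, and users acquire them
# only through role membership. All names here are invented.
ROLE_PERMISSIONS = {
    # A backup operator can back up but deliberately cannot restore,
    # which prevents overwriting current data with an old copy.
    "backup_operator": {"backup"},
    "domain_admin": {"backup", "restore", "manage_users"},
}

USER_ROLES = {
    "alice": {"backup_operator"},
    "bob": {"domain_admin"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Access decision based purely on the user's assigned roles."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

print(is_allowed("alice", "backup"))   # True
print(is_allowed("alice", "restore"))  # False
```

Granting Alice the backup duty is then a one-line role assignment, mirroring how the Windows "Backup Operators" group works.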
QUESTION 21 – (Topic 1)
Logical or technical controls involve the restriction of access to systems and the protection of information. Which of the following statements pertaining to these types of controls is correct?
A. Examples of these types of controls include policies and procedures, security awareness training, background checks, work habit checks but do not include a review of vacation history, and also do not include increased supervision.
B. Examples of these types of controls do not include encryption, smart cards, access lists, and transmission protocols.
C. Examples of these types of controls are encryption, smart cards, access lists, and transmission protocols.
D. Examples of these types of controls include policies and procedures, security awareness training, background checks, work habit checks, a review of vacation history, and increased supervision.
Explanation: Logical or technical controls involve the restriction of access to systems and the protection of information. Examples of these types of controls are encryption, smart cards, access lists, and transmission protocols.
QUESTION 22 – (Topic 1)
What kind of certificate is used to validate a user identity?
A. Public key certificate
B. Attribute certificate
C. Root certificate
D. Code signing certificate
Explanation: In cryptography, a public key certificate (or identity certificate) is an electronic document that incorporates a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.
In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate
authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or other users (“endorsements”). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.
In computer security, an authorization certificate (also known as an attribute certificate) is a digital document that describes written permission from the issuer to use a service or a resource that the issuer controls or has access to use. The permission can be delegated.
Some people constantly confuse PKCs and ACs. An analogy may make the distinction clear. A PKC can be considered to be like a passport: it identifies the holder, tends to last for a long time, and should not be trivial to obtain. An AC is more like an entry visa: it is typically issued by a different authority and does not last for as long a time. As acquiring an entry visa typically requires presenting a passport, getting a visa can be a simpler process.
A real-life example of this can be found in the mobile software deployments of large service providers; such certificates are typically applied to platforms such as Microsoft Smartphone (and related), Symbian OS, J2ME, and others.
In each of these systems a mobile communications service provider may customize the mobile terminal client distribution (ie. the mobile phone operating system or application environment) to include one or more root certificates each associated with a set of capabilities or permissions such as “update firmware”, “access address book”, “use radio interface”, and the most basic one, “install and execute”. When a developer wishes to enable distribution and execution in one of these controlled environments they must acquire a certificate from an appropriate CA, typically a large commercial CA, and in the process they usually have their identity verified using out-of-band mechanisms such as a combination of a phone call, validation of their legal entity through government and commercial databases, etc., similar to the high assurance SSL certificate vetting process, though often there are additional specific requirements imposed on would-be developers/publishers.
Once the identity has been validated they are issued an identity certificate they can use to sign their software; generally, the software signed by the developer or publisher’s identity certificate is not distributed but rather it is submitted to the processor to possibly test or profile the content before generating an authorization certificate which is unique to the particular software release. That certificate is then used with an ephemeral asymmetric key-pair to sign the software as the last step of preparation for distribution. There are many advantages to separating the identity and authorization certificates especially relating to risk mitigation of new content being accepted into the system and key management as well
as recovery from errant software which can be used as attack vectors.
QUESTION 23 – (Topic 1)
Which of the following access control models requires security clearance for subjects?
A. Identity-based access control
B. Role-based access control
C. Discretionary access control
D. Mandatory access control
Explanation: With mandatory access control (MAC), the authorization of a subject’s access to an object is dependent upon labels, which indicate the subject’s clearance. Identity-based access control is a type of discretionary access control. Role-based access control is a type of non-discretionary access control.
QUESTION 24 – (Topic 1)
Which of the following protocols was used by the INITIAL version of the Terminal Access Controller Access Control System (TACACS) for communication between clients and servers?
Explanation: The original TACACS, developed in the early ARPANET days, had very limited functionality and used the UDP transport. In the early 1990s, the protocol was extended to include additional functionality and the transport changed to TCP.
TACACS is defined in RFC 1492 and uses (either TCP or UDP) port 49 by default. TACACS allows a client to accept a username and password and send a query to a TACACS authentication server, sometimes called a TACACS daemon or simply TACACSD. TACACSD uses TCP and usually runs on port 49. It determines whether to accept or deny the authentication request and sends a response back.
TACACS+ and RADIUS have generally replaced TACACS and XTACACS in more recently built or updated networks. TACACS+ is an entirely new protocol and is not compatible with TACACS or XTACACS. TACACS+ uses the Transmission Control Protocol (TCP) and RADIUS uses the User Datagram Protocol (UDP). Since TCP is a connection-oriented protocol, TACACS+ does not have to implement transmission control. RADIUS, however, does have to detect and correct transmission errors like packet loss, timeout, etc. since it rides on UDP which is connectionless.
RADIUS encrypts only the user’s password as it travels from the RADIUS client to the RADIUS server. All other information, such as the username and the authorization and accounting data, is transmitted in cleartext. Therefore it is vulnerable to different types of attacks. TACACS+ encrypts all the information mentioned above and therefore does not have the vulnerabilities present in the RADIUS protocol.
RADIUS and TACACS + are client/ server protocols, which means the server portion cannot send unsolicited commands to the client portion. The server portion can only speak when spoken to. Diameter is a peer-based protocol that allows either end to initiate communication. This functionality allows the Diameter server to send a message to the access server to request the user to provide another authentication credential if she is attempting to access a secure resource.
QUESTION 25 – (Topic 1)
RADIUS incorporates which of the following services?
A. Authentication server and PIN codes.
B. Authentication of clients and static passwords generation.
C. Authentication of clients and dynamic passwords generation.
D. Authentication server as well as support for Static and Dynamic passwords
Explanation: According to RFC 2865:
A Network Access Server (NAS) operates as a client of RADIUS. The client is responsible for passing user information to
designated RADIUS servers, and then acting on the response which is returned.
RADIUS servers are responsible for receiving user connection requests, authenticating the user, and then returning all configuration information necessary for the client to deliver service to the user.
RADIUS authentication is based on provisions of simple username/password credentials. These credentials are encrypted by the client using a shared secret between the client and the RADIUS server. OIG 2007, Page 513
RADIUS incorporates an authentication server and can make use of both dynamic and static passwords. Since it uses the PAP and CHAP protocols, it also supports static passwords.
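The "encryption" of the User-Password attribute that RFC 2865 describes is an MD5-based XOR scheme keyed by the shared secret and the per-request Request Authenticator. Below is a sketch of the hiding and un-hiding steps; the secret and password values are invented:

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 section 5.2: pad the password with NULs to a multiple of
    16 bytes, then XOR each 16-byte block with MD5(secret + previous
    block), seeding the chain with the 16-byte Request Authenticator."""
    padded = password + b"\x00" * (-len(password) % 16)
    hidden, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        pad = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ b for p, b in zip(padded[i:i + 16], pad))
        hidden += block
        prev = block  # the next block is chained on this ciphertext
    return hidden

def unhide_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Reverse of hide_password, as performed by the server."""
    clear, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        pad = hashlib.md5(secret + prev).digest()
        clear += bytes(c ^ b for c, b in zip(hidden[i:i + 16], pad))
        prev = hidden[i:i + 16]
    return clear.rstrip(b"\x00")

secret = b"shared-secret"        # known only to client and server (invented)
authenticator = os.urandom(16)   # random per Access-Request
hidden = hide_password(b"s3cret-pw", secret, authenticator)
print(unhide_password(hidden, secret, authenticator))  # b's3cret-pw'
```

Note that only the password attribute gets this treatment, which is exactly why the rest of a RADIUS packet travels in cleartext.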
RADIUS is an Internet protocol. RADIUS carries authentication, authorization, and configuration information between a Network Access Server and a shared Authentication Server. RADIUS features and functions are described primarily in the IETF (Internet Engineering Task Force) document RFC 2138.
The term “RADIUS” is an acronym that stands for Remote Authentication Dial-In User Service.
The main advantage of using a RADIUS approach to authentication is that it can provide a stronger form of authentication. RADIUS is capable of using a strong, two-factor form of authentication, in which users need to possess both a user ID and hardware or software
token to gain access.
Token-based schemes use dynamic passwords. Every minute or so, the token generates a unique 4-, 6- or 8-digit access number that is synchronized with the security server. To gain entry into the system, the user must generate both this one- time number and provide his or her user ID and password.
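Token-based one-time codes of this kind are commonly generated with a counter-based construction such as HOTP (RFC 4226). RADIUS itself does not mandate any particular token algorithm, so the sketch below is illustrative only:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the 8-byte big-endian counter,
    dynamic truncation to 31 bits, then reduction modulo 10**digits."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First test vector from RFC 4226 Appendix D:
# secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

The token and the security server each hold the same secret and counter, so both can compute the same one-time value independently, which is what keeps them "synchronized."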
Although protocols such as RADIUS cannot protect against theft of an authenticated session via some realtime attacks, such as wiretapping, using unique, unpredictable authentication requests can protect against a wide range of active attacks.
RADIUS: Key Features and Benefits

Feature: RADIUS supports dynamic passwords and challenge/response passwords.
Benefit: Improved system security, because passwords are not static and it is much more difficult for a bogus host to spoof users into giving up their passwords or password-generation algorithms.

Feature: RADIUS allows the user to have a single user ID and password for all computers in a network.
Benefit: Improved usability, because the user has to remember only one login combination.

Feature: RADIUS is able to prevent RADIUS users from logging in via login (or FTP), require them to log in via login (or FTP), require them to log in to a specific network access server (NAS), and control access by time of day.
Benefit: Very granular control over the types of logins allowed, on a per-user basis.

Feature: The time-out interval for failing over from an unresponsive primary RADIUS server to a backup RADIUS server is site-configurable.
Benefit: Gives the system administrator more flexibility in managing which users can log in from which hosts or devices.
Leave a comment if you would like more practice questions and answers.