Premium Practice Questions
Question 1 of 20
During a routine audit of a federal network’s intrusion detection system, a security analyst identifies unauthorized SSH connections from a compromised workstation. The attacker is attempting to move laterally into a sensitive database segment. The analyst must act within the initial response window to mitigate the threat while adhering to federal incident handling guidelines. Which action should the analyst prioritize to contain the threat while preserving forensic evidence?
Correct: Isolating the segment via VLAN steering effectively contains the breach without destroying evidence. This approach aligns with NIST SP 800-61 Rev. 2. It prioritizes containment and evidence preservation for federal systems.
Incorrect: Performing a hard reboot is problematic because it wipes volatile memory. This memory often contains critical forensic artifacts like encryption keys. Implementing a global SSH block is an overbroad response. It could disrupt legitimate administrative functions across the agency. Choosing to delay containment for observation is highly dangerous. This risks the exfiltration of classified data in a national security context.
Takeaway: Effective incident response requires balancing immediate threat containment with the preservation of volatile evidence for forensic investigation.
Question 2 of 20
A Cybersecurity Analyst is supporting a United States federal agency during the deployment of a new cloud-based intelligence sharing platform. The system has been categorized as High-Impact under FIPS 199, and the security control baseline from NIST SP 800-53 has been implemented. Before the Authorizing Official (AO) can grant an Authority to Operate (ATO), which action must be completed to ensure compliance with the Risk Management Framework (RMF)?
Correct: Under the NIST Risk Management Framework (RMF) used by United States federal agencies, the Assess step is mandatory. It provides the evidence needed for the Authorizing Official to make a risk-based decision. This involves verifying that the controls selected and implemented are actually effective and operating as intended. This step ensures that the security posture of the system is documented and validated before any data is processed.
Incorrect: Moving directly to authorization without a formal assessment fails to provide the necessary evidence of control effectiveness required by FISMA. The strategy of re-categorizing a system based on budgetary constraints rather than the actual impact on the mission violates federal standards for information security. Relying solely on continuous monitoring before an initial baseline assessment is established skips the critical verification step needed for the initial Authority to Operate.
Takeaway: A formal assessment of security controls is a prerequisite for authorization to ensure that risks are documented and understood by the official.
Question 3 of 20
A network security engineer is tasked with hardening a sensitive United States government data center network. The objective is to minimize the risk of advanced persistent threats (APTs) while ensuring that internal lateral movement is detected and blocked. Which of the following deployment strategies offers the most comprehensive protection according to defense-in-depth principles?
Correct: Deploying a perimeter NGFW combined with an internal inline IPS ensures that threats are filtered at the entry point and that any traffic bypassing the edge or originating internally is actively mitigated. This multi-layered approach aligns with federal security standards for protecting information systems by providing both boundary protection and internal active monitoring. The inline IPS is specifically critical for blocking lateral movement, which is a hallmark of advanced persistent threats.
Incorrect: The strategy of using passive detection and router ACLs fails to provide real-time prevention of active exploits, leaving the network vulnerable to rapid-fire attacks that require immediate mitigation. Choosing to consolidate all security functions into a single UTM appliance introduces a single point of failure and often results in performance bottlenecks that compromise the depth of packet inspection. Relying solely on Layer 2 security measures like VLANs and MAC filtering is insufficient because these methods do not inspect the payload of packets and can be easily bypassed by sophisticated network-layer attacks.
Takeaway: Robust defense-in-depth requires both perimeter filtering and internal active prevention to secure against external and internal threat vectors.
Question 4 of 20
A cybersecurity analyst at a United States federal agency discovers an unidentified executable on a restricted internal server. To determine the file’s intent, the analyst prepares a dynamic analysis session within a secure sandbox. The analyst suspects the malware may contain logic to detect virtualization or automated analysis environments. Which approach is most likely to yield an accurate assessment of the malware’s behavior?
Correct: Modern malware often employs environmental keying or anti-VM techniques to remain dormant when it detects a laboratory setting. By tailoring the sandbox to include specific artifacts like recent files, registry entries, and unique hardware strings, the analyst tricks the malware into believing it is on a high-value target system. This alignment with federal cybersecurity best practices for malware analysis ensures the full range of malicious capabilities is observed during the session.
Incorrect: Simply providing more processing power or memory does not address the logic-based checks malware uses to identify virtualized environments. Restricting all network access often prevents the malware from downloading secondary payloads or receiving instructions, which results in an incomplete behavioral profile. Relying on a generic, unmodified virtual machine is ineffective because these environments possess predictable signatures that advanced threats are specifically programmed to recognize and avoid.
Takeaway: Effective dynamic analysis requires masking the sandbox’s identity by simulating the specific environmental characteristics of a legitimate production system.
Question 5 of 20
A security analyst at a United States federal agency identifies a series of unauthorized access attempts to a restricted repository during non-standard hours. The system logs indicate that the credentials used belong to a senior developer who is currently on a documented leave of absence. Which strategy represents the most effective immediate response to mitigate the risk while preserving the integrity of a potential internal investigation?
Correct: Suspending the account provides immediate containment of the threat while User and Entity Behavior Analytics (UEBA) allows for the identification of patterns that distinguish between a compromised external actor and a malicious insider. This approach aligns with NIST SP 800-53 standards for incident response and account management, ensuring that forensic evidence is preserved for a formal investigation without alerting the perpetrator prematurely.
Incorrect: The strategy of issuing a department-wide memorandum is counterproductive as it risks tipping off a potential insider and compromising the confidentiality of the investigation. Relying solely on network segment isolation or honeypots may fail to address the root cause of the credential compromise and could disrupt legitimate services. Opting for the permanent deletion of the account and immediate restoration from backup is an overreaction that destroys critical forensic artifacts necessary for a thorough post-incident analysis.
Takeaway: Effective insider threat response requires balancing immediate containment with forensic preservation through behavioral analytics and controlled account management.
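One UEBA-style signal from this scenario can be sketched in a few lines: flag access events that fall outside a user's established working-hour baseline or that occur while the user is on documented leave. This is a simplified illustration, not a real UEBA product; the field names and baseline are hypothetical.

```python
# Toy sketch of a single UEBA-style rule: an access event is anomalous
# when the account holder is on documented leave, or when the event
# falls outside the user's baseline working hours. Real UEBA platforms
# model many such signals statistically; this shows only the idea.
def is_anomalous(event_hour: int, baseline_hours: range, on_leave: bool) -> bool:
    return on_leave or event_hour not in baseline_hours

# A 3 a.m. repository access by a developer on leave of absence trips
# both conditions; a 10 a.m. access by an active user trips neither.
print(is_anomalous(3, range(8, 18), on_leave=True))
print(is_anomalous(10, range(8, 18), on_leave=False))
```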
Question 6 of 20
During a security review of a United States federal agency network, a cybersecurity analyst identifies a series of suspicious TCP packets targeting a sensitive internal server. The packet headers reveal that both the SYN and FIN flags are set simultaneously. The analyst must determine the nature of this traffic to update the intrusion detection system rules. Based on standard TCP/IP protocol behavior and common exploitation techniques, what does this specific flag combination most likely indicate?
Correct: The TCP protocol specification defines SYN for initiating connections and FIN for terminating them, making their simultaneous use an illegal state. Attackers utilize these malformed packets in stealth scans because some legacy packet filters or poorly configured firewalls only inspect for the SYN flag to identify new connections. By including the FIN flag, the attacker may bypass simple rules that do not explicitly block invalid flag combinations, allowing them to probe ports behind the filter.
Incorrect: The strategy of associating this behavior with TCP Fast Open is incorrect because that extension focuses on including data within the initial handshake rather than using conflicting flags. Attributing the traffic to routine load balancer probes is inaccurate as professional networking equipment follows RFC standards and would not generate malformed headers for health checks. Focusing on session resumption is also a misunderstanding of the protocol because legitimate restarts involve a new SYN packet after a previous connection has fully closed, never a hybrid packet.
Takeaway: Identifying illegal TCP flag combinations like SYN-FIN is essential for detecting stealthy reconnaissance and potential firewall evasion attempts.
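The detection logic an IDS rule applies here can be sketched as a simple bitmask check on the TCP flags byte. The bit positions follow the TCP specification; the function name is illustrative, not from any particular IDS product.

```python
# Minimal sketch of a SYN-FIN detection check. TCP flag bit positions
# follow RFC 793/9293: FIN is bit 0, SYN is bit 1 of the flags byte.
TCP_FIN = 0x01
TCP_SYN = 0x02

def is_illegal_syn_fin(flags: int) -> bool:
    """Return True when SYN and FIN are set simultaneously (an illegal state)."""
    return (flags & TCP_SYN) != 0 and (flags & TCP_FIN) != 0

# A normal connection request sets only SYN (0x02); a SYN-FIN scan
# packet carries 0x03, which no RFC-compliant stack ever emits.
print(is_illegal_syn_fin(0x02))  # plain SYN
print(is_illegal_syn_fin(0x03))  # SYN+FIN
```

A robust rule set blocks on invalid flag combinations explicitly rather than matching only on SYN, which is exactly the gap the stealth scan exploits.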
Question 7 of 20
A lead developer at a United States federal agency is overseeing the deployment of a new internal portal that processes sensitive intelligence data. During a security audit, a vulnerability is identified where user-supplied input is directly concatenated into a database query string. To align with NIST Special Publication 800-53 and secure coding best practices, which remediation strategy should the developer prioritize to mitigate this risk?
Correct: Implementing parameterized queries with prepared statements is the most effective defense against SQL injection. This technique ensures that the database engine treats user-supplied input strictly as data and never as executable code, regardless of the characters provided. This approach directly addresses the root cause of the vulnerability and aligns with NIST guidelines for secure software development and input validation.
Incorrect: Relying solely on a Web Application Firewall provides a perimeter defense but fails to address the underlying vulnerability within the application code itself, leaving the system vulnerable to internal threats or WAF bypasses. Simply conducting client-side validation is insufficient because attackers can easily bypass browser-based controls using proxy tools or direct API calls to the server. The strategy of using a blacklist for SQL keywords is inherently flawed as it can be circumvented through encoding tricks, case variations, or alternative syntax that the filter does not recognize.
Takeaway: Secure coding requires server-side parameterization to prevent injection attacks by strictly separating executable code from user-supplied data.
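The difference between concatenation and parameterization can be shown in a few lines. This sketch uses Python's sqlite3 module purely for illustration; the table, column, and payload are hypothetical.

```python
import sqlite3

# Sketch: vulnerable string concatenation vs. a parameterized query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, clearance TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "alice' OR '1'='1"

# Vulnerable: the quote in user_input rewrites the query's logic,
# so the WHERE clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the ? placeholder binds the input as a parameter, so the
# database treats it strictly as data and the injection finds no match.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- injection succeeded
print(safe)        # []           -- input treated as literal data
```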
Question 8 of 20
During a security assessment of a web-based portal used for processing sensitive information at a federal agency in the United States, a cybersecurity analyst identifies a vulnerability in the search functionality. The application directly incorporates user-provided strings into database queries without sufficient processing. Which approach provides the most robust defense against injection attacks while maintaining data integrity?
Correct: Using parameterized queries is the primary defense against SQL injection as it ensures the database treats input strictly as data rather than executable commands. Combining this with allow-list validation ensures that only data conforming to expected formats is processed, aligning with NIST SP 800-53 security controls for information integrity and system protection.
Incorrect: Relying solely on a black-list approach is prone to failure because attackers frequently find ways to bypass filters using different character encodings or unexpected syntax. Simply conducting validation on the client side is ineffective because an attacker can bypass the browser entirely to send malicious payloads directly to the server. The strategy of encoding all input into Base64 does not solve the fundamental issue of how the application interprets data and can complicate data retrieval and indexing without providing true security against logic-based attacks.
Takeaway: Robust input security requires server-side parameterized queries and strict allow-list validation to prevent injection and ensure data integrity.
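The layered defense the takeaway describes can be sketched as an allow-list check in front of a parameterized query. The regex, table, and function names here are illustrative assumptions, not a reference implementation.

```python
import re
import sqlite3

# Allow-list: only letters, digits, spaces, and hyphens, up to 64 chars.
SEARCH_TERM = re.compile(r"^[A-Za-z0-9 \-]{1,64}$")

def search_documents(conn, term: str):
    # Layer 1: reject anything outside the expected format server-side.
    if not SEARCH_TERM.fullmatch(term):
        raise ValueError("search term rejected by allow-list")
    # Layer 2: even validated input is bound as a parameter, never concatenated.
    return conn.execute(
        "SELECT title FROM documents WHERE title LIKE ?", (f"%{term}%",)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (title TEXT)")
conn.execute("INSERT INTO documents VALUES ('budget-2024')")

print(search_documents(conn, "budget"))       # conforming input is processed
# search_documents(conn, "'; DROP TABLE --")  # raises ValueError at layer 1
```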
Question 9 of 20
A development team at a United States federal agency is designing a cloud-based system to handle sensitive mission data. During the initial design phase, the lead security engineer mandates a formal threat modeling session to identify architectural weaknesses. Which methodology and application would most effectively ensure that the team identifies risks such as unauthorized data modification and service unavailability?
Correct: The STRIDE methodology is a comprehensive approach for identifying threats across different categories, including Tampering (unauthorized modification) and Denial of Service (service unavailability). By applying this during the design phase, the agency ensures that security controls are integrated into the architecture rather than added as an afterthought, aligning with NIST SP 800-160 guidelines for systems security engineering.
Incorrect: Focusing only on historical logs from legacy systems is ineffective because cloud-native architectures introduce new attack vectors that legacy data cannot predict. The strategy of relying on edge identity verification ignores the necessity of analyzing how data moves within the application, leaving internal logic flaws unaddressed. Choosing to wait until post-deployment for vulnerability scanning is a reactive approach that increases the cost and complexity of fixing fundamental design flaws that should have been identified during the initial development cycle.
Takeaway: Proactive threat modeling using the STRIDE framework identifies architectural vulnerabilities early, ensuring robust security integration within the software development lifecycle.
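A design-phase STRIDE session can be captured as simply as a table mapping each identified risk to its category. The six category names come from the STRIDE model itself; the components and risk entries below are hypothetical examples matching this scenario.

```python
# Sketch of a STRIDE worksheet. The two risks from the question --
# unauthorized data modification and service unavailability -- map to
# Tampering and Denial of Service respectively.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

design_risks = [
    ("mission-data store", "T", "unauthorized record modification"),
    ("ingest API", "D", "request flooding makes the service unavailable"),
]

for component, letter, risk in design_risks:
    print(f"{component}: {STRIDE[letter]} -> {risk}")
```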
Question 10 of 20
A security architect at a United States federal agency is evaluating methods to reduce false positives in the vulnerability management lifecycle for a new cloud-based Java application. The agency requires a solution that integrates into the CI/CD pipeline and provides deep visibility into the execution path of custom code. Which implementation strategy best utilizes Interactive Application Security Testing (IAST) to achieve this goal?
Correct: IAST is distinguished by its use of software instrumentation to monitor an application’s inner workings during execution. By placing an agent inside the runtime environment, it can observe how data moves through the code in real-time. This allows the tool to correlate web requests with backend code execution, confirming whether a vulnerability is actually reachable and exploitable. This internal visibility significantly reduces false positives compared to external scanning methods and provides developers with the exact line of code requiring remediation.
Incorrect: Analyzing code in a non-running state through static analysis fails to account for environmental configurations and dynamic data handling, often leading to a high volume of false positives. Relying on external black-box testing prevents the security team from seeing the internal state of the application or the specific code path triggered by a request, making it difficult to pinpoint the root cause of a vulnerability. Choosing manual testing for every update is not scalable within a modern CI/CD pipeline and does not provide the continuous, automated feedback loop that an instrumented runtime solution offers.
Takeaway: IAST improves detection accuracy by using runtime instrumentation to analyze code execution and data flow from within the application.
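The instrumentation idea behind IAST can be illustrated with a toy agent that wraps application functions so it can see, at runtime, which code path a request actually reached and with what data. Real IAST agents instrument bytecode rather than using decorators; the function names here are hypothetical.

```python
import functools

# Toy sketch of runtime instrumentation: the "agent" records every call
# to an instrumented function, so a web request can be correlated with
# the exact backend code it executed.
execution_trace = []

def instrumented(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        execution_trace.append((fn.__name__, args))  # record reached code and its input
        return fn(*args, **kwargs)
    return wrapper

@instrumented
def build_query(term):
    # A potential injection sink; the trace proves whether user data reaches it.
    return f"SELECT * FROM docs WHERE title LIKE '%{term}%'"

build_query("budget")
print(execution_trace)
```

Because the agent sees the request reach `build_query` with attacker-controllable data, it can confirm the vulnerability is actually exploitable instead of merely reporting a pattern match, which is the source of the false-positive reduction.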
Question 11 of 20
A defense contractor based in Virginia is refining its cybersecurity incident response playbooks following a simulated ransomware attack. The Chief Information Security Officer (CISO) notes that while technical steps for malware eradication are well-documented, the transition from detection to external notification remains ambiguous. To align with federal guidelines and Cybersecurity and Infrastructure Security Agency (CISA) recommendations, the team must ensure the playbook addresses specific legal and operational obligations. Which element should be prioritized in the playbook to facilitate an effective transition between the containment and recovery phases while meeting federal reporting expectations?
Correct: This approach ensures that the organization meets its regulatory obligations under United States federal frameworks while maintaining the integrity of evidence for potential criminal investigations. By establishing clear triggers for CISA notification and following federal evidence standards, the organization facilitates a coordinated national response to threats while protecting its own legal standing.
Incorrect: Simply conducting a total system wipe for every incident is often an overreaction that destroys evidence and causes unnecessary downtime. The strategy of blocking all foreign IP addresses is often impractical for global business operations and does not address threats originating from domestic infrastructure. Relying on legal counsel to perform technical forensic tasks is inappropriate as they typically lack the specialized technical training required for proper data preservation and chain of custody.
Takeaway: Incident response playbooks must balance technical containment with legal preservation and mandatory federal reporting requirements to be effective.
Question 12 of 20
A network security engineer at a United States federal facility is tasked with hardening the internal infrastructure following a security assessment. The audit revealed that the current flat network architecture allows a compromised workstation in the public-facing lobby to potentially access sensitive servers in the restricted data zone. To align with the Department of Defense Zero Trust Strategy, which implementation strategy provides the most effective defense against lateral movement?
Correct
Correct: Micro-segmentation is a fundamental pillar of Zero Trust architecture as defined by NIST SP 800-207. By breaking the network into small, isolated sections down to the workload level, it prevents an attacker from moving laterally even if they gain an initial foothold. This approach ensures that security policies are tied directly to the identity and function of the asset rather than its physical location, effectively neutralizing the risks associated with a flat network.
Incorrect: Relying on a centralized legacy firewall at the edge is ineffective against lateral movement because it primarily monitors traffic crossing the network boundary rather than internal east-west traffic. The strategy of using broad department-level VLANs provides some separation but is often too coarse, allowing an attacker who compromises one device to access everything else within that large segment. Opting for an air-gapped backup system is a critical recovery measure but does not proactively prevent or contain the lateral movement of an active threat within the primary network.
Takeaway: Micro-segmentation minimizes the blast radius of a breach by enforcing granular, identity-based access controls between individual network workloads.
Incorrect
Correct: Micro-segmentation is a fundamental pillar of Zero Trust architecture as defined by NIST SP 800-207. By breaking the network into small, isolated sections down to the workload level, it prevents an attacker from moving laterally even if they gain an initial foothold. This approach ensures that security policies are tied directly to the identity and function of the asset rather than its physical location, effectively neutralizing the risks associated with a flat network.
Incorrect: Relying on a centralized legacy firewall at the edge is ineffective against lateral movement because it primarily monitors traffic crossing the network boundary rather than internal east-west traffic. The strategy of using broad department-level VLANs provides some separation but is often too coarse, allowing an attacker who compromises one device to access everything else within that large segment. Opting for an air-gapped backup system is a critical recovery measure but does not proactively prevent or contain the lateral movement of an active threat within the primary network.
Takeaway: Micro-segmentation minimizes the blast radius of a breach by enforcing granular, identity-based access controls between individual network workloads.
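The contrast between coarse VLANs and workload-level policy can be seen in a minimal default-deny sketch. The roles and workload names below are hypothetical examples.

```python
# Minimal micro-segmentation sketch: policy keys are (source role,
# destination workload) pairs, and anything unlisted is denied.
POLICY = {
    ("hr_app", "hr_database"): True,        # explicit business need
    ("lobby_kiosk", "hr_database"): False,  # explicitly blocked
}

def is_allowed(source_role, destination, policy=POLICY):
    """Default-deny: only an explicit rule permits traffic."""
    return policy.get((source_role, destination), False)
```

Because the lookup defaults to `False`, a compromised lobby workstation gains nothing from sharing a network segment with the database; access is tied to the asset's identity and function rather than its location.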
-
Question 13 of 20
13. Question
A cybersecurity analyst at a United States federal agency is tasked with performing a vulnerability assessment on a network segment containing legacy systems. To identify active services without overwhelming the fragile network stack of older devices, the analyst must select an appropriate Nmap configuration. Which approach balances the need for accurate enumeration with the requirement to maintain system stability during the 48-hour audit window?
Correct
Correct: The TCP SYN scan is a stealthy, half-open scanning technique that does not complete the full three-way handshake, making it less resource-intensive for the target. By applying a polite timing template, the analyst ensures that the frequency of probes remains low, which prevents the exhaustion of connection tables or CPU cycles on legacy hardware.
Incorrect: Relying on aggressive scanning modes triggers multiple resource-heavy operations like script scanning and OS fingerprinting that can crash sensitive services. Simply conducting a full TCP Connect scan forces the target system to allocate resources for every connection attempt, increasing the risk of a denial-of-service condition. Choosing to scan all UDP ports is inefficient and generates high volumes of ICMP traffic that can saturate low-bandwidth network segments.
Takeaway: Prioritize half-open scans and throttled timing templates when enumerating sensitive or legacy network environments to ensure operational continuity.
Incorrect
Correct: The TCP SYN scan is a stealthy, half-open scanning technique that does not complete the full three-way handshake, making it less resource-intensive for the target. By applying a polite timing template, the analyst ensures that the frequency of probes remains low, which prevents the exhaustion of connection tables or CPU cycles on legacy hardware.
Incorrect: Relying on aggressive scanning modes triggers multiple resource-heavy operations like script scanning and OS fingerprinting that can crash sensitive services. Simply conducting a full TCP Connect scan forces the target system to allocate resources for every connection attempt, increasing the risk of a denial-of-service condition. Choosing to scan all UDP ports is inefficient and generates high volumes of ICMP traffic that can saturate low-bandwidth network segments.
Takeaway: Prioritize half-open scans and throttled timing templates when enumerating sensitive or legacy network environments to ensure operational continuity.
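The configuration described above maps to Nmap's `-sS` (half-open SYN scan) and `-T2` (polite timing template) flags. A small sketch of assembling that command; the target range and port count are placeholders, not recommendations.

```python
import shlex

# Half-open SYN scan with the polite timing template, as described above.
# 192.0.2.0/24 is a documentation range; substitute the authorized scope.
target = "192.0.2.0/24"
cmd = ["nmap", "-sS", "-T2", "--top-ports", "100", target]
print(shlex.join(cmd))
```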
-
Question 14 of 20
14. Question
A security lead at a United States federal facility is designing a penetration test for a sensitive internal database. The goal is to evaluate the potential impact of a compromised staff account. The testers will be provided with standard user-level permissions but will not have access to the database schema or the underlying operating system configuration. Which penetration testing methodology is most appropriate for this scenario?
Correct
Correct: Gray box testing is the most suitable approach because it provides the tester with partial information or limited access, such as user credentials, to simulate an attacker who has already bypassed perimeter defenses. This aligns with the objective of assessing the risk posed by a compromised internal account while maintaining a realistic level of ignorance regarding the system’s internal structure.
Incorrect: Relying solely on a zero-knowledge approach would require the testers to first spend significant time attempting to breach the perimeter, which does not directly address the goal of testing internal privilege escalation. Simply conducting a white box test with complete access to the database schema would produce a thorough audit but would give the testers far more knowledge than a realistic adversary would possess. Choosing to perform a passive scan for known weaknesses focuses on identifying flaws rather than actively exploiting them to understand the impact of a compromised account.
Takeaway: Gray box testing effectively simulates an authenticated attacker by providing limited internal access without revealing the full system architecture.
Incorrect
Correct: Gray box testing is the most suitable approach because it provides the tester with partial information or limited access, such as user credentials, to simulate an attacker who has already bypassed perimeter defenses. This aligns with the objective of assessing the risk posed by a compromised internal account while maintaining a realistic level of ignorance regarding the system’s internal structure.
Incorrect: Relying solely on a zero-knowledge approach would require the testers to first spend significant time attempting to breach the perimeter, which does not directly address the goal of testing internal privilege escalation. Simply conducting a white box test with complete access to the database schema would produce a thorough audit but would give the testers far more knowledge than a realistic adversary would possess. Choosing to perform a passive scan for known weaknesses focuses on identifying flaws rather than actively exploiting them to understand the impact of a compromised account.
Takeaway: Gray box testing effectively simulates an authenticated attacker by providing limited internal access without revealing the full system architecture.
-
Question 15 of 20
15. Question
During a security review of a United States federal agency’s network infrastructure, an analyst identifies a series of low-bandwidth, encrypted outbound connections originating from a sensitive database server. These connections occur at randomized intervals over a period of three months, consistently bypassing the agency’s automated threshold-based alerting systems. The traffic is directed to an external IP address that has no prior history of interaction with the agency. Which tactic commonly associated with Advanced Persistent Threats (APTs) is most likely being employed in this scenario?
Correct
Correct: The scenario describes beaconing, a technique where compromised systems periodically communicate with an external command-and-control (C2) server to receive instructions or signal they are still active. By incorporating ‘jitter’—the randomization of the time interval between check-ins—the threat actor ensures the traffic does not create a predictable pattern. This allows the APT to maintain a persistent presence within the United States federal network while remaining below the detection thresholds of traditional security monitoring tools that look for rhythmic or high-frequency anomalies.
Incorrect: Focusing only on high-volume data exfiltration bursts is incorrect because the scenario specifies low-bandwidth, randomized traffic, whereas exfiltration bursts are typically characterized by large spikes in data transfer. The strategy of using self-replicating worms for lateral movement does not fit here, as the scenario focuses on outbound communication to an external server rather than internal propagation between hosts. Opting for a logic bomb explanation is also misplaced, as logic bombs are dormant code triggered by specific conditions to cause damage, rather than a method for ongoing external communication and persistence.
Takeaway: APTs use randomized beaconing intervals to maintain persistent command-and-control links while evading detection by signature-based and threshold-based security systems.
Incorrect
Correct: The scenario describes beaconing, a technique where compromised systems periodically communicate with an external command-and-control (C2) server to receive instructions or signal they are still active. By incorporating ‘jitter’—the randomization of the time interval between check-ins—the threat actor ensures the traffic does not create a predictable pattern. This allows the APT to maintain a persistent presence within the United States federal network while remaining below the detection thresholds of traditional security monitoring tools that look for rhythmic or high-frequency anomalies.
Incorrect: Focusing only on high-volume data exfiltration bursts is incorrect because the scenario specifies low-bandwidth, randomized traffic, whereas exfiltration bursts are typically characterized by large spikes in data transfer. The strategy of using self-replicating worms for lateral movement does not fit here, as the scenario focuses on outbound communication to an external server rather than internal propagation between hosts. Opting for a logic bomb explanation is also misplaced, as logic bombs are dormant code triggered by specific conditions to cause damage, rather than a method for ongoing external communication and persistence.
Takeaway: APTs use randomized beaconing intervals to maintain persistent command-and-control links while evading detection by signature-based and threshold-based security systems.
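The effect of jitter on rhythm-based detection can be illustrated numerically. Assuming a 60-second base interval with up to 30 seconds of randomization either way:

```python
import random
import statistics

random.seed(7)  # deterministic for illustration
base = 60.0
fixed = [base] * 20                                   # rigid beacon
jittered = [base + random.uniform(-30, 30) for _ in range(20)]

# A rigid beacon has zero spread in its check-in intervals, so a
# periodicity detector flags it easily; jitter destroys the stable
# period the detector is looking for.
print(statistics.pstdev(fixed))
print(statistics.pstdev(jittered))
```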
-
Question 16 of 20
16. Question
During a high-priority investigation into unauthorized data exfiltration from a federal network, a digital forensics specialist is tasked with capturing live network traffic. To ensure the evidence is admissible in a United States federal court and maintains its integrity, which action is most critical when initiating the packet capture process?
Correct
Correct: In the United States federal legal system, digital evidence must be authenticated to be admissible. Documenting the system time relative to a reliable external clock and maintaining a strict chain of custody ensures the evidence’s integrity and chronological accuracy, which is vital for correlating events across different logs and systems.
Incorrect: The strategy of compressing traffic logs before transfer risks data corruption or loss of metadata that could be vital for deep packet analysis. Choosing to modify firewall rules during an active investigation can alter the state of the system and potentially alert an adversary, which violates the principle of minimizing changes to the environment. Opting for a full antivirus scan before capturing traffic can overwrite volatile data in memory and change file timestamps, effectively destroying the original state of the evidence source.
Takeaway: Maintaining evidence integrity through time synchronization and chain of custody is essential for legal admissibility in federal investigations.
Incorrect
Correct: In the United States federal legal system, digital evidence must be authenticated to be admissible. Documenting the system time relative to a reliable external clock and maintaining a strict chain of custody ensures the evidence’s integrity and chronological accuracy, which is vital for correlating events across different logs and systems.
Incorrect: The strategy of compressing traffic logs before transfer risks data corruption or loss of metadata that could be vital for deep packet analysis. Choosing to modify firewall rules during an active investigation can alter the state of the system and potentially alert an adversary, which violates the principle of minimizing changes to the environment. Opting for a full antivirus scan before capturing traffic can overwrite volatile data in memory and change file timestamps, effectively destroying the original state of the evidence source.
Takeaway: Maintaining evidence integrity through time synchronization and chain of custody is essential for legal admissibility in federal investigations.
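A minimal sketch of the documentation step: hash the capture and record a UTC timestamp for the custody log. The file contents and label here are placeholders.

```python
import hashlib
from datetime import datetime, timezone

def custody_record(data: bytes, label: str) -> dict:
    """Hash the evidence and note when it entered custody (UTC)."""
    return {
        "item": label,
        "sha256": hashlib.sha256(data).hexdigest(),
        "captured_utc": datetime.now(timezone.utc).isoformat(),
    }

rec = custody_record(b"\xd4\xc3\xb2\xa1", "workstation-capture.pcap")
print(rec["sha256"])
```

Re-hashing the file later and comparing digests lets investigators demonstrate the evidence was not altered after capture.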
-
Question 17 of 20
17. Question
A federal contractor is modernizing a legacy data-at-rest encryption module that previously relied on the Data Encryption Standard (DES). The new system must adhere to National Institute of Standards and Technology (NIST) requirements for protecting sensitive information. When evaluating the transition to the Advanced Encryption Standard (AES), which technical factor most significantly justifies the replacement of DES in high-security environments?
Correct
Correct: AES is the current federal standard (FIPS 197) because it offers robust security through key lengths of 128, 192, or 256 bits. Unlike the Feistel network used in DES, AES employs a substitution-permutation network. This design, combined with larger key sizes, makes it computationally infeasible to crack using modern hardware, whereas the 56-bit key of DES is considered obsolete and vulnerable.
Incorrect: The strategy of claiming AES uses a Feistel network is incorrect because AES is built on a substitution-permutation network. Focusing on 64-bit blocks is a mistake since AES uses 128-bit blocks while DES uses the smaller 64-bit blocks. Relying on the idea that AES includes public-key exchange is a fundamental misunderstanding of symmetric versus asymmetric cryptography. Opting for the view that users can modify encryption rounds during a handshake misrepresents the fixed-round architecture defined in the NIST standard.
Takeaway: AES replaced DES as the US federal standard due to its superior substitution-permutation architecture and significantly longer, more secure key lengths.
Incorrect
Correct: AES is the current federal standard (FIPS 197) because it offers robust security through key lengths of 128, 192, or 256 bits. Unlike the Feistel network used in DES, AES employs a substitution-permutation network. This design, combined with larger key sizes, makes it computationally infeasible to crack using modern hardware, whereas the 56-bit key of DES is considered obsolete and vulnerable.
Incorrect: The strategy of claiming AES uses a Feistel network is incorrect because AES is built on a substitution-permutation network. Focusing on 64-bit blocks is a mistake since AES uses 128-bit blocks while DES uses the smaller 64-bit blocks. Relying on the idea that AES includes public-key exchange is a fundamental misunderstanding of symmetric versus asymmetric cryptography. Opting for the view that users can modify encryption rounds during a handshake misrepresents the fixed-round architecture defined in the NIST standard.
Takeaway: AES replaced DES as the US federal standard due to its superior substitution-permutation architecture and significantly longer, more secure key lengths.
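The key-length gap can be made concrete with a quick calculation: even the smallest AES key space is 2^72 times larger than that of DES.

```python
# Worked comparison of the DES and AES-128 key spaces discussed above.
des_keys = 2 ** 56
aes128_keys = 2 ** 128

print(f"DES:     {des_keys:.3e} keys")
print(f"AES-128: {aes128_keys:.3e} keys")
print(f"ratio:   2**{(aes128_keys // des_keys).bit_length() - 1}")
```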
-
Question 18 of 20
18. Question
A United States federal contractor is tasked with migrating a sensitive data processing application to a cloud environment to meet modern scalability requirements. The contractor’s security team specifies that the agency must retain full administrative control over the operating system, middleware, and installed applications to comply with internal hardening standards. However, the agency prefers the cloud provider to manage the underlying physical servers, storage, and hypervisor layer. Which cloud service model best aligns with these specific management and security requirements?
Correct
Correct: Infrastructure as a Service (IaaS) provides the consumer with the highest level of control over the virtualized computing resources. In this model, the provider manages the physical infrastructure and virtualization layer, while the consumer is responsible for the operating system, middleware, and applications. This allows the agency to implement specific OS hardening and configuration standards required for national security environments.
Incorrect: Selecting Platform as a Service would be inappropriate because the provider typically manages the operating system and middleware, which would prevent the agency from maintaining the required administrative control over those layers. Opting for Software as a Service would shift nearly all responsibility to the provider, including the application itself, making it impossible to meet custom hardening requirements. The strategy of using Function as a Service is also incorrect as it abstracts the entire server environment to execute code snippets, offering no control over the underlying operating system or middleware.
Takeaway: IaaS is the optimal choice when an organization requires administrative control over the operating system and middleware while outsourcing physical infrastructure management.
Incorrect
Correct: Infrastructure as a Service (IaaS) provides the consumer with the highest level of control over the virtualized computing resources. In this model, the provider manages the physical infrastructure and virtualization layer, while the consumer is responsible for the operating system, middleware, and applications. This allows the agency to implement specific OS hardening and configuration standards required for national security environments.
Incorrect: Selecting Platform as a Service would be inappropriate because the provider typically manages the operating system and middleware, which would prevent the agency from maintaining the required administrative control over those layers. Opting for Software as a Service would shift nearly all responsibility to the provider, including the application itself, making it impossible to meet custom hardening requirements. The strategy of using Function as a Service is also incorrect as it abstracts the entire server environment to execute code snippets, offering no control over the underlying operating system or middleware.
Takeaway: IaaS is the optimal choice when an organization requires administrative control over the operating system and middleware while outsourcing physical infrastructure management.
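The management split that drives this choice can be expressed as a small responsibility table, following the common IaaS/PaaS/SaaS breakdown described above.

```python
# Layers the CUSTOMER manages under each service model (common breakdown).
CUSTOMER_MANAGED = {
    "IaaS": {"os", "middleware", "applications", "data"},
    "PaaS": {"applications", "data"},
    "SaaS": {"data"},
}

def meets_requirements(model, required):
    """True if the customer controls every layer in `required`."""
    return required <= CUSTOMER_MANAGED[model]

needed = {"os", "middleware", "applications"}
```

Only IaaS leaves the operating system and middleware in the agency's hands, which is exactly the hardening requirement in the scenario.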
-
Question 19 of 20
19. Question
A federal agency is modernizing its internal network to better protect sensitive intelligence data from lateral movement by potential intruders. The current infrastructure relies on a traditional perimeter-based defense with internal traffic largely unmonitored. To align with the federal mandate for Zero Trust Architecture, how should the network security team implement segmentation to ensure the highest level of protection for high-value assets?
Correct
Correct: In accordance with NIST SP 800-207 and federal cybersecurity mandates, implementing a Zero Trust Architecture requires moving beyond simple network-level isolation. Micro-segmentation allows for security policies to be applied to individual workloads or applications, significantly reducing the attack surface. By using Next-Generation Firewalls and identity-based proxies, the agency can verify every request based on user identity, device health, and context rather than just network location.
Incorrect: The strategy of relying on a single hardened demilitarized zone creates a significant risk if the perimeter or the VPN gateway is compromised. Simply using flat network topologies with basic Access Control Lists lacks the stateful inspection and granular visibility needed to stop sophisticated lateral movement. Choosing to focus on MAC-based port security is insufficient because MAC addresses are easily spoofed and this method does not provide any protection against threats already present on authorized devices.
Takeaway: Zero Trust requires granular micro-segmentation and identity-based access controls to prevent lateral movement and protect sensitive federal data assets.
Incorrect
Correct: In accordance with NIST SP 800-207 and federal cybersecurity mandates, implementing a Zero Trust Architecture requires moving beyond simple network-level isolation. Micro-segmentation allows for security policies to be applied to individual workloads or applications, significantly reducing the attack surface. By using Next-Generation Firewalls and identity-based proxies, the agency can verify every request based on user identity, device health, and context rather than just network location.
Incorrect: The strategy of relying on a single hardened demilitarized zone creates a significant risk if the perimeter or the VPN gateway is compromised. Simply using flat network topologies with basic Access Control Lists lacks the stateful inspection and granular visibility needed to stop sophisticated lateral movement. Choosing to focus on MAC-based port security is insufficient because MAC addresses are easily spoofed and this method does not provide any protection against threats already present on authorized devices.
Takeaway: Zero Trust requires granular micro-segmentation and identity-based access controls to prevent lateral movement and protect sensitive federal data assets.
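A per-request Zero Trust decision can be sketched as requiring every signal to pass; the signal names below are hypothetical.

```python
# Hypothetical per-request check: identity, device health, and context
# must ALL pass. Network location is deliberately not a signal.
def authorize(request):
    return all((
        request.get("identity_verified", False),
        request.get("device_healthy", False),
        request.get("context_ok", False),  # e.g. time of day, data sensitivity
    ))
```

Being on the internal network grants nothing here, which is the core break from the perimeter model described in the scenario.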
-
Question 20 of 20
20. Question
During a security review at a United States federal agency, an analyst discovers a persistent outbound connection to an unknown external server originating from a high-privilege workstation. The Incident Response Team (IRT) has successfully isolated the workstation from the network and verified that no other systems are currently communicating with the suspicious IP address. Given that the threat has been contained, which action should the IRT prioritize next to adhere to the NIST Incident Response Lifecycle?
Correct
Correct: According to the NIST SP 800-61 Rev. 2 framework, the Eradication phase follows Containment. This phase is essential for identifying the root cause of the breach, removing all traces of the attacker’s presence, and remediating the specific vulnerabilities that were exploited to prevent the incident from recurring.
Incorrect: The strategy of restoring the system from a backup immediately is premature because it may reintroduce the same vulnerability or even the malware if the backup was taken after the initial infection. Focusing only on the post-incident debriefing is a mistake at this stage as it belongs to the Lessons Learned phase, which occurs only after the system is fully recovered. Opting for firewall and IDS adjustments is a useful mitigation step but does not address the necessary cleanup of the compromised host itself.
Takeaway: The eradication phase must focus on removing the root cause and all malicious remnants before moving to system recovery.
Incorrect
Correct: According to the NIST SP 800-61 Rev. 2 framework, the Eradication phase follows Containment. This phase is essential for identifying the root cause of the breach, removing all traces of the attacker’s presence, and remediating the specific vulnerabilities that were exploited to prevent the incident from recurring.
Incorrect: The strategy of restoring the system from a backup immediately is premature because it may reintroduce the same vulnerability or even the malware if the backup was taken after the initial infection. Focusing only on the post-incident debriefing is a mistake at this stage as it belongs to the Lessons Learned phase, which occurs only after the system is fully recovered. Opting for firewall and IDS adjustments is a useful mitigation step but does not address the necessary cleanup of the compromised host itself.
Takeaway: The eradication phase must focus on removing the root cause and all malicious remnants before moving to system recovery.
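The ordering argument can be encoded directly. Note that NIST SP 800-61 groups containment, eradication, and recovery into a single phase; they are listed separately here only to show the internal sequence the explanation relies on.

```python
# Lifecycle sequence per NIST SP 800-61 Rev. 2, with the combined
# containment/eradication/recovery phase split to show its order.
PHASES = [
    "Preparation",
    "Detection and Analysis",
    "Containment",
    "Eradication",
    "Recovery",
    "Post-Incident Activity",
]

def next_phase(current):
    """Return the phase that follows the one just completed."""
    return PHASES[PHASES.index(current) + 1]

print(next_phase("Containment"))
```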