Broken access control is a vulnerability related to user authorization.
Access control enforces policy by preventing users from acting beyond their assigned permissions. Failures result in unauthorized information disclosure, modification or destruction of data, or performance of a business function outside the user's limits.
Scenario #1: The application uses unverified data in an SQL call that retrieves account information.
In this scenario, the attacker simply changes the browser's 'acct' parameter to send whatever account number they wish. If that value is not verified against the logged-in user, the attacker can access any user's account.
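A minimal sketch of the fix for this scenario, assuming a hypothetical `accounts(id, owner_id, balance)` table: the server-side query binds both the requested account ID and the logged-in user's ID, so a tampered 'acct' parameter cannot reach another user's data.

```python
import sqlite3

def get_account(db, session_user_id, requested_acct):
    """Return account data only if the account belongs to the logged-in user.

    The ownership check happens server-side; the 'acct' parameter coming
    from the browser is never trusted on its own. Schema is illustrative.
    """
    row = db.execute(
        "SELECT id, owner_id, balance FROM accounts WHERE id = ? AND owner_id = ?",
        (requested_acct, session_user_id),
    ).fetchone()
    if row is None:
        raise PermissionError("account not found or not owned by this user")
    return {"id": row[0], "owner_id": row[1], "balance": row[2]}
```

With this check in place, changing the parameter to another account number simply yields a permission error instead of another user's record.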
Scenario #2: An attacker simply forces the browser to target the admin URL by changing the API endpoint, even though admin rights are required to access the admin page.
The attacker can alter data and user records and gain access to other accounts. They can escalate privileges and act as an admin, or as a user without even being logged in.
Use a central, reusable application component to verify access control.
Drive all access control decisions from the user's server-side session, assuming the lowest privilege by default rather than trusting client-supplied data.
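The "central component" idea above can be sketched as a single permission check that every protected function goes through. The role/permission names and `requires` decorator here are hypothetical, but the pattern (one policy table, deny by default) is the point:

```python
import functools

# Central access-control policy: the mapping lives in one place, so every
# route makes its decision through the same code path. Roles/permissions
# below are illustrative.
ROLE_PERMISSIONS = {
    "admin": {"view_reports", "manage_users"},
    "user": {"view_reports"},
}

def check_permission(role, permission):
    # Deny by default: unknown roles get an empty permission set.
    return permission in ROLE_PERMISSIONS.get(role, set())

def requires(permission):
    """Decorator declaring the permission a handler needs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(session, *args, **kwargs):
            if not check_permission(session.get("role"), permission):
                raise PermissionError(f"{permission} denied")
            return func(session, *args, **kwargs)
        return wrapper
    return decorator

@requires("manage_users")
def delete_user(session, user_id):
    return f"deleted {user_id}"
```

Because every handler declares its required permission, forgetting a check becomes an obvious omission rather than a silent hole.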
Cryptographic failures are vulnerabilities related to failures in encrypting data in transit or at rest.
They happen when sensitive data is not stored or transmitted securely. This is not a single vulnerability, but a collection of vulnerabilities.
Scenario #1: On a server with automatic database encryption enabled, an application encrypts credit card numbers in the database.
However, this data is automatically decrypted when retrieved, so an SQL injection flaw can return credit card numbers in clear text.
Scenario #2: A website does not implement or enforce TLS on all pages, or supports only weak encryption.
An attacker observes network traffic (for example, on an unprotected wireless network), downgrades HTTPS connections to HTTP, intercepts requests, and steals the user's session cookie.
The attacker then replays this cookie and hijacks the user's authenticated session, gaining access to or altering the user's confidential data. Alternatively, they could modify data in transit, such as the recipient of a money transfer.
The attacker could steal sensitive data, such as credit card numbers, passwords, health records, personal information, and business secrets.
For a company, it can wreak havoc, since it violates data-compliance policies and security regulations.
Classify the data processed, stored, or transmitted by an application, and apply encryption protocols and standards according to that classification.
Don’t store sensitive data unnecessarily.
Encrypt all sensitive data at rest.
Ensure proper key management with the most updated and strong standard algorithms, protocols, and keys.
Encrypt data in transit with secure protocols such as TLS with forward-secrecy (FS) ciphers, cipher prioritization by the server, and secure parameters.
Enforce encryption using directives like HSTS.
Disable caching for the response, which contains sensitive data.
Apply required security controls as per the data classification.
Don’t use legacy protocols such as FTP and SMTP for transporting sensitive data.
Store passwords using strong, adaptive, salted hashing functions with a work (delay) factor.
Choose an initialization vector (IV) appropriate for the mode of operation.
Use authenticated encryption rather than simple encryption.
Generate keys with a cryptographically secure random source and store them in memory as byte arrays. If passwords are used, they must be converted to keys via an appropriate password-based key derivation function.
Ensure that cryptographic randomness is used where appropriate and that it has not been seeded predictably or with low entropy.
Avoid deprecated cryptographic functions and padding schemes.
Independently verify the effectiveness of configurations and settings.
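The password-storage item above (salted, adaptive hashing with a delay factor) can be sketched with PBKDF2 from the standard library; Argon2, scrypt, and bcrypt are equally valid choices. The iteration count here is illustrative and should be tuned to your hardware:

```python
import hashlib
import hmac
import os

# Work factor (delay factor): raise it over time as hardware gets faster.
ITERATIONS = 210_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed rainbow tables, and the high iteration count makes brute-forcing each individual hash expensive.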
Injection is a vulnerability related to failures in sanitizing an application's user input.
An application is vulnerable to attack where:
The application does not check, filter, or sanitize user-supplied data.
Dynamic queries or non-parameterized calls are used directly in the interpreter without context-aware escaping.
To extract additional, sensitive records, hostile data is employed within object-relational mapping (ORM) search criteria.
Hostile data is directly used or concatenated, so that the SQL or command contains both structure and hostile data in dynamic queries, commands, or stored procedures.
This occurs when user data is sent to an interpreter (such as a database engine, shell, or template engine) without being separated from the command structure, resulting in attacks such as:
Server-Side Template Injection
Content Injection (HTML injection, CSS injection, XSS)
Scenario #1: An application uses untrusted data in the construction of a vulnerable SQL call.
Scenario #2: An application’s blind trust in frameworks may result in queries that are still vulnerable (e.g., Hibernate Query Language (HQL)).
In both cases, the attacker changes the 'id' parameter value in their browser to send: ' or '1'='1.
http://example.com/app/accountView?id=' or '1'='1
This changes the semantics of both queries, causing them to return all records from the accounts table. More serious attacks could modify or delete data, or even invoke stored procedures.
The attacker could gain full or partial access to the vulnerable application or the underlying server, allowing them to control the responses generated by the application and possibly the server itself.
Source code review is the best method of detecting whether applications are vulnerable to injection. Automated testing of all parameters, headers, URLs, cookies, and JSON, SOAP, and XML data inputs should also be adopted.
Sanitize, filter, and escape (depending on the context) the user input before processing it
Encode (depending on the context) the output before sending it back
Enforce limits on the output being sent back; this prevents your application from returning millions of database records if SQLi/NoSQLi is present. Also, a timeout on the connection helps prevent persistent shell sessions.
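The parameterized-query defense for the accountView scenario above can be sketched with Python's `sqlite3` driver (the table name and columns are illustrative). The placeholder keeps the SQL structure and the data separate, so the `' or '1'='1` payload is treated as a literal value rather than as SQL:

```python
import sqlite3

def account_view(db, user_supplied_id):
    """Parameterized query: the driver binds user_supplied_id as data.

    Even if the client sends ' or '1'='1, it is compared as a literal
    value and matches nothing, instead of rewriting the WHERE clause.
    """
    return db.execute(
        "SELECT id, owner FROM accounts WHERE id = ?",
        (user_supplied_id,),
    ).fetchall()
```

A concatenated query (`"... WHERE id = '" + user_supplied_id + "'"`) would return every row for the same payload; the bound version returns none.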
Insecure design is a vulnerability related to design and architectural flaws, with a call for greater use of threat modeling, secure design patterns, and reference architectures.
The primary source of this vulnerability is missing or ineffective control design, often caused by the absence of a business risk profile for the software or system being produced, which in turn leads to a failure to determine the required level of security design.
Secure design is a continuous process of threat evaluation, ensuring that code is robustly designed and tested to resist known attack methods. Threat modeling should be integrated into refinement and redesign sessions to look for changes in data flows, access control, or other security controls.
Scenario #1: The e-commerce website of a large retailer is vulnerable to bots used by scalpers, who buy high-end video cards to resell on auction sites.
This generates negative publicity for video card manufacturers and retail chain owners, as well as long-term animosity among enthusiasts who cannot get these cards at any price.
Anti-bot design and domain logic rules, such as flagging purchases made within a few seconds of availability, could identify and reject inauthentic purchases.
Scenario #2: A cinema chain allows group booking discounts and has a maximum of fifteen attendees before requiring a deposit.
Attackers could threat-model this flow and test whether they could book six hundred seats across all cinemas at once in a few requests, causing a massive loss of income.
The attacker can access the sensitive data stored in the vulnerable system and server, or alter the function of the application.
Establish and use a secure development lifecycle with AppSec professionals to help evaluate and design security and privacy-related controls
Use threat modeling for critical authentication, access control, business logic, and key flows
Establish and use a library of secure design patterns or paved road ready-to-use components
Integrate security language and controls into user stories
Validate that all critical flows are resistant to the threat model, and compile use-cases and misuse-cases for each tier of your application.
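The cinema-booking scenario above boils down to a domain-logic rule that the design never encoded. A minimal sketch, with illustrative thresholds: group sizes over the deposit-free limit require a deposit, and absurd quantities are rejected outright, so an attacker cannot reserve hundreds of seats in a few requests.

```python
# Hypothetical business rules for Scenario #2 (values are illustrative).
MAX_GROUP_WITHOUT_DEPOSIT = 15
HARD_SEAT_LIMIT = 100  # no single booking may exceed this

def validate_booking(seats: int, deposit_paid: bool) -> str:
    """Return the outcome of a booking request under the domain rules."""
    if seats <= 0 or seats > HARD_SEAT_LIMIT:
        return "rejected"            # implausible request, deny outright
    if seats > MAX_GROUP_WITHOUT_DEPOSIT and not deposit_paid:
        return "deposit_required"    # large groups must commit money first
    return "accepted"
```

The point of secure design is that rules like these are identified during threat modeling, before the first line of booking code is written.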
Security misconfiguration is a vulnerability that occurs because of inappropriate security configuration or improperly configured permissions on cloud services.
The application might be vulnerable if:
Unnecessary features (ports, services, pages, accounts, or privileges) are enabled or installed
Unchanged default account credentials
The latest security features are disabled or not configured securely
Servers do not send security headers or directives, or they are not set to secure values
Outdated or vulnerable software
Scenario #1: Suppose an application server comes with sample applications that have not been removed from the production server, and those sample applications have known security flaws with which an attacker can compromise the server.
And if one of those applications is the admin console and the default accounts were not changed, the attacker can log in with the default credentials and take over the entire server in a short span of time.
Scenario #2: Suppose directory listing is left enabled on a server. An attacker can simply browse the listed directories and fetch the compiled Java classes stored there, then reverse engineer them to view the code and find access control flaws in the application.
Scenario #3: The application server’s configuration allows detailed error messages to be returned to users, which might expose sensitive information or underlying flaws such as component versions that are vulnerable.
An attacker can gain admin control over the server and manipulate its functionality.
Implement a secure installation hardening process, including:
A repeatable hardening process that makes it quick and easy to deploy another environment that is appropriately locked down. The development, QA, and production environments should all be configured identically, with different credentials for each. This process should be automated to reduce the time and effort required to set up a new secure environment.
A minimal platform without any unnecessary features, components, documentation, or samples. Remove or do not install unused features and frameworks.
A task to review and update the configurations appropriate to all security notes, updates, and patches as part of the patch management process. Also, review the cloud storage permissions.
Sending security directives to clients.
An automated process to verify the effectiveness of the configurations and settings in all environments.
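The "sending security directives to clients" item above can be sketched as a helper that merges recommended HTTP security headers into a response. The header values here are illustrative defaults, not a one-size-fits-all policy, and a real deployment would tune the CSP to the application:

```python
# Commonly recommended security headers (values are illustrative defaults).
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",  # HSTS
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "default-src 'self'",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Return a copy of the headers with security directives filled in."""
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)  # don't clobber explicit settings
    return merged
```

Applied as middleware, this turns the secure configuration into a default that individual pages cannot accidentally omit.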
Vulnerable and outdated components are an issue for which we struggle to test and assess risk.
The application is vulnerable if:
The versions of all components in use are unknown. This includes components you use directly as well as nested dependencies.
Outdated, unsupported, or vulnerable software is used. This includes the OS, servers (web/application), database management systems, APIs and all components, runtime environments, and libraries.
The compatibility of updated, upgraded, or patched libraries is not tested.
The configuration of the components is not secured.
Scenario #1: Components typically run with the same privileges as the application itself, so flaws in any component can result in serious impact. Such flaws can be accidental or intentional.
With automated tools widely available, an attacker can find unpatched or misconfigured systems, gain total control of them, and possibly rootkit the entire system.
A patch management process should be put in place to:
Remove unnecessary features, components, files, and documentation.
Remove unused dependencies.
Conduct dependency checks using tools such as OWASP Dependency-Check, and continuously monitor sources such as CVE and NVD for vulnerabilities in the components.
Only obtain components from official sources over secure links, and prefer signed packages to reduce the chance of including a modified, malicious component.
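The first step of the dependency-check item above is simply knowing what is installed. A minimal sketch using the standard library's `importlib.metadata` to build a component inventory; matching that inventory against CVE/NVD advisories is what tools like OWASP Dependency-Check or pip-audit then automate:

```python
from importlib import metadata

def installed_components():
    """Return a sorted inventory of (package name, version) pairs.

    This is the raw input for a vulnerability scan: each entry can be
    checked against advisory databases such as CVE/NVD.
    """
    return sorted(
        (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    )
```

Running this in CI and diffing the output between builds also catches unexpected, unvetted dependencies slipping in.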
Identification and authentication failures are vulnerabilities due to failures in confirming the user's identity, authentication, and session management.
There may be authentication weaknesses if the application:
Permits automated attacks such as credential stuffing, where the attacker has a list of valid usernames and passwords
Permits brute force or other automated attacks
Permits default, weak, or well-known passwords
Uses weak or ineffective credential recovery processes, such as “knowledge-based answers”, which cannot be made safe.
Stores passwords in plain text or with weak hashing methods.
Exposes session identifiers in the URL.
Reuses session identifiers after successful login.
Does not properly invalidate session IDs.
Does not properly invalidate user sessions or authentication tokens during logout or after a period of inactivity.
Scenario #1: Credential stuffing, the use of lists of known passwords, is a common attack. Suppose an application does not implement automated threat or credential stuffing protection. In that case, the application can be used as a password oracle to determine if the credentials are valid.
Scenario #2: Most authentication attacks occur due to the continued use of passwords as a sole factor. Once considered best practices, password rotation and complexity requirements actually encourage users to choose and reuse weak passwords. Organizations are recommended to stop these practices per NIST 800-63 and use multi-factor authentication.
Scenario #3: Incorrect application session timeouts. A user accesses an application from a public computer and leaves without logging out. An attacker can later use the same public computer and access the account, since the user's session is still authenticated.
An attacker could exploit such authentication failures to access user accounts.
Implement multi-factor authentication to prevent credential stuffing, brute force, and stolen credential reuse attacks.
Don’t deploy with any default credentials, particularly for admin users.
Implement weak-password checks and warnings.
Align password policies with NIST 800-63b’s guidelines in section 5.1.1 for memorized secrets or other modern, evidence-based password policies.
Adopt methods to harden against account enumeration attacks.
Limit or increasingly delay failed login attempts, taking care not to create a denial-of-service (DoS) scenario.
Log all failures and alert administrators when breach attempts occur.
Use a server-side, secure, built-in session manager that generates a new high-entropy random session ID after login.
Session identifiers should not be exposed in the URL; they should be stored securely and invalidated after logout, idle timeouts, and absolute timeouts.
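The session-management items above can be sketched as a minimal server-side session store: high-entropy random IDs from `secrets`, a fresh ID issued at login (which also prevents session fixation), and invalidation on logout and idle timeout. The timeout value is illustrative.

```python
import secrets
import time

IDLE_TIMEOUT = 15 * 60  # seconds; illustrative value

class SessionStore:
    """Minimal server-side session manager sketch."""

    def __init__(self):
        self._sessions = {}

    def login(self, user_id):
        # A brand-new ID on every login; ~256 bits of entropy.
        session_id = secrets.token_urlsafe(32)
        self._sessions[session_id] = {"user": user_id, "last_seen": time.time()}
        return session_id

    def get_user(self, session_id):
        session = self._sessions.get(session_id)
        if session is None:
            return None
        if time.time() - session["last_seen"] > IDLE_TIMEOUT:
            del self._sessions[session_id]  # invalidate on idle timeout
            return None
        session["last_seen"] = time.time()
        return session["user"]

    def logout(self, session_id):
        self._sessions.pop(session_id, None)  # invalidate on logout
```

A real deployment would add an absolute timeout and persist sessions in a shared store, but the invariants (random IDs, server-side state, explicit invalidation) stay the same.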
Software and data integrity failures are vulnerabilities related to the lack of protection against integrity violations in code and in the infrastructure of the software used.
Software and data integrity failures occur when:
The application relies on plugins, libraries, or modules from untrusted sources, repositories, or CDNs.
An insecure CI/CD pipeline allows unauthorized access, malicious code, or system compromise.
Integrity checks are not performed on software updates.
Scenario #1: Many home routers, set-top boxes, device firmware, and other devices do not use signed firmware to verify updates. Unsigned firmware is becoming a more popular target for hackers, and it’s only going to get worse. This is a major concern because, in many cases, there is no way to fix the problem other than to fix it in a future version and wait for older versions to become obsolete.
Scenario #2: Nation-states have been known to attack update mechanisms, with the SolarWinds Orion attack being a recent example. The software's developer had secure build and update integrity processes in place. Despite this, those processes were subverted, and for several months the firm distributed a highly targeted malicious update to more than 18,000 organizations, of which around 100 were actually affected. This is one of the most significant and far-reaching breaches of this kind in history.
Scenario #3: A set of Spring Boot microservices is called by a React application. They tried to make their code immutable because they were functional programmers. Serializing the user state and passing it back and forth with each request is the solution they came up with. The “rO0” Java object signature (in base64) is discovered by an attacker, who then uses the Java Serial Killer tool to gain remote code execution on the application server.
Attackers could potentially upload their own updates to be distributed and run on all installations.
Use digital signatures or similar mechanisms to verify the software or data is from the expected source and has not been altered.
Ensure libraries and dependencies, such as npm or Maven packages, are consumed from trusted repositories.
Ensure that a software supply chain security tool, such as OWASP Dependency-Check or OWASP CycloneDX, is used to verify that components do not contain known vulnerabilities.
Ensure that there is a review process for code and configuration changes to minimize the chance that malicious code or configuration could be introduced into your software pipeline.
Ensure that your CI/CD pipeline has proper segregation, configuration, and access control to ensure the integrity of the code flowing through the build and deploy processes.
Ensure that unsigned or unencrypted serialized data is not sent to untrusted clients without some form of integrity check or digital signature to detect tampering or replay of the serialized data
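The last item above can be sketched with an HMAC over the serialized payload: the server signs the state before handing it to the client and verifies the tag before deserializing anything on the way back, so tampering (as in the "rO0" scenario) is detected up front. `SECRET_KEY` is a placeholder for a key held only server-side.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # placeholder; load from secure config

def sign(state: dict) -> tuple[bytes, str]:
    """Serialize state and compute an integrity tag over it."""
    payload = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_and_load(payload: bytes, tag: str) -> dict:
    """Check the tag before deserializing; reject tampered payloads."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity check failed: payload was tampered with")
    return json.loads(payload)
```

Using JSON rather than a native object-serialization format also sidesteps gadget-chain deserialization attacks entirely.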
This category helps detect, escalate, and respond to active breaches. Without logging and monitoring, breaches cannot be detected.
Insufficient logging, detection, monitoring, and active response occur when:
Auditable events (logins, failed logins, and high-value transactions) are not logged.
Warnings and errors generate no, inadequate, or unclear log messages.
Logs of applications and APIs are not monitored for suspicious activity.
Logs are stored only locally.
Appropriate alerting thresholds and response escalation processes are not in place or not effective.
Penetration testing and scans by DAST (Dynamic Application Security Testing) tools do not trigger alerts.
The application cannot detect, escalate, or alert on active attacks in real time.
Scenario #1: Due to a lack of monitoring and logging, the website operator for a children's health plan provider was unable to detect a breach. An attacker had accessed and modified thousands of sensitive health records of more than 3.5 million children, according to an external party who informed the health plan provider. A post-incident review found that the website developers had not addressed significant vulnerabilities. Because there was no logging or monitoring of the system, the data breach could have been ongoing since 2013, a period of more than seven years.
Scenario #2: A major Indian airline suffered a data breach involving millions of passengers’ personal information dating back over ten years, including passport and credit card information. The data breach occurred at a third-party cloud hosting provider, which notified the airline after some time had passed.
Scenario #3: A GDPR-reportable data breach occurred at a major European airline. Payment application security vulnerabilities were reportedly exploited by attackers, who harvested more than 400,000 customer payment records. As a result, the privacy regulator fined the airline 20 million pounds.
When an incident or attack occurs, the source and extent of the breach cannot be determined, since logs are not available.
Developers should implement controls appropriate to the risk of the application.
Log all login, access control, and server-side input validation failures with sufficient user context to identify suspicious or malicious accounts, and retain the logs long enough to allow delayed forensic analysis.
Ensure log data is encoded correctly to prevent injections or attacks on the logging or monitoring systems.
Ensure high-value transactions have an audit trail with integrity controls to prevent tampering or deletion, such as append-only database tables or similar.
DevSecOps teams should establish effective monitoring and alerting so that suspicious activities are detected and responded to quickly.
Establish an incident response and recovery plan, such as the National Institute of Standards and Technology (NIST) 800-61r2 or later.
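The log-encoding item above (preventing injection into the logging system) can be sketched as a small sanitizer applied to user-supplied values before they are written: newlines and other control characters are escaped so an attacker cannot forge extra log lines, and overly long values are truncated. The length limit is an illustrative choice.

```python
def sanitize_for_log(value: str, max_len: int = 200) -> str:
    """Neutralize control characters in a user-supplied value before logging.

    Non-printable characters (including newlines) are replaced by their
    escaped form, e.g. an actual newline becomes the two characters '\\n',
    so a payload like 'admin\\n<forged log line>' stays on one log line.
    """
    cleaned = "".join(
        ch if ch.isprintable() else repr(ch)[1:-1]  # '\n' -> backslash + n
        for ch in value
    )
    return cleaned[:max_len]
```

Structured logging (e.g. JSON per event) achieves the same goal more systematically, since the logging library does the escaping for every field.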
SSRF flaws occur whenever a web application fetches a remote resource without validating the user-supplied URL.
SSRF allows an attacker to send a crafted request to an unexpected destination, even when protected by a firewall, VPN, or another type of network access control list.
The incidence of SSRF is rising rapidly as modern web applications provide end-users with convenient URL-fetching features, and as cloud services and architectures grow more complex.
Attackers can utilize SSRF to compromise systems protected by web application firewalls, firewalls, or network ACLs, for example:
Scenario #1: Port scan internal servers – If the network architecture is unsegmented, attackers can map out internal networks and determine if ports are open or closed on internal servers from connection results or elapsed time to connect or reject SSRF payload connections.
Scenario #2: Sensitive data exposure – Attackers can access local files or internal services to gain sensitive information.
Scenario #3: Access metadata storage of cloud services – Most cloud providers expose a metadata endpoint such as http://169.254.169.254/. An attacker can read the metadata to gain sensitive information.
Scenario #4: Compromise internal services – The attacker can abuse internal services to conduct further attacks such as Remote Code Execution (RCE) or Denial of Service (DoS).
The attacker can compromise the server's internal services to perform further attacks such as RCE or DoS, or to collect sensitive data.
Developers can implement some or all of the following defense-in-depth controls to prevent SSRF.
Do not rely on a deny list or regular expression to prevent SSRF; attackers have payload lists, tools, and skills to get around deny lists.
Use “deny-by-default” firewall settings or network access control rules to block all but essential intranet traffic.
To reduce the impact, isolate the remote-resource access functionality into its own network segment. Sanitize and validate all client-supplied input data.
With a positive allow list, enforce the URL schema, port, and destination.
Do not send raw responses to clients
Disable HTTP redirections
Do not install other security-relevant services on front-end systems, and control local traffic on these systems.
For front-ends serving dedicated and manageable user groups with very high protection needs, use network encryption (e.g., VPNs) on independent systems.
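The positive allow-list item above can be sketched with `urllib.parse`: scheme, destination host, and port must all match the allow list, and everything else is denied by default. The hosts below are illustrative placeholders for whatever remote resources your application legitimately fetches.

```python
from urllib.parse import urlsplit

# Positive allow list (illustrative values): only these schemes, hosts, and
# ports may be fetched; everything else is denied by default.
ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}
ALLOWED_PORTS = {443}

def is_allowed_url(url: str) -> bool:
    """Validate a user-supplied URL against the allow list before fetching."""
    parts = urlsplit(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        return False                      # blocks file://, gopher://, http://
    if parts.hostname not in ALLOWED_HOSTS:
        return False                      # blocks internal hosts and metadata IPs
    port = parts.port if parts.port is not None else 443
    return port in ALLOWED_PORTS          # blocks port-scanning via odd ports
```

Note that this check alone does not cover redirects or DNS rebinding; it should be combined with disabled redirections and the network-level controls listed above.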