
UNIT III

Secure coding practices and OWASP Top 10


Declarative Security

When developing an application, security can be represented in two ways:


1. Declarative security
2. Imperative Security
With declarative security, the security is defined outside the application code, as part of
what we refer to as the container. With imperative security, the security is part of the code: the
developers write the security statements within the application's code.
Declarative security defines the security aspects of the application with respect to the
container. When implementing declarative security, we create a configuration file typically in
XML format.
The config file defines the security rules for the application, including all the
authentication, and authorization settings for the application itself.
As an example of declarative security, let's consider an ASP.NET web application. With
ASP.NET web apps, we have a web configuration file that contains all the authentication and
authorization settings. Within this file, we can change the authentication method. We can also
specify rules relating to who is authorized to access the web application, and who is not.
Since the security rules are centrally located like this, if we want to modify those rules,
all the settings are right here in the configuration file, quick and easy. A significant benefit with
declarative security is in its flexibility. As we just learned, security rules are centrally located in
the config file so it's easy to modify that file without ever touching the code. The security rules
are defined as part of the deployment, not the code itself. Which means that the security rules can
be changed for each deployment as required.
Let's consider a deployment scenario as an example. Say we're developing an application,
and we're deploying it on server one. We have an existing config file; we can open it and
modify the rules that apply to server one. Later on, say we want
to deploy the application over to server two. We can apply a totally different set of rules within
the configuration file and the new deployment to server two.
If we write the security rules as code, we lose that flexibility. The rules are compiled into
the app, so it takes significantly more effort to change the security rules, recompile, and then
redeploy. Because declarative security rules are configured as part of the deployment, security
can be managed by operations personnel, not the development team.

Programmatic Security

Programmatic or imperative security sees the security rules implemented within the
application code. One convenient way to do this is to place all the rules in a component which
can then be called by other applications as well. Either way, the security rules are embedded in
the code itself rather than in the container. So let's compare these two
approaches.
With imperative security, the rules are defined as part of the code, whereas with
declarative security, the rules are defined inside a configuration file. So with imperative
security, we have a lot less flexibility: the rules are the rules, regardless of where the application
is deployed. With declarative security, we can alter those rules with each deployment of the
application, so there is more flexibility.
With an imperative approach, we can enable the enforcement of complex business rules
within the code itself, which are not possible under the declarative security approach. These
complex business rules are going to be written inside a component that's going to be reused with
different applications. Enforcing security rules as part of the code means that every
implementation of that code will have the same security rules enforced.
So imperative security is a little less flexible, and the code is a little less portable, since
there are specific business rules built into that code that don't necessarily apply everywhere.
Choosing imperative or declarative security is a design consideration that we make when
planning the security model for the application at design time.
Once we've decided on how we will implement the security rules, then we design the
system based on the chosen security approach. Then, based on that design, we can build the
required protections as part of the secure development lifecycle.
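The contrast between the two approaches can be sketched in a few lines of Python. This is a hedged illustration only, not ASP.NET's actual mechanism; the config structure and the role names are hypothetical:

```python
import json

# Declarative-style: the authorization rule lives in a config file, so it can
# be changed per deployment by operations staff without recompiling the app.
# (Normally this would be loaded from disk; inlined here for the example.)
CONFIG = json.loads('{"allowed_roles": ["admin", "editor"]}')

def can_access_declarative(user_role: str) -> bool:
    return user_role in CONFIG["allowed_roles"]

# Imperative-style: the rule is written directly into the code, so changing
# it means editing, recompiling, and redeploying the application.
def can_access_imperative(user_role: str) -> bool:
    return user_role in ("admin", "editor")

print(can_access_declarative("admin"))   # True
print(can_access_imperative("guest"))    # False
```

Note how the declarative version enforces whatever the configuration says at deployment time, while the imperative version enforces the same rule everywhere the code runs.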

Concurrency

In this section, we will recognize how to use defensive coding practices to address
concurrency issues leading to race conditions. Firstly, we need to know what defensive coding is.
Defensive coding practice ensures that the software system being developed incorporates a
series of controls so that it will continue to operate correctly despite changes in its environment.
Concurrency involves two threads or control flows executing at the same time.
Concurrency is actually one of the primary properties in race condition and time-of-check time-
of-use attacks. Recall that a race condition is when two threads work against each other by
attempting to modify the same shared object concurrently. Defensive coding practice involves
adding controls to address the potential of race conditions occurring.
There are multiple ways to prevent race conditions. First, we will see how to avoid race
windows. A race window is the time during which two threads are racing against each other in an
attempt to alter the same shared object. So it's important that we first review the code to identify areas
where race conditions can occur and then address them within the code to avoid the race
condition.
Another way to avoid race conditions is by using atomic operations. Atomic operations
involve ensuring that an operation is 100% complete. That is, it's an all or nothing proposition.
So the entire process is completed within a single flow of control, while at the same time
disallowing concurrent threads or control flows against the same shared object.
For example, consider an e-commerce transaction where you're paying with your credit
card. An atomic operation would involve debiting the credit card for the amount that you're paying
the vendor, while ensuring that the amount is not merely debited without being credited to the
vendor's account. The entire transaction has to complete; otherwise the transaction is rolled back
entirely. So, all or nothing.
Another method that can be used to prevent race conditions is mutual exclusion or mutex.
Mutex involves making conflicting processes or race windows mutually exclusive. This can be
realized by having the code place a lock on a resource so that no other code can make changes to
that resource. This is a very common technique used with database resources. For example, a
lock is placed on a record as the data is being committed to the database. So no other code can
access that same record thereby preventing conflicts.
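The mutex technique described above can be sketched with Python's `threading` module. The variable names here are illustrative:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount: int, times: int) -> None:
    global balance
    for _ in range(times):
        # Mutual exclusion: only one thread at a time may enter this
        # critical section, closing the race window on `balance`.
        with lock:
            balance += amount

threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 200000 -- without the lock, lost updates could make this smaller
```

The `with lock:` block acquires the mutex before the read-modify-write on the shared value and releases it afterwards, so the two threads cannot interleave inside the critical section.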

Configuration

An important consideration when developing an application is the environment that the
application will be installed on. When the application is installed on an operating system, that
operating system may be misconfigured, allowing a hacker to access that operating system and
potentially access all the applications on it.

Consider as an example a web application. The web application is installed on a web server, and
those web applications are going to access data from a database server. It's very important to
acknowledge that those systems could be configured in an insecure state, which could also
allow the hacker to access our web application and its data. How can we help to prevent these
types of attacks and address the vulnerabilities? We should evaluate the security configuration of
the system. We should establish a process to harden the operating system and the applications
installed on it.
As part of this process, there are a number of tasks that we need to perform. We will review the
software installed on the operating system by default. We may find other software installed by
default that's not necessary, which increases the attack surface. So we want to remove any
unnecessary software. For example, since we're considering a web server, all we want is the web
server software installed. Say we discovered that the operating system included both a web server
and a mail server, but we don't need the mail server; we would remove it. Similarly, we would want
to review the services that are installed and are running by default. If the services aren't needed,
then we disable them. In addition, we want to make sure that all software and services on our
system are up-to-date.
Another step we can take to protect the system is to implement a firewall. We can use the
firewall to control network traffic on the system. For example, we could create firewall rules that
would only allow web traffic into our web server, reducing the chances of an attack and
hardening the system. Let's consider the default configuration of the application and the
operating system. Hackers understand defaults and typically use them to compromise systems
and applications. For example, all Windows servers have an administrator account, so it's a good
idea to rename the administrator account. In addition, any default passwords on the system
should be altered as well. We'll also want to consider the authentication method being used.
Another default consideration includes folder locations. As part of the overall hardening
process, we should always review all default settings for applications and the operating system.
And see if there are opportunities to change them. The security configuration also includes using
configuration files for our applications that define application settings like connection strings as
well as encryption keys. We need to ensure that those values are encrypted in the configuration
files, never stored in plain text. As part of a secure development life cycle, we should, as an
organization, devise and put into practice a standard process including all of these and any other
steps that we recognize in order to harden the system.
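As a minimal, standard-library-only sketch of keeping sensitive settings like connection strings out of plain-text configuration files, one common related practice is reading them from the environment instead. The `DB_CONN` name and value here are hypothetical:

```python
import os

# Demo only: in a real deployment this variable would be set by the
# operations team or a secrets manager, never hardcoded like this.
os.environ.setdefault("DB_CONN", "Server=db01;User=app;Password=example")

def get_connection_string() -> str:
    # Read the sensitive value from the environment rather than from a
    # plain-text config file; fail closed if it is missing.
    conn = os.environ.get("DB_CONN")
    if conn is None:
        raise RuntimeError("DB_CONN is not set; refusing to fall back to a default")
    return conn

print(get_connection_string().startswith("Server="))  # True
```

Failing with an error when the secret is absent, instead of silently using a built-in default, is itself a hardening measure: defaults are exactly what attackers probe for.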

Cryptography
Cryptography involves the use of cryptographic functions within an application.
Cryptographic functions can perform a number of different actions. There are hashing functions,
which hash the data to generate a hash value that can be used to verify the integrity of the
data. There are encryption functions to encrypt the data, as well as decryption functions that can
decrypt the data later when required.
Cryptographic vulnerabilities (states of being exposed to attackers) are a significant
security issue, as they expose the organization's sensitive data to hackers or other unauthorized
parties. That's why it's so vital to consider carefully how the cryptographic
functions are being implemented. So we need to scrutinize the implementation of both the
hashing functions and the encryption functions.
Typical issues that lead to cryptographic vulnerabilities include simply not encrypting
sensitive data. All sensitive data within an application should be encrypted, including any
sensitive user data, passwords, as well as the applications configuration data, like connection
strings and encryption keys.
Because encryption keys are so crucial and central to the cryptographic function, we need
to make sure that we're securing the encryption keys. And if we're storing the encryption key
inside a config file, then we need to make sure the config file itself is encrypted. Another
common issue is using dated cryptographic APIs.
Upgrade often, staying alert to any updates to cryptographic APIs. A developer may have
become comfortable with an older, dated cryptographic API and may be tempted to use that same
older API with a newer application. So that's another important issue to be aware of. It's just
good practice to use the newer APIs.
Let’s consider some mitigation techniques that can help secure cryptography within our
application environment. We need to protect all our sensitive data at rest by encrypting that
information. We also need to pay attention to trust boundaries: do not allow sensitive data to
cross trust boundaries. For example, in larger organizations, you'll have different types of
sensitive information and different network segmentations. So in this type of setting, we
definitely want to make sure that no unprotected sensitive data travels from a very secure
network segment into one that is not very secure.
Another mitigation technique is to make sure that you're using standard encryption and
hashing algorithms. It is not best practice to create customized logic to perform an encryption or
hashing function. We need to use standard algorithms like, for example, AES as a standard
encryption algorithm. In a secure development life cycle, we want to make sure that our software
has cryptographic flexibility. So we know that over time, weaknesses in algorithms are
discovered, and so we need to replace those algorithms.
Therefore, cryptographic flexibility refers to making sure that our code is designed to
allow for changing those algorithms. So we need to make sure that algorithms are not hardcoded
into the application. Rather, we should design our software so that it can be reconfigured
quickly and easily. This involves making library calls to invoke our cryptographic functions, so
that we can manage them via a configuration file.
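A minimal Python sketch of this kind of cryptographic flexibility, assuming the algorithm name is read from configuration (the config structure here is hypothetical):

```python
import hashlib

# The algorithm name comes from configuration, not from a hardcoded call,
# so it can be swapped (e.g. "sha256" -> "sha3_256") without touching the
# application logic if a weakness is discovered in the current algorithm.
CONFIG = {"hash_algorithm": "sha256"}

def hash_data(data: bytes) -> str:
    h = hashlib.new(CONFIG["hash_algorithm"])  # standard algorithm, by name
    h.update(data)
    return h.hexdigest()

print(hash_data(b"sensitive data"))
```

Note that the sketch uses a standard library algorithm rather than any customized hashing logic, in line with the best practice above.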

Input and Output Sanitization

Sanitization is the process of converting information from a format that may be harmful
to a format that is not harmful. There are two different ways to implement sanitization.
 First is input sanitization. With input sanitization, we're going to sanitize any information
as the data is input before actually attempting to process that information.
 Second is output sanitization, where we sanitize the information after it's been processed, but
prior to it being presented to users.
Typically, we perform output sanitization by encoding the information. For example,
consider a web application. We would convert a greater than sign to its HTML equivalent, the
encoded entity &gt;. The user's browser receives the data, interprets the
&gt; entity, and renders it as a greater than symbol. One thing to keep in mind is that as
a developer when implementing sanitization, we always want to maintain the integrity of the data
that's being input. You don't want to change the value of the data, and you definitely don't want
to change the meaning of the data.
With Input sanitization, one technique involves stripping information out. So as the data
is input into the application, we parse the information, and we remove any unwanted or harmful
characters from the user input. So let's consider some examples of these unwanted characters.
Characters typically used in injection attacks include the apostrophe or single quote, and double
dashes. So we would strip those out.
Consider cross-site scripting attacks. With cross-site scripting attacks, malicious content
is sent to a browser, often taking the form of a segment of JavaScript. So we would parse the
input and remove any script tags, as well as any other potentially harmful characters like the
forward slash.
Another sanitization technique is substitution. Instead of removing unwanted characters,
we could replace them. For example, the apostrophe or single quote could be
replaced with a double quote to prevent potential SQL injection attacks. Literalization, where
input is treated as literal data rather than executable content, is another sanitization technique.
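The stripping and output-encoding techniques above can be sketched in Python with the standard library. This is a hedged illustration; the exact characters stripped would depend on the application:

```python
import html
import re

def sanitize_output(value: str) -> str:
    # Output sanitization: encode HTML metacharacters, e.g. ">" becomes
    # "&gt;", so the browser renders them instead of interpreting them.
    return html.escape(value)

def sanitize_input(value: str) -> str:
    # Input sanitization: remove <script> blocks, then strip characters
    # commonly used in injection attacks (single quotes, double dashes).
    value = re.sub(r"(?is)<script.*?>.*?</script\s*>", "", value)
    return value.replace("'", "").replace("--", "")

print(sanitize_output("5 > 3"))                       # 5 &gt; 3
print(sanitize_input("<script>alert(1)</script>hi"))  # hi
```

Note that `html.escape` preserves the meaning of the data for the reader while neutralizing it for the browser, which matches the integrity requirement discussed above.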

Error Handling
There are two very important protective actions that developers can take in the
development of secure applications: input validation and output error handling.
When we concentrate on output error handling, we're controlling the error messages displayed
by our application so that those error messages don't accidentally display sensitive information to
the user when an error does occur. We know that errors do happen, and when we're handling
errors, we need to ensure that no sensitive information is included in the error message: for
example, database names, server names, or even usernames that are being used to make a
database connection.
Input validation also plays an important part in error handling, because we know that
attackers will purposely inject malicious input to try to cause errors. One way to prevent error
messages from exposing sensitive information is to prevent errors from happening in the first
place. And this may be accomplished by performing input validation to avoid invalid data.
We're going to check for invalid, or harmful characters, or invalid information before
attempting to process the data. That alone will help reduce the number of errors and therefore
prevent some error messages from occurring at all. Then we can look at those errors that do
legitimately occur. Let's consider some error handling best practices. First, we want to use non-
verbose error messages. One good example: instead of displaying a message indicating
that the username was invalid, or that the password was invalid, in a login form error, we
would specify something a little more general, a message like "credentials are invalid" or
something along those lines.
The information should not be detailed. If we were to indicate that it was the username
that was invalid or the password that was invalid, we're telling the hacker something valuable.
If we tell them that the password is invalid, maybe they got the username right, and now they
have some valuable information. That's not good practice. So it's crucial to provide only generic,
non-detailed error messages.
Another step, looking specifically at error handling: when an error occurs, some sort
of action must be taken. We want to make sure that the action is always taken such that the
application fails in a secure state. Let's say, for example, someone is attempting to log in to our
web application and they're typing the wrong username and password. Depending on the
situation, we may consider allowing only a limited number of attempts, for example
three; after the third attempt, we would take action. A secure approach to failure would
be to make sure that after the third attempt the action we take is to lock the account.
An important term to understand in this context is clipping level. This is the number of
errors before action is taken. Clipping level is treated as a threshold after which we take secure
action. So, in our example, the example of the failed login, the clipping level is three.
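The clipping-level idea can be sketched as follows. This is an illustrative in-memory sketch, not production account-lockout code:

```python
# Failing securely with a clipping level: after a threshold of failed logins
# (three here, as in the text), lock the account. Error messages stay
# generic so they leak nothing about which credential was wrong.
CLIPPING_LEVEL = 3
failed_attempts: dict[str, int] = {}
locked: set[str] = set()

def attempt_login(user: str, credentials_ok: bool) -> str:
    if user in locked:
        return "Account locked"
    if credentials_ok:
        failed_attempts.pop(user, None)  # reset the counter on success
        return "Welcome"
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= CLIPPING_LEVEL:
        locked.add(user)                 # fail to a secure state
        return "Account locked"
    return "Credentials are invalid"     # non-verbose: no hint which part failed

for _ in range(3):
    print(attempt_login("alice", False))
```

The third failed attempt crosses the clipping level, so the loop prints "Credentials are invalid" twice and then "Account locked".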

Input Validation
As a general rule, when developing an application, all data input should be treated as
malicious. Let's say that you're developing a public-facing web application. We would be
wise to assume that every time someone fills in some form data, the information is coming from
a malicious hacker and the input is malicious. Accordingly, it's important that we validate all
information passed into the application before processing the information. So before we send that
data to the database server, we'll perform a rigorous validation on it. Let's consider some
examples of techniques that we can use to validate the information.
We can verify the data type. Say we are expecting to have some date data type input. We
can first verify that it's a date data type and that it's a valid date. Maybe we're expecting a
numeric value and we want that to fall within a range of values. So for example, say we're asking
for the year of a car model for an automotive parts website. Let's ensure that it's a numeric value
and that it falls within the range of say, 1970 to the current year. It's also important to check for
illegal or harmful characters. For example, we could check the input for the presence of an
apostrophe, the single quote or double dashes. These are the types of characters that are typed
into a field when someone is attempting to launch a SQL injection attack.
We should always verify the data length, both the maximum length of the data and the
minimum length of the data. Regarding the maximum, let's say that we're asking for a user's
address, so probably 60 characters max should suffice for a street address and a number. We'll
therefore specify 60 characters as the maximum length, which may help prevent a malicious
actor from using the input form to launch some type of injection attack like a SQL injection
attack.
Some common tools and techniques we can use for validating data include regular
expressions. We can use regular expressions to verify both the input format and the input values
by checking for patterns of characters. Let's consider for example, an application where the user
is supposed to type in a product ID. Typically, product IDs exhibit some type of pattern. We can
use a regex to verify that pattern: for example, that the first six characters are letters and that the
next three are numbers. We can also do the same thing for email addresses, something very
common in web applications. There are regular expressions to ensure that the email address is
specified in the proper format. For example, we may see a group of characters, followed by an @
symbol, another group of characters, a dot or period, and then another group of characters. So
that's the pattern for this type of email address. Something else we can do is verify the input
against a whitelist or a blacklist.
The whitelist is a list of characters that are allowed to be specified, while the blacklist
specifies a list of characters that are disallowed. So we verify against these lists each time we
elicit input from the users. We can validate input at the client and at the server, or at both ends if
necessary depending on the situation. For example, quite often, web applications will implement
validation at the client, and the benefit of validating at the client is that the user sees the results
immediately. As they navigate from one field to the next in an input form, they're going to
see error messages updated in real time. So they can fix the problem and, having done so, the
web app removes the error message accordingly.
The point is that the visual feedback in real time is one of the excellent things about client
side validation, but that's not all. It also serves to reduce network traffic. Rather than
submitting that input data across the network to the server, we're validating on the client side,
saving perhaps many round trips with invalid data as the payload. This can be significant for
busy sites with tens of thousands of users. We can also validate at the server side. This is actually
recommended in scenarios where we have an application passing data to the server, like a web or
mobile application. We should always check data passed to the server. Many implementations
will actually perform validation at the client and at the server.
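Two of these validation techniques, regex pattern matching and range checking, can be sketched in Python. The product ID format (six letters then three numbers) and the model year range (1970 to the current year) follow the examples above:

```python
import re
from datetime import date

# Pattern: exactly six letters followed by exactly three digits.
PRODUCT_ID_RE = re.compile(r"^[A-Za-z]{6}[0-9]{3}$")

def valid_product_id(value: str) -> bool:
    return PRODUCT_ID_RE.fullmatch(value) is not None

def valid_model_year(value: str) -> bool:
    # Verify the data type first (all digits), then the range of values.
    if not value.isdigit():
        return False
    return 1970 <= int(value) <= date.today().year

print(valid_product_id("WIDGET123"))  # True: six letters, three digits
print(valid_model_year("1969"))       # False: below the allowed range
```

A whitelist or blacklist check would be implemented similarly, comparing each input character against a set of allowed or disallowed characters.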

Logging and auditing


A key security concept is having the ability to keep logs and audit an application. We
need to be able to log application events, actions, and users. The objective is to ensure that when
something happens, some event within the application, we are able to find out who
performed that action. Essentially, we monitor for changes and log what happened, when it
happened, and who did it. There are different forms of logging and auditing.
First, let's consider general error logging. Many times when an application error occurs,
we examine the information in some form of log file. Typically, the log file will collect enough
information so that we can diagnose the error and troubleshoot the application, allowing us to
get to the root cause of that error. So, it's important to log errors so that we have enough
information to get to the heart of the problem and address it.
The next type of logging is user action logging. This is where we log what a user does
within the application. So for example, as a user performs different actions, like choosing
different menu items, performing create, read, update or delete actions. We need to make sure
that we're logging those kinds of activities. Because at some point, we will encounter a situation
where someone is going to need to know who altered a specific record.
Not only do we need to log and audit user actions, it's very important to log
administrative actions. So let's consider a web application that's used to view product details and
related information. We may want to track user actions, that is, who is accessing the site and
viewing the different products. But then there is the question of administrative actions.
Administrators can access administrative functions. For example, they can create users that are
allowed to access the site, upload product information, create a new product line, or
change configuration settings. These are the types of administrative functions that we need to
make sure we're logging as well. For example, management might want to know who added
certain data, who altered certain data, or who deleted that data. So it's absolutely critical that
we're logging any administrative types of actions within the application environment. Typically
writing those to a file or to the database.
One benefit of writing this information to a database is that it's relatively easy to generate
reports when the data is written to a database. Let's consider the database platforms. Any
enterprise-class database platform will have logging and auditing capabilities. For example,
Oracle and SQL Server both have robust and powerful audit features, allowing us to audit
changes to database tables, changes to the database environment, and so on. So, for example, if
someone creates a new user account, or maybe a new record in the database, that data is logged
in the database as part of the database logging feature.
Lastly, audit and logging control. It's important that our application is designed so that
the administrator, or a user with administrative privileges, can control the audit and logging
level, enabling different levels of logging and auditing. It's a good idea to provide the
organization using the application with an option to turn features on or off, and to specify, at
least to a certain degree, the amount of logging that's being performed.
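A minimal sketch of user-action and administrative-action logging with Python's `logging` module. The logger name and field names are illustrative, and a real application would route this to a file or database rather than the console:

```python
import logging

# The logging level here is what an administrator could adjust (e.g. via
# configuration) to control how much is recorded.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
audit = logging.getLogger("audit")

def log_user_action(user: str, action: str, record_id: int) -> str:
    # Record who did what, to which record: enough to answer later
    # questions like "who altered this record?"
    msg = f"user={user} action={action} record={record_id}"
    audit.info(msg)
    return msg

def log_admin_action(admin: str, action: str) -> str:
    msg = f"admin={admin} action={action}"
    audit.warning(msg)  # administrative actions logged at a higher level
    return msg

log_user_action("jsmith", "update", 42)
log_admin_action("admin1", "create_user")
```

Writing the same records to a database table instead of the console would make it easy to generate the kinds of reports management asks for.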

Session Management

To start the login process, users authenticate to the application. Access is then granted,
and at this point, having been successfully authenticated, a session is established and the user is
provided with a session ID. It's important to understand that hackers can obtain the session
information and use it to impersonate the user. There are different types of session
attacks.
First, a session hijack attack occurs when a hacker takes over the conversation,
leveraging the session information to impersonate the user. Another common session attack is
the man-in-the-middle attack.
In this scenario, the hacker places themselves between the two parties that are
communicating. As an example, consider someone surfing the web at an Internet cafe. The
hacker could place themselves between the user surfing the net and the actual Internet website
itself. If successful, the hacker could gain access to confidential information once they
obtain the session information and are able to impersonate the user.
The hacker gets the user's session ID and begins sending requests to the web application,
impersonating the user and gaining access to all the information that the user would typically have
access to. It's vitally important that we code in some mechanism within our application to make
it possible for the application to distinguish between an impersonated session and the actual
session. So how can we do that? Fortunately, there are a number of different techniques available
to help us. Let's consider the security token. A security token can be used together with the
session information to ensure that the request is actually coming from the valid user and not from
the hacker.
So once again, let's consider a web application, an ASP.NET MVC application. In the
HTML for the app, we could write a statement that generates an anti-forgery token. On the client
side, for every payload or form data that's sent up to the server, a token is generated. This token
represents part of the session ID. The pivotal aspect here is that it is generated on the client. So,
when the form data is sent up to the web server, the token is sent along with the form data. The
server verifies the token against the user's session ID. If there's a match, then the server accepts
the form data. So, in this context, it's important to understand that you also have to force the
server to verify the token.
If we force the server to check for the existence of the token along with the session ID,
then even if a hacker successfully retrieves the session information, when they attempt to submit
the information they would have only the session ID. The hacker wouldn't also
have the correct token, so the server would not accept the form data.
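One way to bind a token to the session ID can be sketched in Python with a keyed hash. This is a hedged sketch, not ASP.NET's actual anti-forgery implementation; the point is only that a stolen session ID alone is not enough:

```python
import hashlib
import hmac
import secrets

# Server-side secret the hacker never sees; regenerated per process here.
SERVER_SECRET = secrets.token_bytes(32)

def make_token(session_id: str) -> str:
    # Derive the token from the session ID with a keyed hash (HMAC), so a
    # valid token cannot be produced from the session ID alone.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_request(session_id: str, token: str) -> bool:
    expected = make_token(session_id)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, token)

sid = "session-123"
token = make_token(sid)
print(verify_request(sid, token))    # True: session ID and token match
print(verify_request(sid, "forged")) # False: the session ID alone is not enough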

Exception Management
Exceptions are errors that occur due to unexpected actions within the application; these
are different than typical syntax or logic errors. For example, consider a web application where,
in your code, you're fetching data from a database. If not handled correctly, you may end up
disclosing connection information in the exception message. When an unhandled exception
occurs, typically an exception message is displayed, and it could potentially disclose
information about the application that you really don't want a hacker to know.
The objective should be to catch those exceptions, which means that you need to make
sure that you're testing the application for all possible outcomes. Imagine our web app is
prompting a user for information. We need to ensure, for example, that under testing we use
different erroneous types of data. We want to cause exceptions to occur so that we can
understand those exceptions. Then, we'll be in position to be able to handle them.
A typical example of exception handling is when someone enters inappropriate erroneous
data into an application, or the wrong data type. This typically results in an invalid cast
exception. When testing the app, we see this exception or error occur. And we make note of it,
noting for example, that it is an invalid cast exception.
Once we've tested the app thoroughly, and we've compiled a list of all the different
possible exceptions, then we can implement appropriate exception handling. We'll create some
logic to catch the exceptions and display user-friendly messages that avoid disclosing sensitive
information. When an exception occurs, as we learned, they may disclose sensitive data or
sensitive information about the code itself. If the exception occurs in relation to a call to a
database, sensitive information like a username that the application is using to make the
connection to the database may be displayed. The exceptions may disclose information about file
locations, or for which file the exception occurred or the exception may even display a stored
procedure name from the database. This is all tremendously valuable information to a hacker
trying to compromise an application. The goal is to generate exceptions as we're testing the app
and catch those exceptions. When catching the exceptions, instead of allowing system defined
error messages that disclose sensitive information, we'll create custom error messages with data
that's not valuable to hackers. An important exception management technique involves using
try/catch blocks. If the programming environment supports try/catch blocks, we'll use them to
catch the errors and display some type of user-friendly error message.
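As a minimal Python sketch of this pattern (the function name `parse_quantity` and its message are hypothetical), the raw exception is caught and only a generic, user-friendly message is surfaced to the user:

```python
def parse_quantity(raw):
    """Convert user input to an integer without leaking internal details."""
    try:
        return int(raw)
    except ValueError:
        # In production we'd log the real exception for developers;
        # the user sees only a generic, non-sensitive message.
        raise ValueError("Please enter a whole number.") from None

# Simulating a user typing the wrong data type:
try:
    qty = parse_quantity("twelve")
except ValueError as err:
    print(err)  # Please enter a whole number.
```

The `from None` suppresses the original traceback chain, so the underlying invalid-cast detail never reaches the user.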

Safe APIs
An application programming interface, or API, is a library of code that developers can
call upon in order to access some specific type of functionality. For example, there is a seemingly endless number of APIs available providing different types of functionality, like Microsoft's Crypto API and Python's pycrypto, which both provide cryptographic functions. And then there are social media platforms like Facebook, Google and Twitter providing their own APIs that programmers can tap into when incorporating aspects of those services, and there are countless
others.
There are also numerous cryptographic libraries that we can call upon to leverage hashing
functions and cryptographic functions, like encryption functions and decryption functions. With
all the APIs available, it is critically important to ensure that the APIs that we are calling are
considered secure. For example, older or dated APIs may not have followed secure coding
practices. So it's important that an organization assess the APIs being called from within their
applications and make sure that those APIs are considered secure or safe. When identifying the
threats to our application architecture, we need to make sure that we are examining any APIs
being called. Some security considerations with respect to APIs include banned APIs. We need
to ensure that our code is not calling banned APIs.
The same goes for deprecated APIs. Banned and deprecated APIs are those that have
already been identified as either being old APIs that should no longer be used or unsecure APIs.
So they have been banned. We need to review our application's code, compile a list of any APIs that the organization may have used in the past, and replace banned or deprecated APIs with similar, newer, secure APIs. If the organization is creating its own API as an interface into our
application, then there are certain secure practices we need to follow.
First, we need to ensure that any requests sent to the APIs are authenticated. If the
functionality isn't something for public consumption, there has to be some sort of authentication
method for each of the calls to the API itself. We also need to make sure that we audit access to
the API, and any of the calls being made throughout that API, since auditing is a critical
component of secure coding practice. In addition, if we need to maintain confidentiality, we need
to make sure that our API is encrypting any sensitive data, especially if the API is exposed across
the web. We need to make sure that we encrypt passwords and all authentication traffic, as well
as any other related sensitive information, for example, credit card data.
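A minimal sketch of two of these practices, authenticating each call and auditing it, using HMAC request signing with a shared secret. The names `API_SECRET`, `sign_request`, and `handle_request` are hypothetical, and a real API would also run over TLS to keep the traffic encrypted:

```python
import hmac
import hashlib

# Hypothetical shared API key, issued per client out of band.
API_SECRET = b"example-shared-secret"
audit_log = []  # every API call is recorded for auditing

def sign_request(body: bytes) -> str:
    """Client side: compute an HMAC-SHA256 signature over the request body."""
    return hmac.new(API_SECRET, body, hashlib.sha256).hexdigest()

def handle_request(client_id: str, body: bytes, signature: str) -> bool:
    """Server side: authenticate the call, then audit it."""
    ok = hmac.compare_digest(sign_request(body), signature)
    audit_log.append((client_id, ok))  # audit successes and failures alike
    return ok

good = handle_request("client-1", b'{"action":"list"}',
                      sign_request(b'{"action":"list"}'))
bad = handle_request("client-2", b'{"action":"list"}', "forged-signature")
print(good, bad)  # True False
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids timing side-channels when comparing signatures.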

Type Safety
Type Safety is a feature of many programming languages that can help prevent data type
errors. A data type error occurs when a developer treats a data element as a different data type
than what it actually is. For example, a developer may try to store a float value in an integer variable. This produces an error, since a float value can have a fractional part, and a larger range, than an integer type can represent. Attempting to store a float value in an integer data type would therefore result in a data type error. There are different implementation methods for type safety.
There are static methods and dynamic methods. Static type safety involves assigning the
data type at design time. So when we create a variable, we assign the data type at that time. The
compiler catches any type errors that exist at compile time. So for example, if we declared a variable as an integer at design time, and then attempted to store a float value in that variable, the compiler would catch it as an error, and we would have to address the problem before the
application can be compiled successfully. With dynamic type safety, we're assigning the data
type at runtime. In this case, the compiler won't be able to catch any type errors at compile time.
For this reason, when we use dynamic types, it's critical that the application is tested thoroughly,
verifying that there are no type errors at runtime.
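To illustrate why that testing matters, here is a small sketch in Python, a dynamically typed language. The variable names are hypothetical; the point is that the type mismatch only surfaces when the offending line actually executes:

```python
# In a dynamically typed language such as Python, the type is bound at
# runtime, so this error only surfaces when the line actually executes.
value = "100"               # looks numeric, but it is a string
try:
    total = value + 5       # TypeError: cannot add str and int
except TypeError:
    total = int(value) + 5  # explicit conversion fixes the type mismatch
print(total)  # 105
```

A statically typed language would reject the equivalent code at compile time; here, only a test that executes this path would reveal the bug.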

Memory Management
Memory management is a programming concept involving the management of resources
residing in-memory. When managing memory resources, we're responsible for ensuring that
resources do not stay in memory if they're no longer used or no longer needed. Memory
management is a pretty complex undertaking due to the dynamic nature of memory. Items are
constantly being loaded in and removed from memory. So memory management and allocation
is a shared responsibility between the operating system and the applications running on top of the
operating system. In the context of memory management, we classify code as one of two types.

We classify code as managed code or unmanaged code.


In a managed code environment, memory management happens automatically. There are
several different processes built in that help clean memory and keep it from being overused.
Processes run in the background like garbage collection. Whenever it's needed, it runs and frees
up memory. Essentially, any objects that are no longer being referenced in code can be cleaned
out. And garbage collection removes memory blocks originally allocated to those objects. So
these types of processes, garbage collection, are all performed automatically and transparently in
managed code environments.

With unmanaged code, it's the responsibility of the programmer to manage and clean up
memory. For example, garbage collection operations, thread pooling and similar processes, they
are all manual processes. When selecting a programming language environment, that is one
consideration to take into account. Because we'll want to know whether it's a managed code or
unmanaged code environment. So let's consider an example. Managed code handles memory
management transparently. It's basically a function of the runtime. In the .NET environment for
example, the CLR or Common Language Runtime takes care of operations like garbage
collection.
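Python is another managed-code environment, and we can watch its garbage collection at work. This sketch (the `Session` class is just a hypothetical stand-in for an in-memory resource) uses a weak reference to observe an object without keeping it alive:

```python
import gc
import weakref

class Session:
    """Stand-in for some object that occupies memory."""
    pass

s = Session()
tracker = weakref.ref(s)  # observe the object without keeping it alive
del s                     # drop the last strong reference
gc.collect()              # ask the runtime's garbage collector to sweep
print(tracker() is None)  # True - the memory was reclaimed automatically
```

No explicit free or delete of memory was needed; once the object was no longer referenced, the managed runtime cleaned it up.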
Type safety is another memory management concept that's important to understand. Type
safety is directly related to memory safety. And memory safety means that a process or an
application, can only access memory that's been allocated to it. It cannot access memory
locations that it hasn't been authorized to access. This is another consideration that's important
when deciding on a programming language, whether or not we want type safety.

Benefits of type safe environments include the fact that they prevent data type errors. Type safe code remains within the expected memory range, so it can only access
memory that's been allocated to it. In addition, type safe environments isolate assemblies from
each other. For example a .NET assembly is a file that contains compiled code which can
execute within the common language run time. Assembly isolation results in more reliable code,
more reliable applications. Another key memory management concept is locality.
Locality is the principle that defines the predictability of memory references. For
example, when an application writes to a memory address, let's say it writes to memory blocks
A, B, and C. Well, the next memory block that it's going to reference will be memory block D,
and then E. So the memory addresses an application is going to use are easy to predict. And this is actually not a great thing, because it means that sophisticated hackers can also predict memory address locations.
These types of applications are prone to buffer overflow attacks. Some application
environments support address space layout randomization which helps defend against locality
attacks by ensuring that memory addresses that are going to be referenced are randomized. So
they're not so predictable.
Tokenizing
Tokenization involves taking sensitive data and replacing it with unique, non-sensitive symbols, so that we retain the information we need without exposing the original data. To understand this, let's look at a scenario. Imagine we're creating a
web app that sells products online. At some point, a customer initiates a purchase. And in that
transaction, we need to accept sensitive information, like a credit card number. Also imagine that
whether due to regulation or policy, we're not allowed to store that credit card number once the
transaction has completed.

No problem, except that we do want to store the transaction related information for our
own records. So what do we do? Tokenizing provides us with a solution. We're not going to store
the entire credit card number. We're going to replace some of the characters on the credit card
number with strings of characters and other symbols retaining only the last four digits of the
credit card number.
So instead of storing the entire credit card number, we're creating a token. A random value, or a value based on the transaction information, is generated, and that token is combined with the last four digits of the credit card number.
So we're storing this long string that represents the credit card number used with the transaction.
It's not the actual credit card number. And therefore, if someone were to gain access to that
token, we've mitigated the risk successfully. Because outside of the context of the transaction
that's taken place, it has no relevance at all.
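A minimal sketch of that idea in Python, assuming a hypothetical `tokenize_card` helper: the token keeps only the last four digits, and the full card number is never written to our records:

```python
import secrets

def tokenize_card(card_number: str) -> str:
    """Replace a card number with a random token that retains only the
    last four digits; the full number is never stored in our records."""
    last4 = card_number[-4:]
    return f"tok-{secrets.token_hex(12)}-{last4}"

token = tokenize_card("4111111111111111")
print(token)  # e.g. tok-8f1c2b9a0d4e7f3a1b5c6d7e-1111
```

Because the token is random, it has no mathematical relationship to the original card number; stealing it outside the context of our transaction records yields nothing.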

Sandboxing

In a secure development life-cycle, sandboxing provides an application boundary that prevents the application from accessing the host operating system and its resources. This results
in a more secure environment, since code cannot perform malicious actions outside of the
application boundary or sandbox. A sandbox works because it's designed to give any untrusted
or unverified code or software a place to run or execute in a safe environment.
It keeps the system safe from that untrusted code. The sandbox environment works by
limiting access to resources on the host system and its underlying operating system. Effectively
restricting software running in the sandbox from freely accessing system resources. And by
system resources we're including memory, as well as network resources. It also includes program
or device resources, as well as the host operating system's filesystem. The level of protection
provided by the sandbox varies depending on the implementation. However, in some scenarios,
explicit permissions may be granted to an application if required. So it's still a pretty flexible
approach.

Static and dynamic testing

A crucial part of developing and managing web applications is periodic testing. So there
are a number of different types of software testing that we should be aware of, one of those is
Dynamic Application Security Testing, otherwise called DAST. This means that we are
observing the application while it's actually executing, so it's runtime execution testing. Maybe at that time we will interact with the application, and if it has a user front-end, we might type things in and submit them just to make sure things are working correctly and securely, and that we can't input special characters or malicious scripts into fields or pass them in as URL parameters.
Static Application Security Testing, otherwise called SAST, is often done with a code review. We're looking for any programming flaws or logic flaws related to security before the application ever runs. The code might need to be compiled, or it might be interpreted, but either way, code reviews are very important for detecting security issues at this level.
Sandbox Testing is a type of testing where we have a safe environment. It's not a
production environment. It's a testing environment where we can test things like configuration
changes that might be at the operating system level, maybe config changes to a web-server stack
and of course changes that are made to code, or a web application. Safe testing environments
also allow us to test malware. We might want to see how our application deals with the latest
worm malware that propagates itself over the network.
So all of these things are considered to be Sandbox Testing, but how do we actually
implement that? So it sounds great. What do I do so that I have a sandbox to test these things in,
even to detonate malware? Well, you could have an isolated network for starters, whether it's a
physical VLAN not connected to another network, like a production network or whether you are
doing that through virtual networking in the cloud or even on-premises. Using Virtual machines
that again do not have a network link to a production network could be a great way to perform
sandbox testing.
These days, a lot of larger applications are built using one or more application containers, and so each application container can then be tested by itself, independent of other
application containers.
Unit Testing is a type of testing that applies to one unit or one chunk of code. Now this
lends itself to microservice decoupling. What does that mean? What is a microservice? Well,
instead of having one entire large application, these days more complex apps consist of smaller
modules; each module is function-specific, and they're called microservices. By decoupling them, what we're really saying is that we have independent chunks of code that can be changed, tested, or updated independently of one another. So code execution is isolated from other app functions. So imagine that within a
larger app, we have one microservice that is focused on one task like printing and another
microservice or task might be focused on things like rendering images to thumbnails, so those
would be two application containers, presumably, if we're using application container
environments.
We can make changes to those application containers one at a time, or different developers can change them at the same time, since they are independent of one another. So you could say then that testing,
security, and high availability are all isolated in our example at the microservice or application
container level. Another type of software testing is Integration Testing. So imagine that we have
two modules or they could be application containers for a larger app module A & B. Well, they
might work just fine under themselves, and we might have performed unit testing on each and
everything checks out and succeeds. But what about when those two components work together?
Maybe the functionality of those two modules or microservices is such that they periodically need
to interact or share information, and that's where the integration testing comes in. Do those
components work correctly together? So we're checking the functionality of those units if you
will or containers among one another.
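A small sketch of unit and integration tests using Python's `unittest` module. The two functions, `normalize` and `greet`, are hypothetical stand-ins for two microservices; each is tested in isolation, then the two are tested working together:

```python
import unittest

# Two hypothetical "microservice" functions.
def normalize(name: str) -> str:
    return name.strip().lower()

def greet(name: str) -> str:
    return f"Hello, {name}!"

class UnitTests(unittest.TestCase):
    # Each unit is verified in isolation.
    def test_normalize(self):
        self.assertEqual(normalize("  Alice "), "alice")

    def test_greet(self):
        self.assertEqual(greet("alice"), "Hello, alice!")

class IntegrationTests(unittest.TestCase):
    # Then the two units are verified working together.
    def test_pipeline(self):
        self.assertEqual(greet(normalize("  Alice ")), "Hello, alice!")

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False, verbosity=2)
```

If a change to `normalize` breaks the combined behavior, the unit tests may still pass while the integration test fails, which is exactly the gap integration testing is meant to close.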
Regression Testing is a type of testing where we kind of want to look back. Now we
might have made new changes, but have these changes introduced bugs. Maybe they are new
bugs, but maybe they are bugs that were triggered by older code, so we want to make sure that a
new change has not introduced problems. And then of course we have application Fuzzing. This
is a type of testing where we flood an app like a web application with data that it just simply isn't
designed for or doesn't expect. We might even do some load and stress testing to see how much
of this it can handle before the app simply doesn't respond, or perhaps even crashes. So we need
to observe the behavior of the app when we feed it this unanticipated data. And fuzzing can be
done with a number of different tools like the OWASP ZAP tool, now that's a web app
vulnerability scanning tool. It's also a proxy tool which can modify transmissions between clients
and web apps to observe the behavior, but also OWASP ZAP and tools like Burp Suite will also
allow you to perform fuzzing against an application to observe the results.
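At its core, fuzzing just means feeding a program lots of unanticipated input and watching for unexpected failures. This toy sketch (the `parse_age` handler is hypothetical; real fuzzers like ZAP or Burp Suite are far more sophisticated) throws a thousand random strings at an input handler and records anything other than a cleanly handled rejection:

```python
import random
import string

def parse_age(raw: str) -> int:
    """Hypothetical input handler under test."""
    age = int(raw)           # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

random.seed(42)  # reproducible fuzzing run
unexpected = []
for _ in range(1000):
    blob = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_age(blob)
    except ValueError:
        pass  # an expected, handled rejection - this is what we want
    except Exception as err:
        unexpected.append((blob, err))  # anything else is a bug to investigate

print(len(unexpected), "unexpected failures")
```

Any entry in `unexpected` represents input the handler didn't anticipate, which is precisely the behavior we want to observe before an attacker does.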

Vulnerability scanning and penetration testing

Vulnerability scanning for web applications and perhaps ultimately running pen tests,
first begins with Reconnaissance. Reconnaissance is really about information gathering what's on
the network and how is it configured. So discovering active network hosts, perhaps even
determining if there's Malware infections on host. Or looking at IP addresses that are in use or
DNS records, that might imply by name the purpose of a server. For example, a server called
accounting.local might be an accounting server. Also, scanning ports to see which services are in a listening state on machines. For example, how many servers on the local subnet
have a listening HTTP web server stack.

Another aspect of scanning for vulnerabilities is Enumeration. Now, Enumeration is something that not only we might take note of as security technicians, but that malicious users
can also enact. Dumpster diving might allow them to discover documents that contain sensitive
information like names or IP addresses of web servers or configuration details. Shoulder surfing is observing, over a short distance, what people are doing: what's on their screens, what kind of passcodes they're typing in. Social media scraping is another way to determine, for example, the names or email addresses of technicians at an organization. From a malicious user's standpoint, that information could be used to try to break into user accounts, since we would already know what a username might be for technicians at the company.
If we were to scrape social media, including LinkedIn profiles, for instance, we might
learn that an organization has a newly hired expert in Apache web server configurations and so
that tells us that more than likely they're using Apache Web servers as their web hosting
environment. Enumerating user accounts. In other words, going through a list of user accounts,
perhaps in a compromised host and then using dictionary-based password attacks to try to figure
out the valid username and password combination that would allow attackers to gain access to
that account.

Now there are a lot of tools out there that will allow us to work with scanning like Nmap.
Nmap stands for Network Mapper, and it allows us to map out which hosts on a network are
active and responding at that point in time when the scan was run. And Nmap also allows us to
identify which services are running on those hosts, so if they're running SMTP servers or FTP or
HTTP. Nmap even has an OS fingerprinting mode where it can try to determine what kind of an
operating system is running on the discovered host.
Whether it's, for instance, Linux based or Windows based? Nessus is another common
scanning tool, but this one is more than just network mapping like Nmap. It can do what Nmap
does, but more because it's actually a vulnerability scanning tool. Now vulnerability scanning
tools means that they discover not only active hosts and listening services, but can also drill
down a little bit to see if there are any known vulnerabilities with those running services. Other
types of vulnerability scanning tools would include things like OpenVAS and the Commercial
Languard Vulnerability scanning tool.
You might even write your own custom Bash scripts in Linux or PowerShell scripts, for
example in Windows with loops that are designed to scan through active hosts on a network,
maybe to identify running services and maybe even check on vulnerabilities. But the thing about
all these is that they're very loud. In other words, these scanning tools can be detected on a
network with intrusion detection sensors and so that's good to know. We have a level of
protection at the network potentially, and that individual hosts that can detect that these scanning
tools are being used.
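As a sketch of what such a custom script boils down to, here is a minimal TCP connect-style port check in Python. To stay self-contained (and to avoid scanning anything we don't own), it starts its own listener on loopback and then probes it; `scan_port` is a hypothetical helper, not a replacement for Nmap:

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    meaning a service is in a listening state on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we start ourselves on an ephemeral port,
# so the sketch never touches hosts we don't control.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(scan_port("127.0.0.1", open_port))  # True - something is listening
listener.close()
```

Even a loop over a port range with this function would be "loud" in exactly the way described above: each probe is a real connection attempt that an intrusion detection sensor can log.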
Vulnerability Scanning requires an up-to-date asset inventory. So you have to know what
you have before you can scan it, and then some kind of a baseline comparison of what's normal,
such as devices we expect to appear on a network from our first vulnerability scan and how
they're configured, and even details about specific web applications and any vulnerabilities that
might have been there previously that we have hardened and no longer show up when we
conduct a new vulnerability scan. Vulnerability Scanning might allow us to determine
weaknesses in business processes, network environments, whether wired or wireless, hosts,
devices. There could be physical security weaknesses like unlocked doors to server rooms and of
course, to web applications.
Now in this screenshot we have a vulnerability scan result that shows us that we have a
couple of medium types of vulnerabilities that are showing up according to the legend on a
couple of the hosts which are shown on the left in the screenshot listed by IP address. So
Vulnerability Scanning can be done for a network device or a host. It can identify running
services and, of course, vulnerabilities that are related to those services. So we need to make sure our
vulnerability database is always kept up to date.
Vulnerability scanning tools will often have a function built in where you can update the
vulnerability scanning database so that when you conduct a scan you're looking for the latest
known vulnerabilities. You can configure a credentialed scan, which means you're providing
credentials, maybe to log in to remote web servers, so you can fully examine the configuration on that web server host and, of course, the apps that it's running. Or you can configure non-credentialed scans, which give you more of a sense of what attackers who know nothing about your environment, the web server, or the web app might be able to determine when running a vulnerability scan. These scans should be run periodically, and we can schedule them, because we know that threats change periodically.

So we know that vulnerability scanning is passive. It identifies weaknesses, but exploiting those discovered weaknesses falls under the realm of Penetration Testing, or Pentesting. With
Pentesting, there needs to be rules of engagement agreed on by both parties. Now those parties
are the pen test team and the system owners. For example, there might be some production systems that should not be exploited, even though the pen test team might discover vulnerabilities in them, because doing so might affect business processes. Often Pentesting means signing a Non-disclosure Agreement,
an NDA by the Pentesters in case they come across sensitive customer or trade secret
information within the organization. So this is invasive then, as opposed to a vulnerability scan
which is passive. Now with Pentesting, the red team is the pen test team; they are on the offensive, whereas the blue team is defensive. This is often the security team within the
organization that responds to incidents that are generated by the red team during Pentesting.

Penetration testing can come in many different forms. It might involve social engineering: sending email messages to employees to try to trick them into clicking links or opening files as part of a pen test, or making phone calls and trying to trick people into divulging sensitive details such as web application login credentials. It can even happen in person: someone dressing the part, showing
up in an office environment and somehow tricking people into perhaps divulging sensitive
information. Or allowing them access to a server room where we might have unencrypted data
from a web app stored in a disk array. So periodic vulnerability scanning and Pentesting is
absolutely crucial for the utmost in security for web applications.
