SCTunit 3
Programmatic Security
Programmatic or imperative security means the security rules are implemented within the application code. One convenient way to do this is to place all the rules in a component that other applications can call as well. Either way, the security rules are embedded in the code itself, and the application code acts as the container for them. So let's compare these two approaches.
With imperative security, the rules are defined as part of the code, whereas with declarative security, the rules are defined inside a configuration file. So with imperative security we have a lot less flexibility: the rules are the rules regardless of where the application is deployed. With declarative security, we can alter those rules with each deployment of the application, so we get more flexibility.
With an imperative approach, we can enable the enforcement of complex business rules
within the code itself, which are not possible under the declarative security approach. These
complex business rules are going to be written inside a component that's going to be reused with
different applications. Enforcing security rules as part of the code means that every
implementation of that code will have the same security rules enforced.
So the code is a little less flexible and a little less portable, since there are specific business rules built into that code that don't necessarily apply everywhere. Choosing imperative or declarative security is a design consideration that we make when planning the security model for the application at design time.
Once we've decided on how we will implement the security rules, then we design the
system based on the chosen security approach. Then, based on that design, we can build the
required protections as part of the secure development lifecycle.
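The design choice above can be sketched with a minimal Python comparison; the role names and the JSON policy format here are hypothetical, purely for illustration:

```python
import json

# Imperative: the rule lives in the code itself.
def can_delete_order_imperative(user_role: str) -> bool:
    # The rule is fixed at build time; changing it means changing code.
    return user_role in ("admin", "manager")

# Declarative: the same rule is read from a configuration document,
# so each deployment can ship a different policy without a rebuild.
POLICY_JSON = '{"delete_order": ["admin", "manager", "auditor"]}'

def can_delete_order_declarative(user_role: str, policy_json: str = POLICY_JSON) -> bool:
    policy = json.loads(policy_json)
    return user_role in policy["delete_order"]
```

Note how the declarative version grants "auditor" access simply because the policy document says so; the imperative version would need a code change and redeployment to do the same.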
Concurrency
Configuration
Cryptography
Cryptography involves the use of cryptographic functions within an application. Cryptographic functions can perform a number of different actions. There are hashing functions, which hash the data to generate a hash value that can be used to verify the integrity of the data. There are encryption functions to encrypt the data, as well as decryption functions that can decrypt the data later when required.
Cryptographic vulnerabilities, which leave data exposed to attackers, are a significant security issue: they expose the organization's sensitive data to hackers or other unauthorized parties. That's why it's so vital to consider carefully how the cryptographic functions are being implemented. We need to scrutinize the implementation of both the hashing functions and the encryption functions.
Typical issues that lead to cryptographic vulnerabilities include simply not encrypting sensitive data. All sensitive data within an application should be encrypted, including any sensitive user data and passwords, as well as the application's configuration data, like connection strings and encryption keys.
Encryption keys are so crucial and central to the cryptographic function that we need to make sure we're securing them. If we're storing an encryption key inside a config file, then we need to make sure the config file itself is encrypted. Another common issue is using dated cryptographic APIs.
Upgrade often, staying alert to any updates to cryptographic APIs. A developer who has become comfortable with an older, dated cryptographic API may be tempted to use that same older API with a newer application. So that's another important issue to be aware of; it's simply good practice to use the newer APIs.
Let’s consider some mitigation techniques that can help secure cryptography within our
application environment. We need to protect all our sensitive data at rest by encrypting that
information. We need to also pay attention to trust boundaries, do not allow sensitive data to
cross trust boundaries. For example, in larger organizations, you'll have different types of
sensitive information and different network segmentations. So in this type of setting, we
definitely want to make sure that no unprotected sensitive data travels from a very secure
network segment into one that is not very secure.
Another mitigation technique is to make sure that you're using standard encryption and
hashing algorithms. It is not best practice to create customized logic to perform an encryption or
hashing function. We need to use standard algorithms like, for example, AES as a standard
encryption algorithm. In a secure development life cycle, we want to make sure that our software
has cryptographic flexibility. So we know that over time, weaknesses in algorithms are
discovered, and so we need to replace those algorithms.
Therefore, cryptographic flexibility refers to making sure that our code is designed to allow for changing those algorithms. Algorithms should not be hardcoded into the application. Rather, we should design our software so that it can be reconfigured quickly and easily. This involves invoking our cryptographic functions through library calls so that we can manage them via a configuration file.
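One way to sketch this kind of cryptographic flexibility in Python: a hypothetical config dictionary stands in for a configuration file, and the algorithm name is a runtime setting rather than a hardcoded choice.

```python
import hashlib

# Hypothetical config entry; in practice this would be read from a config file.
CONFIG = {"hash_algorithm": "sha256"}

def hash_with_configured_algorithm(data: bytes) -> str:
    # hashlib.new() accepts the algorithm name as a string, so swapping
    # sha256 for sha3_256 or sha512 is a config change, not a code change.
    return hashlib.new(CONFIG["hash_algorithm"], data).hexdigest()
```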
Sanitization
Sanitization is the process of converting information from a format that may be harmful to a format that is not harmful. There are two different ways to implement sanitization.
First is input sanitization. With input sanitization, we're going to sanitize any information
as the data is input before actually attempting to process that information.
Second is output sanitization, where we sanitize the information after it's been processed, but prior to it being presented to users.
Typically, we perform output sanitization by encoding the information. For example, consider a web application. We would convert a greater-than sign to its HTML equivalent, the encoded entity &gt;. The user's browser receives the data, interprets the &gt; entity, and renders it as a greater-than symbol. One thing to keep in mind as a developer when implementing sanitization is that we always want to maintain the integrity of the data that's being input. You don't want to change the value of the data, and you definitely don't want to change the meaning of the data.
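In Python, this kind of output encoding can be sketched with the standard library's html module:

```python
import html

def encode_for_html(value: str) -> str:
    # Encode markup-significant characters; "<script>" becomes
    # "&lt;script&gt;", which the browser renders as text, not markup.
    # The value and meaning of the data are preserved for the user.
    return html.escape(value)
```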
With Input sanitization, one technique involves stripping information out. So as the data
is input into the application, we parse the information, and we remove any unwanted or harmful
characters from the user input. So let's consider some examples of these unwanted characters.
Characters typically used in injection attacks include the apostrophe or single quote, and double
dashes. So we would strip those out.
Consider cross-site scripting attacks. With cross-site scripting attacks, malicious content
is sent to a browser, often taking the form of a segment of JavaScript. So we would parse the
input and remove any script tags, as well as any other potentially harmful characters like the
forward slash.
Another sanitization technique is substitution. Instead of removing unwanted characters, we could replace them. For example, the apostrophe or single quote could be replaced with a double quote to prevent potential SQL injection attacks. Literalization is another sanitization technique.
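The stripping and substitution techniques above might be sketched in Python like this; the patterns shown are illustrative, not a complete defense:

```python
import re

def strip_dangerous(value: str) -> str:
    # Stripping: remove script tags (cross-site scripting) and the
    # single quote and double dash (common in SQL injection attempts).
    value = re.sub(r"(?i)</?script[^>]*>", "", value)
    return value.replace("'", "").replace("--", "")

def substitute_quotes(value: str) -> str:
    # Substitution: replace single quotes instead of removing them.
    return value.replace("'", '"')
```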
Error Handling
There are two very important protective actions that developers can take in the
development of secure applications. We have input validation and output error handling.
When we concentrate on output error handling, we're controlling the error messages displayed by our application so that those messages don't accidentally display sensitive information to the user when an error does occur. We know that errors do happen, and when we're handling them, we need to ensure that no sensitive information is included in the error message; for example, database names, server names, or even usernames that are being used to make a database connection.
Input validation also plays an important part in error handling, because we know that
attackers will purposely inject malicious input to try to cause errors. One way to prevent error
messages from exposing sensitive information is to prevent errors from happening in the first
place. And this may be accomplished by performing input validation to avoid invalid data.
We're going to check for invalid, or harmful characters, or invalid information before
attempting to process the data. That alone will help reduce the number of errors and therefore
prevent some error messages from occurring at all. Then we can look at those errors that do
legitimately occur. Let's consider some error handling best practices. First, we want to use non-verbose error messages. One good example: instead of displaying a message indicating that the username was invalid, or that the password was invalid, in a user login form error, we would specify something a little more general, a message like "credentials are invalid" or something along those lines.
The information should not be detailed. If we were to indicate that it was the username or the password that was invalid, we'd be telling the hacker something valuable. Maybe they got the username right, and we've just confirmed that only the password is wrong; that's not good practice. So it's crucial to provide only generic, non-detailed error messages.
Another step to take, looking specifically at error handling: when an error occurs, some sort of action must be taken, and we want to make sure the action is always taken such that the application fails in a secure state. Let's say, for example, someone is attempting to log in to our web application and they're typing the wrong username and password. Depending on the situation, we may consider allowing only a limited number of attempts, say three; after the third attempt, we would take action. A secure approach to failure would be to make sure that after the third attempt the action we take is to lock the account.
An important term to understand in this context is the clipping level. This is the number of errors allowed before action is taken; the clipping level is treated as a threshold after which we take secure action. So, in our example of the failed login, the clipping level is three.
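A minimal Python sketch of a clipping level of three driving an account lockout; the tracker class and its names are hypothetical:

```python
CLIPPING_LEVEL = 3  # failed attempts allowed before secure action is taken

class LoginTracker:
    def __init__(self):
        self.failures = {}
        self.locked = set()

    def record_failure(self, username: str) -> bool:
        """Record a failed login; returns True once the account is locked."""
        if username in self.locked:
            return True
        self.failures[username] = self.failures.get(username, 0) + 1
        if self.failures[username] >= CLIPPING_LEVEL:
            # Fail to a secure state: lock the account at the clipping level.
            self.locked.add(username)
        return username in self.locked
```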
Input Validation
As a general rule, when developing an application, all data input should be treated as
malicious input. Let's say that you're developing a public facing web application. We would be
wise to assume that every time someone fills some form data that the information is coming from
a malicious hacker, and the input is malicious. Accordingly, it's important that we validate all
information passed into the application before processing the information. So before we send that
data to the database server, we'll perform a rigorous validation on it. Let's consider some
examples of techniques that we can use to validate the information.
We can verify the data type. Say we are expecting to have some date data type input. We
can first verify that it's a date data type and that it's a valid date. Maybe we're expecting a
numeric value and we want that to fall within a range of values. So for example, say we're asking
for the year of a car model for an automotive parts website. Let's ensure that it's a numeric value
and that it falls within the range of, say, 1970 to the current year. It's also important to check for illegal or harmful characters. For example, we could check the input for the presence of an apostrophe (single quote) or double dashes. These are the types of characters that are typed into a field when someone is attempting to launch a SQL injection attack.
We should always verify the data length, both the maximum length of the data and the
minimum length of the data. Regarding the maximum, let's say that we're asking for a user's
address, so probably 60 characters max should suffice for a street address and a number. We'll
therefore specify 60 characters as the maximum length, which may help prevent a malicious
actor from using the input form to launch some type of injection attack like a SQL injection
attack.
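The type, range, length, and harmful-character checks above can be sketched in Python; the 60-character address limit and the 1970 cutoff follow the examples in the text:

```python
import datetime

def validate_model_year(raw: str) -> int:
    # Type check: must parse as an integer.
    try:
        year = int(raw)
    except ValueError:
        raise ValueError("model year must be numeric")
    # Range check: 1970 through the current year.
    if not 1970 <= year <= datetime.date.today().year:
        raise ValueError("model year out of range")
    return year

def validate_address(raw: str) -> str:
    # Length check plus a simple harmful-character check.
    if not 1 <= len(raw) <= 60:
        raise ValueError("address must be 1-60 characters")
    if "'" in raw or "--" in raw:
        raise ValueError("address contains disallowed characters")
    return raw
```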
Some common tools and techniques we can use for validating data include regular expressions. We can use regular expressions to verify both the input format and the input values by checking for patterns of characters. Let's consider, for example, an application where the user is supposed to type in a product ID. Typically, product IDs exhibit some type of pattern. We can use a regex to verify that pattern; for example, that the first six characters are letters and that the next three are numbers. We can also do the same thing for email addresses, something very common in web applications. There are regular expressions to ensure that the email address is specified in the proper format. For example, we may see a group of characters followed by an @ symbol, another group of characters, a dot or period, and then another group of characters. So that's the pattern for this type of email address. Something else we can do is verify the input against a whitelist or a blacklist.
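A Python sketch of these pattern checks, plus a simple whitelist test; the six-letters-plus-three-digits product ID format and the simplified email pattern are illustrative only:

```python
import re

# Hypothetical product-ID pattern: six letters followed by three digits.
PRODUCT_ID = re.compile(r"^[A-Za-z]{6}[0-9]{3}$")

# Simplified email pattern: characters, "@", characters, ".", characters.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Whitelist: only these characters are allowed in a product ID at all.
ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

def is_valid_product_id(value: str) -> bool:
    # Both the pattern and the whitelist must pass.
    return bool(PRODUCT_ID.match(value)) and set(value) <= ALLOWED
```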
The whitelist is a list of characters that are allowed to be specified, while the blacklist
specifies a list of characters that are disallowed. So we verify against these lists each time we
elicit input from the users. We can validate input at the client and at the server, or at both ends if
necessary depending on the situation. For example, quite often, web applications will implement
validation at the client and the benefit of validating at the client is that the user sees the results
immediately. As they are navigating from one field to the next in an input form, they're going to
see error messages updated in real time. And so they can fix the problem and having done so, the
web app software removes the error message accordingly.
The point is that real-time visual feedback is one of the excellent things about client-side validation, but that's not all. It also serves to reduce network traffic. Rather than submitting that input data across the network to the server, we're validating on the client side, saving perhaps many round trips with invalid data as the payload. This can be significant for busy sites with tens of thousands of users. We can also validate at the server side. This is actually recommended in scenarios where we have an application passing data to the server, like a web or mobile application. We should always check data passed to the server. Many implementations will actually perform validation at both the client and the server.
Session Management
To start the login process, users authenticate to the application. Access is then granted, and at this point, having been successfully authenticated, a session is established and the user is provided with a session ID. It's important to understand that hackers can obtain the session information and use it to impersonate the user; there are different types of session attacks.
First, a session hijack attack occurs when a hacker takes over the conversation, leveraging the session information to impersonate the user. Another common session attack is the man-in-the-middle attack.
In this scenario, the hacker places themselves between the two parties that are
communicating. As an example, consider someone surfing the web at an Internet cafe. The
hacker could place themselves between the user surfing the net and the actual Internet website
itself. If successful, the hacker could gain access to confidential information. Once the hacker obtains the session information, they are able to impersonate the user. The hacker takes the user's session ID and begins sending requests to the web application, impersonating the user and gaining access to all the information that the user would typically have
access to. It's vitally important that we code in some mechanism within our application to make
it possible for the application to distinguish between an impersonated session and the actual
session. So how can we do that? Fortunately, there are a number of different techniques available
to help us. Let's consider the security token. A security token can be used together with the
session information to ensure that the request is actually coming from the valid user and not from
the hacker.
So once again, let's consider a web application, an ASP.NET MVC application. In the HTML for the app, we could write a statement that generates an anti-forgery token. On the client side, for every payload or form submission that's sent up to the server, a token is generated. This token represents part of the session ID, and the pivotal aspect here is that it is generated on the client. So when the form data is sent up to the web server, the token is sent along with it. The server verifies the token against the user's session ID. If there's a match, then the server accepts the form data. In this context, it's important to understand that you also have to force the server to verify, or check, the token.
If we force the server to check for the existence of the token along with the session ID, then even if a hacker successfully retrieves the session information, when they attempt to submit the information they will have only the session ID. The hacker won't also have the correct token, so the server will not accept the form data.
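One common way to implement such a token, sketched in Python, is to derive it from the session ID using a server-side secret and an HMAC. This is an illustration of the general idea, not ASP.NET's actual anti-forgery implementation:

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # hypothetical server-side key

def make_token(session_id: str) -> str:
    # Derive a token bound to the session ID; without SERVER_SECRET,
    # an attacker who steals the session ID cannot compute a valid token.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_request(session_id: str, token: str) -> bool:
    # Reject the request unless the token matches the session ID.
    return hmac.compare_digest(make_token(session_id), token)
```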
Exception Management
Exceptions are errors that occur due to unexpected actions within the application; these are different from typical syntax or logic errors. For example, consider a web application where your code is fetching data from a database. If not handled correctly, you may end up disclosing connection information in the exception message. When an unhandled exception occurs, typically an exception message is displayed, and it could potentially disclose information about the application that you really don't want a hacker to know.
The objective should be to catch those exceptions, which means that you need to make sure that you're testing the application for all possible outcomes. Imagine our web app is prompting a user for information. We need to ensure, for example, that under testing we use different erroneous types of data. We want to cause exceptions to occur so that we can understand those exceptions. Then we'll be in a position to handle them.
A typical example of exception handling is when someone enters inappropriate or erroneous data into an application, or the wrong data type. This typically results in an invalid cast exception. When testing the app, we see this exception or error occur and we make note of it, noting, for example, that it is an invalid cast exception.
Once we've tested the app thoroughly, and we've compiled a list of all the different
possible exceptions, then we can implement appropriate exception handling. We'll create some
logic to catch the exceptions and display user-friendly messages that avoid disclosing sensitive
information. When an exception occurs, as we learned, they may disclose sensitive data or
sensitive information about the code itself. If the exception occurs in relation to a call to a
database, sensitive information like a username that the application is using to make the
connection to the database may be displayed. The exceptions may disclose information about file locations, or about which file the exception occurred in, or the exception may even display a stored procedure name from the database. This is all tremendously valuable information to a hacker
trying to compromise an application. The goal is to generate exceptions as we're testing the app
and catch those exceptions. When catching the exceptions, instead of allowing system defined
error messages that disclose sensitive information, we'll create custom error messages with data
that's not valuable to hackers. An important exception management technique involves using try/catch blocks. If the programming environment supports try/catch blocks, we'll use them to catch the errors and display some type of user-friendly error message.
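In Python, which uses try/except rather than try/catch, a minimal sketch of catching an exception and returning a generic, non-verbose message looks like this; the function and message are hypothetical:

```python
def fetch_order_total(raw_amount: str) -> str:
    try:
        # Normal path: parse and format the amount.
        return f"Total: {float(raw_amount):.2f}"
    except ValueError:
        # Caught exception: return a generic message with no stack trace,
        # no internal names, and nothing valuable to an attacker.
        return "Sorry, we could not process your request."
```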
Safe APIs
An application programming interface, or API, is a library of code that developers can
call upon in order to access some specific type of functionality. There is a seemingly endless number of APIs available providing different types of functionality, like Microsoft's Crypto API and Python's pycrypto, which both provide cryptographic functions. Then there are social media platforms like Facebook, Google, and Twitter providing their own APIs that programmers can tap into to incorporate aspects of those services, and there are countless others.
There are also numerous cryptographic libraries that we can call upon to leverage hashing
functions and cryptographic functions, like encryption functions and decryption functions. With
all the APIs available, it is critically important to ensure that the APIs that we are calling are
considered secure. For example, older or dated APIs may not have followed secure coding
practices. So it's important that an organization assess the APIs being called from within their
applications and make sure that those APIs are considered secure or safe. When identifying the
threats to our application architecture, we need to make sure that we are examining any APIs
being called. Some security considerations with respect to APIs include banned APIs. We need
to ensure that our code is not calling banned APIs.
The same goes for deprecated APIs. Banned and deprecated APIs are those that have already been identified either as old APIs that should no longer be used or as unsecure APIs, so they have been banned. We need to review our application's code, compile a list of any such APIs that the organization may have used in the past, and replace banned or deprecated APIs with similar, newer, secure APIs. If the organization is creating its own API as an interface into our application, then there are certain secure practices we need to follow.
First, we need to ensure that any requests sent to the APIs are authenticated. If the
functionality isn't something for public consumption, there has to be some sort of authentication
method for each of the calls to the API itself. We also need to make sure that we audit access to
the API, and any of the calls being made throughout that API, since auditing is a critical
component of secure coding practice. In addition, if we need to maintain confidentiality, we need
to make sure that our API is encrypting any sensitive data, especially if the API is exposed across
the web. We need to make sure that we encrypt passwords and all authentication traffic, as well
as any other related sensitive information, for example, credit card data.
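A minimal Python sketch of authenticating and auditing API calls; the key store, client name, and audit log structure are all hypothetical:

```python
import secrets

# Hypothetical API-key store; real systems would keep hashed keys in a database.
API_KEYS = {"client-app": secrets.token_hex(16)}
AUDIT_LOG = []

def handle_api_request(client: str, presented_key: str, action: str) -> bool:
    # Authenticate every call, and audit both successes and failures.
    ok = secrets.compare_digest(API_KEYS.get(client, ""), presented_key)
    AUDIT_LOG.append((client, action, "allowed" if ok else "denied"))
    return ok
```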
Type Safety
Type Safety is a feature of many programming languages that can help prevent data type
errors. A data type error occurs when a developer treats a data element as a different data type than the one it actually is. For example, a developer may try to store a float value in an integer variable. This produces an error, since float values can be much larger than what integer values can hold, and consequently an integer variable cannot store a float value. There are different implementation methods for type safety.
There are static methods and dynamic methods. Static type safety involves assigning the
data type at design time. So when we create a variable, we assign the data type at that time. The
compiler catches any type errors at compile time. So, for example, if we declared a variable as an integer at design time and then attempted to store a float value in that variable, the compiler would catch it as an error, and we would have to address the problem before the
application can be compiled successfully. With dynamic type safety, we're assigning the data
type at runtime. In this case, the compiler won't be able to catch any type errors at compile time.
For this reason, when we use dynamic types, it's critical that the application is tested thoroughly,
verifying that there are no type errors at runtime.
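Python itself is dynamically typed, so it makes a handy illustration of the dynamic case: a type mismatch is only detected when the code actually runs, not at compile time.

```python
def add_lengths(a, b):
    # With dynamic typing, a type mismatch here is only caught at runtime.
    return len(a) + len(b)

# Works for any arguments that support len()...
total = add_lengths("ab", [1, 2, 3])

# ...but passing an int surfaces a TypeError only when the call executes,
# which is why thorough runtime testing matters in dynamic environments.
try:
    add_lengths("ab", 42)
    mismatch_caught = False
except TypeError:
    mismatch_caught = True
```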
Memory Management
Memory management is a programming concept involving the management of resources
residing in-memory. When managing memory resources, we're responsible for ensuring that
resources do not stay in memory if they're no longer used or no longer needed. Memory
management is a pretty complex undertaking due to the dynamic nature of memory. Items are
constantly being loaded in and removed from memory. So memory management and allocation
is a shared responsibility between the operating system and the applications running on top of the
operating system. In the context of memory management, we classify code as one of two types.
With unmanaged code, it's the responsibility of the programmer to manage and clean up
memory. For example, garbage collection operations, thread pooling and similar processes, they
are all manual processes. When selecting a programming language environment, that is one consideration to take into account, because we'll want to know whether it's a managed-code or unmanaged-code environment. Managed code, by contrast, handles memory management transparently; it's basically a function of the runtime. In the .NET environment, for
example, the CLR or Common Language Runtime takes care of operations like garbage
collection.
Type safety is another memory management concept that's important to understand. Type
safety is directly related to memory safety. And memory safety means that a process or an
application, can only access memory that's been allocated to it. It cannot access memory
locations that it hasn't been authorized to access. This is another consideration that's important
when deciding on a programming language, whether or not we want type safety.
Benefits of type-safe environments include the fact that they prevent data type errors. Type-safe code remains within its expected memory range, so it can only access memory that's been allocated to it. In addition, type-safe environments isolate assemblies from each other. For example, a .NET assembly is a file that contains compiled code that can execute within the Common Language Runtime. Assembly isolation results in more reliable code,
more reliable applications. Another key memory management concept is locality.
Locality is the principle that describes the predictability of memory references. For example, when an application writes to memory, say to memory blocks A, B, and C, the next memory block it references will likely be block D, and then E. So it's easy to predict the memory addresses that an application is going to use; it's predictable. And this is actually not a great thing, because it means that sophisticated hackers can also predict memory address locations.
These types of applications are prone to buffer overflow attacks. Some application
environments support address space layout randomization which helps defend against locality
attacks by ensuring that memory addresses that are going to be referenced are randomized. So
they're not so predictable.
Tokenizing
Tokenization involves taking sensitive data and replacing it with unique symbols that retain essential information without exposing the sensitive data itself. To understand this, let's look at a scenario. Imagine we're creating a
web app that sells products online. At some point, a customer initiates a purchase. And in that
transaction, we need to accept sensitive information, like a credit card number. Also imagine that
whether due to regulation or policy, we're not allowed to store that credit card number once the
transaction has completed.
No problem, except that we do want to store the transaction related information for our
own records. So what do we do? Tokenizing provides us with a solution. We're not going to store the entire credit card number. We're going to replace some of the characters of the credit card number with strings of characters and other symbols, retaining only the last four digits of the credit card number.
So instead of storing the entire credit card number, we're creating a token. A random value, or a value based on the transaction information, is generated, and we combine that generated token with the last four digits of the credit card number.
What we store is a long string that represents the credit card number used in the transaction; it's not the actual credit card number. Therefore, if someone were to gain access to that token, we've successfully mitigated the risk, because outside the context of the transaction that's taken place, the token has no relevance at all.
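A minimal Python sketch of this tokenization scheme; the token format shown is made up for illustration:

```python
import secrets

def tokenize_card(card_number: str) -> str:
    # Keep only the last four digits; replace everything else with a
    # random value that has no mathematical relation to the card number.
    last_four = card_number[-4:]
    token = secrets.token_hex(8)
    return f"tok_{token}_{last_four}"
```

The stored string is useless outside the transaction record, yet it still lets us display "card ending in 1111" for our own bookkeeping.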
Sandboxing
A crucial part of developing and managing web applications is periodic testing. So there
are a number of different types of software testing that we should be aware of, one of those is
Dynamic Application Security Testing, otherwise called DAST. This means that we are
observing the application while it's actually executing, so it's runtime execution testing. We may interact with the application as it runs, and if it has a user front end we might type things in and submit them, just to make sure that things are working correctly and in a secure manner, and that we can't input special characters or malicious scripts into fields or pass them through as URL parameters.
Static Application Security Testing, otherwise called SAST, is often done with a code review. We're looking for any programming flaws or logic flaws related to security that can be found before the application is actually compiled, if it's the type of code that needs to be compiled; it might be interpreted, too. Either way, code reviews are very important for detecting security issues at this level.
Sandbox Testing is a type of testing where we have a safe environment. It's not a
production environment. It's a testing environment where we can test things like configuration
changes that might be at the operating system level, maybe config changes to a web-server stack
and of course changes that are made to code, or a web application. Safe testing environments
also allow us to test malware. We might want to see how our application deals with the latest
worm malware that propagates itself over the network.
So all of these things are considered Sandbox Testing, but how do we actually implement it? It sounds great, but what do I do so that I have a sandbox to test these things in, even to detonate malware? Well, you could have an isolated network for starters, whether it's a physically separate network or VLAN not connected to another network, like a production network, or whether you're doing that through virtual networking in the cloud or even on-premises. Using virtual machines that, again, do not have a network link to a production network can be a great way to perform sandbox testing.
These days, a lot of larger applications are built using one or more application containers, and each application container can then be tested on its own, independent of other application containers.
Unit Testing is a type of testing that applies to one unit, or one chunk, of code. This lends itself to microservice decoupling. What does that mean? What is a microservice? Well, instead of having one entire large application, these days more complex apps consist of smaller modules, and each of those modules is function-specific; they're called microservices. By decoupling them, what we're really saying is that we have independent code chunks that can be changed or modified independent of other ones. They can be tested or updated independent of other ones, so code execution is isolated from other app functions. So imagine that within a
larger app, we have one microservice that is focused on one task like printing and another
microservice or task might be focused on things like rendering images to thumbnails, so those
would be two application containers, presumably, if we're using application container
environments.
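To make unit testing concrete, here is a minimal sketch in Python, assuming a hypothetical thumbnail-rendering microservice exposes a `scale_to_thumbnail` helper (the function name and behavior are illustrative, not taken from any specific product):

```python
import unittest

def scale_to_thumbnail(width, height, max_side=128):
    """Hypothetical helper from a thumbnail microservice: scale
    dimensions so the longest side fits max_side, preserving ratio."""
    if width <= 0 or height <= 0:
        raise ValueError("dimensions must be positive")
    factor = max_side / max(width, height)
    if factor >= 1:          # already small enough; don't upscale
        return width, height
    return round(width * factor), round(height * factor)

class ScaleToThumbnailTest(unittest.TestCase):
    """Unit tests exercising this one chunk of code in isolation."""

    def test_landscape_image_is_scaled_down(self):
        self.assertEqual(scale_to_thumbnail(1280, 720), (128, 72))

    def test_small_image_is_left_alone(self):
        self.assertEqual(scale_to_thumbnail(100, 50), (100, 50))

    def test_invalid_dimensions_are_rejected(self):
        with self.assertRaises(ValueError):
            scale_to_thumbnail(0, 50)
```

A runner such as `python -m unittest` would discover and execute the test class; each microservice could carry its own suite like this, exercised independently of the others.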
We can make changes to those application containers one at a time, or at the same time by different developers, because they are independent of one another. So you could say that testing, security, and high availability are all isolated, in our example, at the microservice or application container level.
Another type of software testing is Integration Testing. So imagine that we have two modules, or they could be application containers for a larger app, modules A and B. They might work just fine on their own; we might have performed unit testing on each, and everything checks out and succeeds. But what about when those two components work together? Maybe the functionality of those two modules or microservices is such that they periodically need to interact or share information, and that's where integration testing comes in. Do those components work correctly together? So we're checking the functionality of those units, if you will, or containers with one another.
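The scenario above can be sketched in a few lines of Python. Assume, purely for illustration, that module A formats a document and module B queues it for printing; the unit tests for each might pass, but only an integration test checks the hand-off between them:

```python
# Module A (hypothetical): formats raw text into a print-ready document.
def format_document(text):
    return {"body": text.strip(), "pages": max(1, len(text) // 1000)}

# Module B (hypothetical): queues a formatted document on a printer.
def enqueue_print_job(queue, document):
    if "body" not in document or "pages" not in document:
        raise ValueError("malformed document")
    queue.append(document)
    return len(queue)          # position of the job in the queue

# Integration test: do the two units work correctly *together*?
def test_format_then_enqueue():
    queue = []
    doc = format_document("  hello world  ")
    position = enqueue_print_job(queue, doc)
    assert position == 1
    assert queue[0]["body"] == "hello world"
```

The integration test would catch, for example, module A changing its output keys in a way module B no longer accepts, something neither module's own unit tests would notice.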
Regression Testing is a type of testing where we want to look back. We might have made new changes, but have those changes introduced bugs? Maybe they are new bugs, or maybe they are bugs triggered by older code; either way, we want to make sure that a new change has not introduced problems.
And then of course we have application Fuzzing. This is a type of testing where we flood an app, like a web application, with data that it simply isn't designed for or doesn't expect. We might even do some load and stress testing to see how much of this it can handle before the app no longer responds, or perhaps even crashes. So we need to observe the behavior of the app when we feed it this unanticipated data. Fuzzing can be done with a number of different tools, like OWASP ZAP, which is a web app vulnerability scanning tool. It's also a proxy tool that can modify transmissions between clients and web apps so you can observe the behavior, and both OWASP ZAP and tools like Burp Suite will allow you to perform fuzzing against an application and observe the results.
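The core idea of fuzzing, feeding a target unexpected input and watching for crashes, can be shown with a toy sketch in Python. The `percent_decode` function here is a deliberately fragile, hypothetical decoder (not from any real library), and the fuzzer simply records every input that makes it blow up:

```python
import random
import string

def percent_decode(s):
    """Hypothetical, deliberately fragile URL-style decoder: it assumes
    every '%' is followed by exactly two valid hex digits."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "%":
            out.append(chr(int(s[i + 1:i + 3], 16)))  # blows up on junk
            i += 3
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

def fuzz(target, iterations=1000, seed=1):
    """Throw random printable strings at the target and record every
    input that raises an exception, i.e., a potential crash."""
    rng = random.Random(seed)   # seeded so crashing inputs are reproducible
    crashes = []
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randrange(1, 40)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes
```

Running `fuzz(percent_decode)` quickly turns up inputs like a lone trailing `%` that crash the decoder, even though well-formed input such as `"a%20b"` decodes cleanly. Tools like OWASP ZAP apply the same idea, at much larger scale, against live web applications.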
Vulnerability scanning for web applications, and perhaps ultimately running pen tests, first begins with Reconnaissance. Reconnaissance is really about information gathering: what's on the network, and how is it configured? That means discovering active network hosts, perhaps even determining if there are malware infections on hosts, or looking at IP addresses that are in use, or DNS records that might imply, by name, the purpose of a server. For example, a server called accounting.local might be an accounting server. It also means scanning ports to see which services are in a listening state on machines; for example, how many servers on the local subnet have a listening HTTP web server stack?
Now, there are a lot of tools out there that will allow us to do this kind of scanning, like Nmap. Nmap stands for Network Mapper, and it allows us to map out which hosts on a network were active and responding at the point in time the scan was run. Nmap also allows us to identify which services are running on those hosts, such as SMTP, FTP, or HTTP servers. Nmap even has an OS fingerprinting mode where it can try to determine what kind of operating system is running on a discovered host, whether it's, for instance, Linux based or Windows based.
Nessus is another common scanning tool, but this one is more than just network mapping like Nmap. It can do what Nmap does, but more, because it's actually a vulnerability scanning tool. Vulnerability scanning tools discover not only active hosts and listening services, but can also drill down a little to see if there are any known vulnerabilities in those running services. Other vulnerability scanning tools include OpenVAS and the commercial LanGuard vulnerability scanner.
You might even write your own custom Bash scripts in Linux, or PowerShell scripts in Windows, with loops designed to scan through active hosts on a network, maybe to identify running services and maybe even check for vulnerabilities. But the thing about all of these is that they're very loud. In other words, these scanning tools can be detected on a network by intrusion detection sensors, and that's good to know: we potentially have a level of protection, on the network and on individual hosts, that can detect when these scanning tools are being used.
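The kind of loop just described can be sketched in a few lines of Python (shown here instead of Bash for brevity); this is a minimal TCP connect scan, to be used only against hosts you own or are authorized to test:

```python
import socket

def scan_host(host, ports, timeout=0.5):
    """TCP connect scan: return the subset of ports that accept a
    connection on the given host. Authorized use only."""
    open_ports = []
    for port in ports:
        try:
            # A completed TCP handshake means something is listening.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass   # closed, filtered, or unreachable -- skip it
    return open_ports
```

Wrapping `scan_host` in an outer loop over a list of host addresses gives you a basic subnet sweep. Note that connect scans like this are exactly the "loud" traffic that intrusion detection sensors are built to flag.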
Vulnerability Scanning requires an up-to-date asset inventory. So you have to know what
you have before you can scan it, and then some kind of a baseline comparison of what's normal,
such as devices we expect to appear on a network from our first vulnerability scan and how
they're configured, and even details about specific web applications and any vulnerabilities that
might have been there previously that we have hardened and no longer show up when we
conduct a new vulnerability scan. Vulnerability scanning might allow us to determine weaknesses in business processes; in network environments, whether wired or wireless; in hosts and devices; in physical security, such as unlocked doors to server rooms; and, of course, in web applications.
Now, in this screenshot we have a vulnerability scan result showing a couple of medium-severity vulnerabilities, according to the legend, on a couple of the hosts listed by IP address on the left of the screenshot. So vulnerability scanning can be done for a network device or a host. It can identify running services and, of course, vulnerabilities related to those services. So we need to make sure our vulnerability database is always kept up to date.
Vulnerability scanning tools will often have a function built in where you can update the
vulnerability scanning database so that when you conduct a scan you're looking for the latest
known vulnerabilities. You can configure a credentialed scan, which means you're providing credentials, maybe to log in to remote web servers, so you can fully examine the configuration on that web server host and, of course, the apps that it's running. Or you can configure non-credentialed scans, which give you more of a sense of what attackers who might not know anything about your environment, the web server, or the web app would be able to determine when running a vulnerability scan. These scans should be run periodically, so we can schedule them, because we know that threats change over time.
Penetration testing can come in many different forms. It might be carried out through social engineering: sending email messages to employees to try to trick them into clicking links or opening files as part of a pen test, or making phone calls and trying to trick people into divulging sensitive details such as web application login credentials. It can even happen in person, with someone dressing the part, showing up in an office environment, and somehow tricking people into divulging sensitive information, or into allowing them access to a server room where we might have unencrypted data from a web app stored in a disk array. So periodic vulnerability scanning and pen testing are absolutely crucial for the utmost in web application security.