Hakin9 04 2010 EN
team
Editor in Chief: Karolina Lesińska [email protected] Advisory Editor: Ewa Dudzic [email protected] Editorial Advisory Board: Matt Jonkman, Rebecca Wynn, Rishi Narang, Shyaam Sundhar, Terron Williams, Steve Lape, Aditya K Sood, Donald Iverson, Flemming Laugaard, Nick Baronian, Michael Munt DTP: Ireneusz Pogroszewski Art Director: Agnieszka Marchocka [email protected] Covers graphic: Łukasz Pabian Proofreaders: James Broad, Ed Werzyn, Neil Smith, Steve Lape, Michael Munt, Monroe Dowling, Kevin Mcdonald Contributing editor: James Broad Top Betatesters: Joshua Morin, Michele Orru, Shon Robinson, Brandon Dixon, Stephen Argent, Jason Carpenter, Rishi Narang, Graham Hili, Daniel Bright, Francisco Jesús Gómez Rodríguez, Julián Estévez, Michael Sconzo, Laszlo Acs, Bob Folden, Cloud Strife, Marc-Andre Meloche, Robert White, Bob Monroe. Special thanks to the beta testers and proofreaders who helped us with this issue. Without their assistance there would not be a Hakin9 magazine. Senior Consultant/Publisher: Paweł Marciniak CEO: Ewa Łozowicka [email protected] Production Director: Andrzej Kuca [email protected] Marketing Director: Karolina Lesińska [email protected] Subscription: Iwona Brzezik Email: [email protected] Publisher: Software Press Sp. z o.o. SK 02-682 Warszawa, ul. Bokserska 1 Phone: 1 917 338 3631 www.hakin9.org/en Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes. All rights to trademarks presented in the magazine are reserved by the companies which own them. To create graphs and diagrams we used program by. The editors use an automatic DTP system. Mathematical formulas created by Design Science MathType.
DISCLAIMER! The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
4 HAKIN9 4/2010
CONTENTS
REGULARS
BASICS
10 Firewalls for Beginners
ANTONIO FANELLI Firewalls are often overlooked, but are actually one of the best deterrents against unauthorized accesses. Learn how to build a low-cost firewall with iptables. Whenever people ask me how they can be sure no one can have unauthorized remote access to their PC, my first answer is: disconnect your PC!
06 In Brief
Section of short articles from the IT security world by Armando Romeo & ID Theft Protect
08 Tools
ATTACK
20 Pwning Embedded ADSL Routers
ADITYA K SOOD This paper sheds light on a hierarchical approach to pen testing and finding security issues in the small embedded devices used in local area networks. The paper is not restricted to testing; it also discusses the kinds of software and firmware used, and the recurring vulnerabilities that should be scrutinized when setting up a local network.
54 Interview
Interviews with: Victor Julien, lead coder for the Open Information Security Foundation (by the Hakin9 team), and Ferruh Mavituna, web application penetration tester and security tool developer; creator of the web application scanner Netsparker.
by Jason Haddix
28
DIDIER STEVENS Shellcode is hard to write. That is why I worked out the method presented here to generate WIN32 shellcode with a C-compiler. To fully benefit from the content of this article, you should have some experience writing WIN32 programs in C/C++ and WIN32 shellcode, and understand the differences between both approaches.
36
SALVATORE FIORILLO This paper is an introduction to flash memory forensics, with a special focus on the completeness of evidence acquired from mobile phones. Moving through academic papers and industrial documents, it introduces the particular nature of the non-volatile memories present in today's mobile phones: how they really work and what challenges they pose to forensic investigators.
DEFENSE
44 Threat Modeling Basics
TIMOTHY KULP In the world of software, security is thrown into a system somewhere at the end of the project. For many developers, adding security to a system means using a login with SSL/TLS; but sadly, these two are not the security silver bullet developers are led to believe.
IN BRIEF
OWASP TOP 10 2010 RELEASED
Top lists are a great way to justify an expense from the CISO's point of view. They also seem to be a good way to focus on the most important threats. Besides the ongoing dispute in the field about the usefulness of top lists, OWASP has changed the way it handles its most important chart: the OWASP Top 10. The newly released Top 10 takes into account the ten vulnerabilities bringing the highest risk. This is the main difference from earlier versions. This time a small course on dealing with the risks posed by each vulnerability accompanies the Top 10 PDF. The change seems to satisfy most of the skeptics, who now see a more realistic way of picturing the scene. If you are curious as to which vulnerability has won the league, you will be surprised: with the new risk-based approach, Injection has now surpassed XSS. If you're new to the OWASP Top 10, XSS had been the king of application vulnerabilities for the last decade.
bypass DEP protection and steal the iPhone's SMSes in 20 seconds, just by having the device visit a simple web page. Although no details are available regarding the exploitation, the method is believed to work on the iPad as well. Nowadays exploitation is more and more about supply and demand. The demand is rising, and the supply will come soon.
IPAD EXPLOITS ARE ABOUT TO COME
THE STORY OF THE ORPHAN ROOT CA IN MOZILLA
The mystery of the unknown root CA: that is, a root CA nobody could claim. It seems like a joke, but this is what was going on until a few days ago. No one knew who this certificate belonged to, not even Mozilla. This incredible story could have seriously put the entire trust chain at risk, until the issue was clarified by Mozilla and RSA Security in a blog post: the unknown root CA's owner was RSA Security, which had classified it as no longer needed. Mozilla has confirmed that this was just a miscommunication problem between the parties. Bloggers had already given good coverage of the events, with no little criticism about the way Mozilla was handling the problem. Now that everything is clarified, Mozilla has decided to remove the root CA from its NSS security library.
If someone is wondering why there is no publicly available remote ownage for the much-hyped iPad, the answer is easy: it is just another iPhone. This does not mean it is secure. It means that there has been not much interest in doing it so far. However, the great sales numbers Apple is advertising these days may change someone's mind about it. Charlie Miller has been one of the first and most famous Apple hackers. He admits he didn't even bother trying to break into the iPad; after all, the tablet uses the same OS as the iPhone. The same researcher, however, states that most of the features available in MacBooks and the iPod touch are not available in the iPad. These features had made the Apple-ware much more vulnerable. On the other hand, the lack of ASLR in iPhone OS makes exploitation of the iPad relatively easier. At the last Pwn2Own competition, Iozzo and Weinmann managed to
part of the message dialog appearing to the end user can be controlled. Foxit Reader does not even display any message. Stevens has managed to find an alternative way, since embedded executables have been disallowed, to extract and then execute code. The PoC has been given to Adobe and kept confidential. According to Stevens, disabling JavaScript is not a countermeasure, and a patch is expected from Adobe. Adobe is one of the most widely targeted software vendors, due in part to the high number of people who use its products. For further information on Adobe security bulletins: http://www.adobe.com/support/security/. Source: Armando Romeo

Windows Mobile users who download a cracked game to their device may find that it has a malicious Trojan program hidden inside. The Trojan horse file uses Windows Mobile download sites on the web to install its malicious payload, and the expensive calls it makes are placed to premium-rate phone numbers. The Trojan is called Troj/Terdial-A.
CHINA REPORTS MOBILE MALWARE THREAT
MOZILLA PATCHES FIREFOX WITH 3.6.2
Mozilla took unusual steps last month to release Firefox 3.6.2 a week early after security issues were found in earlier versions. Mozilla had planned to launch the 3.6.2 update at the end of last month (March). Several governments, including France and Germany, have issued warnings about security flaws in Firefox 3.6; a similar report from these governments also noted security flaws in Internet Explorer 8.0 back in January. The Firefox vulnerability (which has also been confirmed by Mozilla) could allow a hacker to run malicious programs on a user's computer. The reputable gHacks blog notes (http://www.ghacks.net/2010/03/23/firefox-3-6-2-download-available/) that more than 100 other bugs had been fixed in this point release, including 21 marked as critical.

China reported in April 2010 that a new mobile virus called MMS Bomber has appeared on millions of Chinese mobile phones. Considering how secretive the Chinese government is, this is an interesting revelation, and one we can now confirm is real. MMS Bomber is a variant of the Worm.SymbOS.Yxe family of mobile worms, which ONLY run on Symbian S60 3rd Edition and with a valid digital signature (see Symbian Signed for further information). MMS Bomber appears to spread through SMS messages that contain a link to the worm. It harvests data from the mobile phone and sends it back to a server. MMS Bomber also appears to be smart: it has defensive mechanisms that stop the user from removing the malware from their phone. The malware will disable the system management program on the mobile phone, so the user will be unable to remove the malware. Source: ID Theft Protect Ltd UK http://id-theftprotect.com
TOOLS
NTFS Mechanic: Disk & Data Recovery for NTFS Drives
A 40GB external USB HDD that has had an extensive amount of files written to it and then randomly deleted, approximately 16GB in total, and that has intermittent connection issues to the point that the local machine doesn't actually register the drive is there. Once I had the software installed, it was time to see how it performs. I plugged the external drive in and then powered up the software. It saw my drive straight away, but it didn't actually state what disk format the drive was. This might be due to the fact that the operating system didn't find the drive itself, so it was a pleasant surprise that this program did indeed find it. You are able to configure what types of files you want the program to search for during the recovery process; for this test I just left everything as default, which means everything was selected. I selected my external USB drive and it scanned the partitions first to ensure that it could actually see the drive correctly. Once this part of the process has been completed, it then requests that you allow it to scan the whole partition that you have selected; this
Items Tested:
Pricing Standard $99.95 Business $199.95 Professional $299.95 Prices are in US Dollars
appears to be a very CPU-intensive program, so I would suggest just leaving it running on its own if possible. It took just over an hour to scan through a 40GB hard drive. Once it was finished, NTFS Mechanic presented all the data that's on the drive, both deleted and non-deleted files. You can select in the right-hand menu to only see the recovered files, which makes it a lot easier to see what the program has actually found. If you look at the properties of the files and folders listed as recovered, you can actually see the prognosis of each file, should you decide to proceed and recover the file completely. The process for recovery couldn't be much easier; it's simply a case of going through the folder list, selecting the files you want to recover, and then saying where you want them to be stored. The program performs really well and managed to recover data from a disk that hasn't been seen by any of my machines for a little while now, which quite impressed me. I noticed that there were a few areas within the program that could do with some QA work, as there were non-English characters in use and some screens weren't actually needed in my opinion, but they aren't detrimental to the product. I would gladly have this tool in my toolbox. http://recoverymechanic.com/ntfs_recovery/ntfs_mechanic.php
Partition Recovery Hard Drive Recovery Recover deleted files
http://www.activeundelete.com/undelete.htm Standard $39.99 Professional $54.95 Enterprise $79.95 Comparison: http://www.activeundelete.com/features.htm#compare
BASICS
ANTONIO FANELLI
Firewalls are often overlooked, but are actually one of the best deterrents against unauthorized accesses. Learn how to build a low-cost firewall with iptables.
Whenever people ask me how they can be sure no one can have unauthorized remote access to their PC, my first answer is: disconnect your PC! In fact, any connected PC will have lots of packets passing through it, both authorized and not. Most often they pass in a transparent manner, so users may not even know they are there. Then people ask me if there are any tools which automatically prevent unauthorized accesses. Again the answer is: NO! We can't know whether packets are good or bad before they enter our PC. But there are tools we can use to monitor incoming and outgoing packets, and then decide whether to delete them before they reach their final destination. Here is where the firewall comes in, the very first anti-intrusion system for our networks and PCs. The good news is that you can run a low-cost firewall; the bad news is that there are no plug-and-play firewalls out there that really protect your PC unless you get your hands dirty with them. This means you need to understand the basics of network packet handling before doing anything with your firewall. In this article you will first learn the basics of Ethernet networking, which is the most widespread transmission technology today, then build your own personal firewall with iptables, and finally test it with Nmap.
TCP/IP stack
The logical mechanism which underlies communication among PCs connected through LANs and the Internet is called the Transmission Control Protocol / Internet Protocol (TCP/IP) stack. We can summarize it as five logical layers, in which outgoing traffic goes from the top to the bottom and incoming traffic goes from the bottom to the top. Each layer is a set of tools (hardware or software) specialized in performing a specific task on the packets in transit. In particular: level 1 is the physical layer and transmits the single bits through the physical lines (cables and network interfaces); level 2 is called the data link layer and is specialized in data packet transmission through multi-hop networks (routers); level 3 is the network layer and is based on the Internet Protocol (IP), responsible for delivering data packets from a source computer to a destination system over a network; level 4 is the transport layer, responsible for maintaining a continuous data flow between two systems, possibly including mechanisms for the retransmission of lost packets and for error correction, and based on two protocols: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP); level 5 is called the session layer and consists of programs that communicate with each other.
A TCP communication session can be established with a three-way handshake mechanism, as summarized in Figure 2: Alice sends a synchronization request to Bob; Bob replies with an acknowledgment of Alice's request and also sends his own synchronization request; Alice replies with an acknowledgment of Bob's request, and the connection is established.
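The handshake itself is performed by the operating system's TCP/IP stack. As a minimal sketch (the message text and use of localhost are invented for the example), a single connect() call is all it takes to trigger the SYN, SYN/ACK, ACK exchange against a local listener:

```python
# Sketch: the OS performs the SYN / SYN-ACK / ACK exchange behind the
# scenes when connect() is called; we never build those packets by hand.
import socket
import threading

def run_server(server: socket.socket) -> None:
    conn, _addr = server.accept()      # handshake completes server-side here
    conn.sendall(b"hello from Bob")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # SYN -> SYN/ACK -> ACK happens here
data = client.recv(1024)
client.close()
t.join()
server.close()
print(data.decode())                   # -> hello from Bob
```

Sniffing the loopback interface with TCPdump while this runs would show the three handshake packets described above.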
Because a session is established, the TCP protocol can retransmit lost packets, thus ensuring reliable communication. TCP information is applied to the front of Alice's and Bob's data packets, in the so-called TCP headers.
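To see what a TCP header actually contains, here is a small hypothetical Python helper that unpacks the fixed 20-byte part of a header and decodes its control bits. The field layout follows RFC 793; the sample port and sequence values are invented:

```python
import struct

# Bit positions of the control flags in byte 13 of the TCP header (RFC 793,
# plus the CWR/ECE congestion bits). Sample values below are invented.
FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR"]

def parse_tcp_header(raw: bytes) -> dict:
    (sport, dport, seq, ack, offset_byte, flags_byte,
     window, checksum, urgent) = struct.unpack("!HHIIBBHHH", raw[:20])
    flags = [name for bit, name in enumerate(FLAG_NAMES)
             if flags_byte & (1 << bit)]
    return {"sport": sport, "dport": dport, "seq": seq, "ack": ack,
            "offset": offset_byte >> 4, "flags": flags, "window": window}

# Build a sample SYN header: client port 54321 -> server port 80,
# data offset 5 (20 bytes, no options), SYN flag (0x02) set.
syn = struct.pack("!HHIIBBHHH", 54321, 80, 1000, 0, 5 << 4, 0x02,
                  64240, 0, 0)
print(parse_tcp_header(syn)["flags"])  # -> ['SYN']
```

The same decoding applies to any captured segment: a SYN/ACK from the handshake above would show both the SYN and ACK bits set.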
Each port is a logical communication channel between two systems so that you can establish multiple parallel TCP sessions among different application
As seen, the TCP/IP stack is a set of protocols responsible for transmitting data through the transport and network layers. In this article we will cover the four most important protocols in the TCP/IP stack: TCP and UDP for the transport layer, and IP and the Internet Control Message Protocol (ICMP) for the network layer. An application program that needs to send data to another machine's application program generates a stream of data packets and passes them to the TCP/IP stack, where the required protocols are applied to the packets before forwarding them to the lower layers. The target machine, as soon as data passes through the TCP/IP stack, detects the protocol in use before delivering the data to the final application. A protocol is simply a set of rules that both systems must apply to communicate over a network. TCP is the most used protocol today, despite also being one of the most insecure. In fact it lacks: confidentiality: a data packet can be seen by others; integrity: a data packet can be manipulated by others,
programs on a single machine. For example, a server machine can simultaneously run a Web server on standard port 80 and an FTP server on standard port 21, while a client machine can simultaneously use clients for Web and FTP connections, which use dynamically assigned ports (generally greater than 1,023). The control bits are binary flags which can be 1 or 0 depending on whether their status is active or not. They are: URG: indicates that the packet must be delivered quickly; ACK: indicates the acknowledgment of a previous packet received during a transmission; PSH: tells the stack to transfer data immediately instead of waiting for additional packets; RST: indicates that the connection must be reset because of an error or interruption; SYN: indicates a synchronization request to initiate a new TCP session; FIN: indicates that there are no more packets to be transmitted, so that the connection can be closed; CWR: indicates that, due to traffic congestion, the queue of outgoing packets has been slowed; ECE: indicates the connection is having problems due to traffic congestion. is much faster compared to TCP, so when performance is more important than reliability the UDP protocol is widely used. An example is DNS servers, which normally receive a UDP packet on port 53 as a request for a domain lookup and reply with a UDP packet containing the corresponding IP address. The number of fields in a UDP header is smaller than in the TCP header, and for our purposes we can consider only the source and destination port fields, two 16-bit fields which can address up to a maximum of 65,536 logical ports. The TCP or UDP packet (information plus data) is then sent to the network layer for addressing. The most widely used network protocol today is the IPv4 protocol, based on 32-bit addressing that allows the coexistence on the same network of more than 4 billion addresses.
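As a sketch of the DNS lookup just described, the following Python snippet builds the UDP payload of a standard DNS query (RFC 1035 format). It only constructs and inspects the packet; actually sending it to a resolver on UDP port 53 would need network access. The domain and query ID are arbitrary example values:

```python
import struct

# Sketch of the UDP payload a resolver sends for "What IP is this domain?"
# (a standard RFC 1035 DNS query). Query ID 0x1234 is arbitrary.
def build_dns_query(domain: str, query_id: int = 0x1234) -> bytes:
    # Header: id, flags (recursion desired), 1 question, 0 answer/auth/extra.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(part)]) + part.encode()
                     for part in domain.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("www.google.com")
print(len(query))   # 12-byte header + 16-byte name + 4-byte type/class = 32
```

A real resolver would answer with another UDP packet carrying the corresponding IP address, exactly as described above.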
In the near future IPv4 will be replaced by IPv6, which has a 128-bit address field that will handle a vastly larger number of addresses, among other new features. Receiving the packet from the transport layer, the IP layer in turn generates a new header which is added in front of the TCP/UDP packet. So, the final IP packet consists of: a sequence of bits for the IP header, plus a sequence of bits for the TCP/UDP header, plus a sequence of bits that contains the data. Among the many fields in the IP header, of particular importance are: source IP address: the IP address of the source machine; destination IP address: the IP address of the target machine; protocol: specifies the protocol of the transport layer (TCP or UDP). An IP address is represented by four octets of bits separated by dots, each of which is written in decimal form and can take values from 0 (all eight bits equal to 0) to 255 (all eight bits equal to 1). Theoretically, therefore, all IP addresses between 0.0.0.0 and 255.255.255.255 are valid, but some classes (10.x.y.z, 172.16.y.z, and 192.168.y.z) are intended for private networks and cannot be assigned as public addresses. Each IP address consists of a part representing the network address and a part representing the host address. A subnet mask is a binary number (also 32-bit) that lets us know which part of an IP address refers to the network address (all bits set to 1) and which part to the host address (all bits set to 0). So, for example, the IP address 192.168.0.105 with subnet mask 255.255.255.0 refers to the network address 192.168.0 and to the host address (a single PC) 105. Often the Classless Inter-Domain Routing (CIDR) notation is used to target an entire network, in the form <IP>/<number of bits equal to 1 in the subnet mask> (for example, 192.168.0.0/24). The other important IP protocol is ICMP, which is used to transmit control information over a network (for example, the Ping utility).
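The subnet mask arithmetic above can be checked with Python's standard ipaddress module, using the same example addresses as in the text:

```python
import ipaddress

# The example network and host from the text: 192.168.0.0/24 contains the
# host 192.168.0.105, and 192.168.y.z is one of the private ranges.
net = ipaddress.ip_network("192.168.0.0/24")
host = ipaddress.ip_address("192.168.0.105")

print(net.netmask)        # -> 255.255.255.0
print(host in net)        # -> True
print(host.is_private)    # -> True
print(net.num_addresses)  # -> 256
```

The /24 in CIDR notation is exactly the count of 1 bits in 255.255.255.0.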
It has the same header as an IP packet, with the protocol flag set to 1 and the ICMP type field set to a value according to the kind of message. Common values for the ICMP type are: 0 = Echo Reply, used to reply to a Ping request; 3 = Destination Unreachable, when the IP packet cannot be delivered to the destination (for example, the router does not find a route to direct the packet); 8 = Echo, used to send a Ping request to determine if a system is up.
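As an illustration, this hypothetical Python sketch builds the 8-byte ICMP echo-request header that a ping sends (type 8, code 0), using the standard ones' complement Internet checksum. Actually transmitting it would require a raw socket and root privileges, so here we only build and verify the packet:

```python
import struct

# Standard Internet checksum: ones' complement of the ones' complement sum
# of the data taken as 16-bit words.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# ICMP echo request (type 8, code 0): checksum is computed with the
# checksum field zeroed, then filled in. Payload is an arbitrary example.
def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
print(pkt[0])              # -> 8 (ICMP type: Echo)
print(inet_checksum(pkt))  # -> 0 (a valid packet checksums to zero)
```

A reply to this request would come back with the type field set to 0 (Echo Reply), as listed above.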
Figure 3 illustrates an example of the data packets exchanged during a three-way handshake session. Another important transport layer protocol is UDP which, unlike TCP, is connectionless, so it cannot save a connection state and consequently cannot guarantee delivery. However, it
Now you have some basic knowledge to see in practice what happens at the level of TCP/IP stack when you connect your PC to a network. Just as an example you will see a couple of utilities.
and you will have more information, including the program names that use certain connections. All of this information is useful for a correct tuning of your firewall rules. In Linux you can get the same information with the command:
$ netstat -nap
then you will be presented with a list of all current connections, even if apparently you don't have applications running on the PC, as shown in Figure 4. The -n option displays addresses and port numbers in numerical form. Essentially, the information it shows is: protocol: tells you whether the connection uses TCP or UDP; local address: your PC's network interfaces, including the loopback, and the logical ports at which the connection is established (note that many services need connections to the loopback interface to work properly); external address: the external address to which your network interfaces are connected, and the logical external ports (also in this case the loopback interface can be used); status: the connection status at the moment netstat is run. So you can have established connections (ESTABLISHED), TCP synchronizations (SYN_SENT), the end of a TCP session (FIN_SENT), closed connections (CLOSE_WAIT), servers listening (LISTENING), and so on.
Or use the lsof utility which allows you to have the list of active ports used by processes and other useful information. For example the command:
$ lsof -i
displays all the ports used by active processes. If you want to know more about the active connections on your PC, for example in order to monitor the data flow passing through it, you can use a really useful tool embedded in Linux operating systems called TCPdump. The equivalent for Windows is WinDump, which you can download from: http://www.winpcap.org/. TCPdump allows you to analyze the entire flow of data packets in transit to and from your PC, with a high level of detail (headers and plaintext data). It is a great tool to fine-tune the firewall rules.
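Decoding captured headers is exactly what TCPdump does for you. As a rough illustration of that work, the Python sketch below parses a canned 20-byte IPv4 header rather than live traffic (capturing live packets requires root); the sample addresses are invented:

```python
import socket
import struct

# Unpack the fixed 20-byte IPv4 header: version/IHL, TOS, total length,
# identification, fragment info, TTL, protocol, checksum, source, destination.
def parse_ipv4_header(raw: bytes) -> dict:
    ver_ihl, tos, total_len, ident, frag, ttl, proto, csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {"version": ver_ihl >> 4, "ihl": ver_ihl & 0x0F,
            "ttl": ttl, "protocol": proto,
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst)}

# Example header: a TCP packet (protocol 6) from a LAN host to a web server.
sample = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.0.105"),
                     socket.inet_aton("74.125.43.147"))
hdr = parse_ipv4_header(sample)
print(hdr["src"], "->", hdr["dst"], "proto", hdr["protocol"])
```

The protocol field (6 for TCP, 17 for UDP, 1 for ICMP) is what tells the stack, and TCPdump, which transport header follows.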
For example, if you want to know all the services listening on some ports in your Windows machine, type the following command:
netstat -nao | find "LISTENING"
So for example, if you want to display all the TCP packets captured by your network interface eth0, simply type the following command in a Linux shell (as root):
# tcpdump -n -i eth0
a great open source firewall inside: iptables. It is not a plug-and-play firewall (like many ISP routers), and it gives you great control over all the traffic to and from your PC. The other machine will be used only to send packets to the firewall. Open a terminal window on the Linux machine and run the command:
# iptables --list
It means that your firewall accepts all packets by default. For now just remember what the three chains are:
INPUT: all incoming packets to your PC,
The -n option displays addresses and port numbers in numerical form, while the -i option allows you to specify the network interface to monitor. The result can be surprising. You will notice a more or less continuous stream of packets passing through your PC, even if only a few programs are running. In Figure 5 a sniffed three-way handshake connection between my local machine and the Google server is shown. Keep in mind that everything passing through your network interfaces can be sniffed. To demonstrate the absolute lack of confidentiality in a TCP packet, try to open an MSN session and sniff all of your packets with the command:
# tcpdump -Xx -s 500 -n -i eth0
FORWARD: all packets passing through your PC, but intended for another host on the network, OUTPUT: all outgoing packets from your PC.
to check the firewall default policies. You should get a response like this:
Chain INPUT (policy ACCEPT)
In this article we can't cover iptables in depth, but we will try to explain how to create ad hoc rules for our firewalling needs. See the On the 'Net and Bibliography sections for more references about this topic.
which lets you see the first 500 characters of a plaintext TCP packet. Anyone on your network could possibly read your confidential messages just by using their network interface in monitor mode.
Now that you know how to monitor data traffic through your network interfaces, it is time to have fun with a firewall configuration. A firewall is primarily used to block unwanted packets, for the following reasons: hiding a PC or a network from port scanning and network mapping; blocking unauthorized access attempts; blocking traffic anomalies which may cause instability.
For this test you need to set up a small laboratory with just two PCs connected to the same LAN. You can also use a virtual machine, if you prefer. One of the machines must be a Linux PC, which comes with
In fact, if you list the rules again in iptables, you will see that the default policy for the INPUT chain has changed from ACCEPT to DROP. So if you try to ping your Linux machine from the other PC, it seems the system is down. Great! But there is a problem. If you also try to ping from the Linux machine to the other PC, you still get the same result. It seems strange, because you haven't changed the default policy for the OUTPUT chain, which is still ACCEPT. So what's the matter? Well, when you send a ping you are actually sending an ICMP echo-request packet to the target machine, and you expect to receive an ICMP echo-reply from it. So if you close all the input ports you can't receive replies; that's why it seems your PC is down. As a consequence, if you want to block incoming pings but not outgoing ones, you must write a firewall rule that blocks all incoming ICMP packets of type echo-request and accepts all incoming ICMP packets of type echo-reply. Just do it with this command:
# iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
Google's domain into an IP address. In other words, your PC sends a UDP packet to the DNS server asking: What IP address is www.google.com? and the DNS server replies with another UDP packet saying: www.google.com has the IP address 74.125.43.147. So, if you do not agree to receive incoming UDP packets from a DNS server, you will never be able to ping Google. Then add the following new rule:
# iptables -A INPUT -p udp --sport
assuming that 194.20.8.1 is the DNS server IP address. So, delete the first rule and change it with this new one. You can delete rules by their number. To list rules numbers use the command:
# iptables --list --line-numbers
and then delete the rule you want to change, for example:
# iptables -D INPUT 2
domain -j ACCEPT
Basically you are telling the firewall that all UDP packets coming from the source port domain (i.e., 53) must be accepted. That's because DNS servers usually accept requests and send replies on standard UDP port 53. And here you are exposing your PC to the first security flaw, because not all UDP packets from port 53 come from a DNS server. Someone can use port 53 to send you bad packets, so if you want to write a more secure rule, you could accept only packets coming from your DNS server's IP address, such as:
# iptables -A INPUT -p udp -s 194.20.8.1 --sport domain -j ACCEPT
OK, now you have solved the ping problem, but if you try to open a Web browser and go to www.google.com, you will see that the domain resolution works but the page is not displayed. What's wrong now? This is because in order to display a web page you need to establish a TCP connection with a web server port (usually port 80), and that is not possible if you do not enable incoming TCP packets. In fact, if you sniff your Web connection with TCPdump, you will see that your machine first asks the DNS server to resolve www.google.com on port 53, which works thanks to the firewall rule we have just written, but then your machine tries to start a three-way handshake with the Google servers on port 80, and this can't work because the Google ACK packet is
The -A INPUT option indicates an additional new rule for the INPUT chain. The -p icmp option indicates that the rule refers to ICMP packets. The --icmp-type option indicates that it only applies to packets with ICMP_TYPE set to echo-reply (or 0). Finally, the -j ACCEPT option says that the target of the rule is to accept this type of packet. OK, your Linux machine now pings the other PC in your LAN fine, but if you try to ping Google it doesn't work... why? Having blocked all incoming packets except the echo-reply, you will be unable to receive the response from the DNS server. When you ping Google, your machine first queries the DNS server, asking it to resolve
blocked by iptables. So your machine tries to send requests continuously until it gets a response, which won't come, and the connection times out. Therefore, your firewall needs a new rule to let you surf the web. Assuming you need to connect only to servers on port 80 (in a real scenario you should also open other ports), a rule could be the following:
# iptables -A INPUT -p tcp --sport 80 -j ACCEPT
And yes, it works. In fact, you can connect to Google now. But, as you can imagine, you are exposing your PC to a new security flaw, because the firewall accepts every packet coming from port 80, and you cannot restrict the rule to trusted IP addresses only, as you did before for the DNS server. So what? Well, as you know, a TCP connection always starts with a SYN request from your machine, so you can instruct your firewall to only accept
packets related to an already established TCP connection. This is possible with iptables because it is a stateful firewall which can remember a TCP state connection, unlike many stateless routers which can not. So change the above rule with the following:
# iptables -A INPUT -p tcp --sport 80 -m state --state RELATED,ESTABLISHED -j ACCEPT
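For reference, the DNS rule the text refers to (accepting UDP replies from the trusted DNS server only) would look something like the sketch below; the address 192.168.1.1 stands in for your DNS server and is an assumption, not a value from the article:

```shell
# Accept DNS replies (UDP source port 53) only from the trusted DNS
# server -- 192.168.1.1 is an assumed address, substitute your own.
iptables -A INPUT -p udp -s 192.168.1.1 --sport 53 -j ACCEPT
```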
16 HAKIN9 4/2010
Or maybe you need to allow traffic from one local network (eth0) to another (eth1); in that case you can type the following rule:

# iptables -A FORWARD -s 192.168.0.0/24 -d 192.168.1.0/24 -i eth0 -o eth1 -j ACCEPT

Or you can add more ACCEPT rules for incoming packets related to services you plan to use, such as FTP and e-mail, and so on. To be truly effective, a firewall must always be tuned to your needs. Iptables can also log all blocked packets, letting you keep track of unauthorized access attempts and fine-tune your rules. To add a logging rule to the INPUT chain, type the following command:

# iptables -A INPUT -j LOG --log-prefix "myLogInput:"

In this way you are telling iptables that any packet which does not match any of the preceding rules must be logged in the syslog (/var/log/syslog) before being dropped. This works because iptables rules are applied from the top (#1) to the bottom: when a rule matches, execution stops, except for a LOG rule. So you should always insert a log rule as the last one, just before dropping all packets as the default action for each chain. When logging, also add a prefix so you can easily filter the log messages in the syslog. For example, to display all the blocked packets from the INPUT chain, type the following command:

# cat /var/log/syslog | grep "myLogInput:"

Finally, you can save the firewall configuration with the following command:

# iptables-save > /etc/sysconfig/iptables

And then you can make your firewall configuration load at boot with the following command:

# chkconfig iptables on

As an exercise, you could try to design a Linux firewall for a small office whose infrastructure is shown in Figure 6. As you can see, there are two PCs in a LAN connected to the Internet through a router-switch provided by an ISP. PC1 runs an SSH server and a VNC server to allow a user to access it remotely. The router features a built-in stateless firewall, so the internal network is overly exposed to unauthorized external access attempts. Your task is to configure a Linux firewall with iptables to protect the internal LAN from external access, allowing only the authorized remote PC, as shown in Figure 7. The firewall should be connected between the router and a switch in the LAN. It has two network interfaces: eth0 is connected to the router and eth1 to the switch. All VNC traffic from the remote PC must be forwarded to PC1, and it must be tunneled through an SSH server installed on the firewall. For simplicity, assume that all outbound traffic is allowed. The IP addresses and ports to use are: router IP address: 192.168.1.1; eth0 IP address: 192.168.1.2; eth1 IP address: 192.168.0.100; PC1 IP address: 192.168.0.1; PC2 IP address: 192.168.0.2; firewall's SSH server port: 222; PC1's VNC server port: 5901. Try to figure out the configuration by yourself, but if you need help, there are some hints for your configuration script in Listing 1.

If you don't want to mess with command line scripting, you can set up an IPCop firewall in a few minutes. It is a Linux distribution specialized in firewalling (iptables-based) which comes with a really user-friendly web interface. It also includes an SSH server, so you can remotely control it in a secure way, or use it for SSH tunneling to other PCs on the LAN it protects. Other useful services you can enable after installation are: DHCP client/server, dynamic DNS, HTTP/FTP proxy (Squid), IDS (Snort), local or remote logging, NTP client/server, and IPsec VPN. You can enable up to four interfaces: inside network, outside network, DMZ, and an inside network for WiFi. It works well even on outdated hardware, although you cannot run all the services on low-RAM machines. I installed it on an old PC with 64 MB of RAM, and it worked well with only the firewall and SSH enabled. You can download the distribution package and manuals from http://www.ipcop.org.

Final testing

Now it's time to conduct some penetration tests to verify your firewall rules. Assuming that your internal network is secure enough, try to bypass the firewall from the outside: from a remote PC, make some attempts with a port scanner to map the network. One of the best tools for network mapping is Nmap; you can download it from http://www.nmap.org. Bypassing the firewall doesn't necessarily mean getting into the network: just stealing system information is enough to make the test positive, meaning that your firewall could easily be bypassed. If the port scan recognizes that your system is up and there are one or more open ports, you need to refine some firewall rules to better hide this information. To discover whether a machine is up, you can use one of these three techniques:

1. pinging it (but a firewall may filter the ICMP packets, so the ping will not be answered),
2. sending a TCP packet to a port that is supposed to be open: if it returns a SYN-ACK, the machine is turned on,
3. sending a UDP packet to a port that is supposed to be closed: if it returns an ICMP Port Unreachable, the machine is turned on.

Let's see how Nmap can help us with the above techniques. To send a ping we can use the following command:

$ nmap -sP <target>
If the firewall correctly blocks the pings, Nmap will tell you that the host seems down. So you can try to send a TCP SYN to ports that you know are open, for example:
$ nmap -PS222,5901 <target>
In this case, if the firewall blocks outgoing ICMP destination unreachable packets, Nmap should not be able to tell whether the machine is turned on. You can then force Nmap to consider the host up (using the -PN option) and test any sort of port scan against well-known open ports, playing with the TCP flags. For example, to try to establish a full three-way handshake, use a TCP connect scan:
$ nmap -sT -PN -p 222,5901 <target>
If the scan receives a SYN-ACK, Nmap reports the host as up. Since you restricted incoming packets to a specific range of IP addresses, you should instead receive a host seems down message. Also try using an ACK packet, and consider port 80, which is usually assumed to be open:
$ nmap -PA80 <target>
If it doesn't work (as desired), try sending irregular packets. For example, you can try a TCP SYN (half-open) scan, which sends a SYN packet, waits for a SYN+ACK reply, and then immediately sends a RESET to close the connection:
# nmap -sS -PN -p 222,5901 <target>
Or you can try a TCP ACK scan, so the firewall may think the packet is a reply to a previous SYN. If Nmap receives a RESET, the port is not filtered:
# nmap -sA -PN -p 222,5901 <target>
but if the firewall blocks all incoming TCP packets not related to an established connection, you should always receive a host seems down message. Now try a UDP port scan against a known closed port, like the following:
# nmap -sU -p12345 <target>
Or you can use a version scan to try to identify any open ports and the corresponding service banners, together with OS fingerprinting to try to identify the operating system as well:
# nmap -sV -O <target>
Bibliography
Edward Skoudis and Tom Liston, Counter Hack Reloaded: A Step-by-Step Guide to Computer Attacks and Effective Defenses (Paperback)
Elizabeth D. Zwicky, Simon Cooper, and D. Brent Chapman, Building Internet Firewalls (Paperback)
Bryan Burns, Jennifer Stisa Granick, Steve Manzuik, and Paul Guersch, Security Power Tools (Paperback)
If you did a good job with your firewall configuration, Nmap will tell you that all ports are filtered and that it cannot identify the operating system, because too many fingerprints match. Nmap has many other features that you can learn about from the online documentation. Have fun with it.
On the 'Net
http://www.iana.org/assignments/port-numbers - port numbers,
http://live.sysinternals.com/ - Windows utilities directory,
http://www.netfilter.org/documentation/HOWTO/packet-filtering-HOWTO.html - Linux iptables HOWTO,
http://ornellas.apanela.com/dokuwiki/pub:firewall_and_adv_routing#data_flow_diagram - Linux firewalls and routing.
Antonio Fanelli
An electronics engineer since 1998, he is extremely keen on information technology and security. He currently works as a project manager for an Internet software house in Bari, Italy. E-mail: [email protected].
ADITYA K SOOD
This paper is not restricted to testing only: it also discusses the kinds of software and firmware used, and the recurring vulnerabilities that should be scrutinized while setting up a local network. A detailed discussion covers the HTTP servers used for handling the authentication procedure and access to the firmware image, which provide the functionality to design and configure your own home local area network. So enjoy the hacks to strengthen your system and home hub security.
These embedded devices can be ADSL routers, switches, or hubs, depending on the deployment strategy. The overall procedure remains the same.
Functional Overview
Generally a home hub, modem, or even a switch in the form of an embedded device is used to provide broadband Internet functionality. Irrespective of the implemented design, an understanding of the configuration and the model is required.
Figure 1. ADSL router diagram (7300GX)
The layout presented above is very generic, and most local embedded devices provide this functionality. Again, it depends on the configuration which types of ports are allowed. The HTTP server provides a GUI interface to the firmware, but it requires an authentication mechanism to be completed prior to logging in to the running firmware, before further configuration changes can be made to the local area network. Let's have a look (see Figure 1). The presented screenshot explains the working of an ADSL router.
Figure 2. UTStarcom
Figure 3. TP-Link
Customized HTTP servers are used for handling web-based functions with an appropriate authentication mechanism. There is a specific set of facts about these web servers that needs to be understood. These facts have been accumulated after researching a number of HTTP servers used in different LAN embedded devices, and they serve as a different approach to understanding the inherited characteristics on which the HTTP server's behavior is based. We will be talking about three HTTP servers that are used extensively to provide a GUI interface for the firmware of different LAN embedded devices.
ATTACK
Table 1. Comparison of HTTP methods in different web servers

Method     ROMPAGER   NUCLEUS   VIRATA-EMWEB   Unknown/0.0   MICRO-HTTPD
GET        401        401       401            200           401
HEAD       401        401       401            200           501
POST       405        401       401            405           501
PUT        400        404       404            404           501
DELETE     405        404       404            404           501
PROPFIND   405        401       401            405           501
TRACE      405        200       200            200           501
TRACK      405        401       401            405           501
SEARCH     405        401       401            405           501
OPTIONS    405        401       401            200           501
Table 2. Response codes

HTTP Status Code   Error Message
401                Unauthorized
405                Method Not Allowed
200                OK
501                Method Not Implemented
The servers mentioned above are used extensively. These web servers are small in size, and every one of them allows access to the firmware in GUI mode when appropriate credentials are supplied by the user. There are a number of aspects which should be taken care of while configuring and testing; we will enumerate the issues that must be examined critically.
The authentication mechanism used is Basic. Because these web servers are small and customized, the disadvantages of the Basic authentication scheme apply. There is a pre-assumption between client and server that the channel used to transmit the credentials is secure, which in reality it is not: the credentials are passed in clear text, without any protection, and can be intercepted easily. Secondly, the return direction of the data transfer is not secured either, i.e. the data sent back by the server to the client. These are the generic security problems of the Basic authentication scheme. During our testing, it has been analyzed that the impact of Basic authentication is much wider than in normal scenarios. The point is that there is no time parameter set for the expiry of the credential cache: the browser retains those credentials and resends them automatically with every subsequent request until the browser is closed.
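To illustrate how weak this is, the sketch below (using admin:admin purely as an example pair) shows that the Authorization header of the Basic scheme is plain base64, trivially reversible by anyone who intercepts it:

```shell
# Basic authentication merely base64-encodes "user:password".
creds=$(printf '%s' 'admin:admin' | base64)
echo "Authorization: Basic $creds"   # -> Authorization: Basic YWRtaW46YWRtaW4=

# An eavesdropper reverses it with one command:
printf '%s' "$creds" | base64 -d; echo   # -> admin:admin
```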
It has also been observed that certain web servers that provide GUI control of the firmware do not have well-defined session expiration parameters. Most of the time no proper logout interface is provided, so the browser window is simply closed. For example, the TP-LINK ADSL2/2+ router and the UTStarcom ADSL router do not provide a Log Off feature.
It is very necessary to scrutinize the HTTP verbs implemented by the server. Our analysis is based on source code reviews and testing of live web servers. The following facts are worth mentioning:

1. Usually these web servers only understand the GET and HEAD requests (a HEAD request is an alias of GET, except that the body content is not returned). These small web servers are meant for requesting a single resource, i.e. the main directory, which is authenticated. If the request does not contain valid credentials, a 401 Unauthorized error is returned. Usually, the web server implements the HTTP/1.0 specification.
2. All the other standard requests are not allowed, and the web servers return 405 Method Not Allowed errors in the context of the running web server.
3. Our research shows that in certain specific web servers, the TRACE request is allowed. This has been noticed in the VIRATA-EMWEB server: if a client issues a TRACE request, a 200 OK message is returned without an authentication warning.
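One way to enumerate the verbs yourself is to hand-craft raw HTTP/1.0 requests and send them to the device with a tool such as nc; a minimal sketch (192.168.1.1 is an assumed router address, substitute your own):

```shell
# Build a minimal HTTP/1.0 request for each verb; the empty line
# after the request line terminates the headers.
ROUTER=192.168.1.1   # assumption -- substitute your device's address
for verb in GET HEAD POST PUT DELETE PROPFIND TRACE TRACK SEARCH OPTIONS; do
  printf '%s / HTTP/1.0\r\n\r\n' "$verb"
  # To actually probe the device, pipe each request to the server and
  # record the status line of the reply (401, 405, 200, 501 ...), e.g.:
  #   printf '%s / HTTP/1.0\r\n\r\n' "$verb" | nc "$ROUTER" 80 | head -n 1
done
```

Comparing the status line returned for each verb against Table 1 gives a quick fingerprint of which embedded server is running.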
Figure 8. SNMP
Listing 1. Admin credentials
<sysUserName value="admin"/>
<tr69c state="enable" upgradesManaged="0" upgradeAvailable="0" informEnbl="1" informTime="0" informInterval="129600" noneConnReqAuth="0" debugEnbl="0" acsURL="http://rms.airtelbroadband.in:8103/ACS-server/ACS" acsUser="airtelacs" acsPwd="airtelacs" parameterKey="12345" connReqURL="" connReqUser="admin" connReqPwd="admin" kickURL="" provisioningCode="12345"/>
<sysPassword value="cGFzc3dvcmQ="/>
<sptPassword value="c3VwcG9ydA=="/>
<usrPassword value="dXNlcg=="/>
</SystemInfo>
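The encoded password attributes can be pulled out of such a dump mechanically; a sketch (the here-document stands in for the dumpcfg output shown in Listing 1, and the sed pattern is an assumption about its exact shape):

```shell
# Extract the encoded *Password attributes from a config dump.
# The sample file reproduces the relevant lines of Listing 1.
cat > /tmp/config_dump.xml <<'EOF'
<sysPassword value="cGFzc3dvcmQ="/>
<sptPassword value="c3VwcG9ydA=="/>
<usrPassword value="dXNlcg=="/>
EOF
# Print only the quoted value of every *Password attribute.
sed -n 's/.*Password value="\([^"]*\)".*/\1/p' /tmp/config_dump.xml
```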
prior to sending any request to access a resource. For the TRACE request one finds a 200 OK message, while the others return 401 Unauthorized. The ROMPAGER web server understands GET and HEAD requests with authentication and does not allow TRACE requests directly. For specific details of the different verbs allowed and handled by the different web servers, see Table 1.

4. From the testing perspective, it is always advisable to enumerate the HTTP verbs the web server accepts in requests sent by clients.
From a testing perspective, there is no need to jump directly to brute-forcing the passwords of the ADSL router deployed in the home. It has been analyzed and tested that leaving factory defaults in place is one of the biggest mistakes made during configuration by vendor administrators. So, from a tester's viewpoint, one always tries the default username accounts and factory-enabled passwords first. For this specific lookup, a tester has to be well versed in the specifications of various ADSL routers; without that knowledge, it is quite hard to detect and test the factory-enabled credentials. The best way is to search for the manual or specification of the device deployed in your home. Another good approach is to find websites which provide centralized information on all devices, including factory settings; the credential information is one of the prime pieces of information. One point to remember is that there can be a number of standard users enabled by default. One of the biggest mistakes during testing is that some penetration testers only scrutinize the standard user, such as admin, and not the backup or recovery accounts. Even if the admin password is configured appropriately, there is still a possibility that other accounts are not configured or remain in the default factory state. So always test the device in a diversified manner rather than sticking to the simplistic ways. Usually the most common combination is username=admin and password=admin, but I suggest adding http://www.routerpasswords.com to your auditing kit when testing ADSL routers and other home devices. This website has a huge database, and the good part is that all the credential information is centralized, so it can be processed efficiently. Let's have a look (see Figure 5). The snapshot shows the active website layout for mining credential information. Figure 6 presents a layout that defines the factory default settings. From an administration point of view, watch your actions and calculate the risk reduction gained by changing these default settings.
A device can be deployed in a number of ways based on the requirements, and the administrator can enable a standard set of consoles to configure and gain access to the device remotely. If the benchmark is considered, it is deployed according to the rule set mentioned below.
Let's have a look at the pwned BCM96638 ADSL router (Figure 7). The layout shows the output of the dumpcfg command executed on the console. Let's look at the piece of information presented in Listing 1. The configuration shows the credential-based information. This ADSL router provides access through three different passwords: sys [system], spt [support] and usr [user]. Looking at the encoding scheme, it is clear that base64 encoding has been applied. I have already mentioned the weaknesses of base64 encoding at the beginning of the paper. If encoding is not enabled, the passwords will be in clear text, but access to the console is still required. An inherent vulnerability, if exploited, can provide direct access to the configuration; it depends on the deployment and the configuration. After decoding, the password strings match as shown below:
sysPassword value="cGFzc3dvcmQ=" -> password
sptPassword value="c3VwcG9ydA==" -> support
usrPassword value="dXNlcg==" -> user
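The decoding can be reproduced with the standard base64 utility; a quick sketch:

```shell
# Decode the three password strings found in the configuration dump.
for enc in cGFzc3dvcmQ= c3VwcG9ydA== dXNlcg==; do
  printf '%s -> %s\n' "$enc" "$(printf '%s' "$enc" | base64 -d)"
done
# -> cGFzc3dvcmQ= -> password
# -> c3VwcG9ydA== -> support
# -> dXNlcg== -> user
```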
So you can try the same strings as username and password to access the console prompt. These credentials work through FTP and HTTP access too.

Point-to-Point Protocol is used for node-to-node communication and provides services such as authentication, privacy, encryption, and decompression of the data flowing between source and destination. Usually broadband and dial-up connections, which provide an interface to connect to the ACS server of the service provider, require authentication. This is different from the ADSL router's configuration or access credentials.

SNMP is one of the prime sources of the information leakage that happens through the network. This is the result of improper management of the device, or of not changing the default settings of the SNMP agent. Most of the time the SNMP agent is enabled and administrators do not even know about it. So, from a tester's perspective, SNMP should be enumerated to trace information leakage. Primarily, the community strings remain unchanged as private and public. Let's analyze a compromised ADSL router (see Figure 8). The screenshot shows that the SNMP agent is enabled and that it can be enumerated using the default strings. Let's see the enumeration (see Figure 9). In this case there is no Management Information Base (MIB) present, so the SNMP walk is not successful. One can use different tools to pwn devices through SNMP.
Exploiting Vulnerabilities
One of the prime parts of any testing methodology is the exploitation of released or unreleased vulnerabilities. After the enumeration and reconnaissance phases, one should test for the requisite set of vulnerabilities. The reason for this is that a number of deployed DSL router boxes have never been upgraded: the inherent software still has vulnerabilities, and only newer versions are patched. Mostly, the vulnerabilities revolve around the following classes:

Authentication Bypass,
Cross-Site Scripting,
Cross-Site Request Forgery,
HTTP Parameter Pollution,
Denial of Service, etc.
The common set of vulnerabilities should be tested effectively. Let's analyze some released exploits and the structure of attacks on ADSL routers. Some of the released vulnerabilities can be found at the links below:

Siemens ADSL SL2-141 CSRF Exploit
Linksys Wireless ADSL Router (WAG54G V.2) httpd DoS Exploit
Belkin wireless G router + ADSL2 modem Auth Bypass Exploit
Cisco Router HTTP Administration CSRF Command Execution Exploit
A-Link WL54AP3 and WL54AP2 CSRF+XSS Vulnerability
For example, see the XSS vulnerability in an admin login interface (Figure 10).
Secunia has released an analysis of an ASUS AAMDEV600 ADSL router and the resulting impact of a vulnerability in recent years (Figure 11). The impact is diversified when it comes to real-world scenarios. So, looking at all the facts and perspectives, testing should be done in an appropriate way, keeping all these issues in mind.
On the 'Net
http://wiki.openwrt.org/TableOfHardware
http://www.linksysinfo.org/forums/showthread.php?t=47124
http://www.linuxelectrons.com/features/howto/consolidatedhacking-guide-linksyswrt54gl
http://www.openwrt.org
http://dd-wrt.com
Conclusion
The testing of embedded ADSL routers requires a step-by-step approach. The benchmark testing should be done in a diversified manner irrespective of the results. This not only enhances the testing sphere but also makes the testing pattern interesting. This paper is designed to sum up a well-structured methodology for users as well as testers, and it serves as a base for all sorts of testing against embedded ADSL routers.
Aditya K Sood
Aditya K Sood is a Sr. Security Researcher at Vulnerability Research Labs (VRL), COSEINC. He has been working in the security field for the past 7 years. He also runs an independent security research arena, SecNiche Security. He is an active speaker at security conferences and has spoken at EuSecWest, ExCaliburCon, XCon, Troopers, OWASP, XKungfoo, CERT-In, etc. He has written a number of whitepapers for Hakin9, Usenix, Elsevier and BCS, and has released a number of advisories to forefront companies. Besides his normal job routine he loves to do a lot of web-based research and the design of cutting-edge attack vectors.
DIDIER STEVENS
To fully benefit from the content of this article, you should have some experience writing WIN32 programs in C/C++ and WIN32 shellcode, and understand the differences between both approaches. For the purpose of this article, I define shellcode as position-independent machine code. Normally shellcode is written with an assembler, and the developer pays attention to creating position-independent code; in other words, code that will execute correctly whatever its address in memory. The compiler I use is Visual C++ 2008 Express: it is free and supports inline assembly. Shellcode generated with this method is dynamic: it does not use hard-coded API addresses that would limit the shellcode to specific versions of Windows. The method uses Dave Aitel's code, described in his book The Shellcoder's Handbook, to look up the addresses of the necessary API functions. Being able to debug the shellcode inside the Visual C++ IDE during development is an important requirement for me. Developing shellcode is not easy; we can use all the help we can get, and a visual debugger is certainly welcome. Several design decisions were made because of this debugging requirement. The method does not try to generate compact shellcode. If size is a problem, start with small, optimized handwritten shellcode and call the shellcode generated by the C compiler in a later stage.
The following is the full source code listing for shellcode that displays a message box; let us walk through it (see Listing 1). The source code for which the C compiler will generate shellcode is located between the functions ShellCodeStart and main (lines 45 to 213). Function main and the following functions are not part of the generated shellcode; they provide support to test and debug the shellcode and to automatically extract the shellcode from the generated PE file (.exe file). As its name implies, ShellCodeStart (line 45) is the start of our shellcode. It calls function ShellCodeMain, and then it returns. That is all it does. Because this simple code is actually written in assembly language and not in C, we have to tell the compiler; the __asm construct is what we need to achieve this:
__asm {
    call ShellCodeMain
    ret
}
When a C compiler emits machine code for a normal function, it adds instructions to set up and tear down the stack frame (an internal C data structure to store arguments and automatic variables on the stack). This code is called the prolog and epilog, respectively. We do not need a stack frame in the ShellCodeStart function.
020 #define KERNEL32_LOADLIBRARYA_HASH 0x000d5786
021 #define KERNEL32_GETPROCADDRESSA_HASH 0x00348bfa
022
023 typedef HMODULE (WINAPI *TD_LoadLibraryA)(LPCTSTR lpFileName);
024 typedef FARPROC (WINAPI *TD_GetProcAddressA)(HMODULE hModule, LPCTSTR lpProcName);
025
026 // Add your API function pointer definitions here:
027 typedef int (WINAPI *TD_MessageBoxA)(HWND hWnd, LPCTSTR lpText, LPCTSTR lpCaption, UINT uType);
028
029 struct SHELL_CODE_CONTEXT
030 {
031     TD_LoadLibraryA FP_LoadLibraryA;
032     TD_GetProcAddressA FP_GetProcAddressA;
033     char szEmptyString[1];
034
035     // Add your module handles and API function pointer members here:
036     HMODULE hmUSER32;
037     TD_MessageBoxA FP_MessageBoxA;
038 };
039
040 void ShellCodeMain(void);
041 int WriteShellCode(LPCTSTR, PBYTE, size_t);
042 void *ShellCodeData(void);
043
044 void __declspec(naked) ShellCodeStart(void)
045 {
046     __asm {
047         call ShellCodeMain
048         ret
049     }
050 }
051
052 #pragma warning(push)
053 #pragma warning(disable:4731)
Listing 1b. continued...

117     add ebx, eax
118 find_procedure:
119     mov esi, [edx]
120     pop eax
121     push eax
122     add esi, eax
123     push 1
124     push [ebp+12]
125     push esi
126     call hashit
127     test eax, eax
128     jz found_procedure
129     add edx, 4
130     add ebx, 2
131     jmp find_procedure
132 found_procedure:
133     pop eax
134     xor edx, edx
135     mov dx, [ebx]
136     shl edx, 2
137     add ecx, edx
138     add eax, [ecx]
139     pop ecx
140     pop edi
141     pop esi
142     pop ebx
143     mov esp, ebp
144     pop ebp
145     ret 0x08
146 hashit:
147     push ebp
148     mov ebp, esp
149     push ecx
150     push ebx
151     push edx
152     xor ecx, ecx
153     xor ebx, ebx
154     xor edx, edx
155     mov eax, [ebp+0x08]
156 hashloop:
157     mov dl, [eax]
158     or dl, 0x60
159     add ebx, edx
160     shl ebx, 0x01
161     add eax, [ebp+16]
162     mov cl, [eax]
163     test cl, cl
164     loopnz hashloop
165     xor eax, eax
166     mov ecx, [ebp+12]
167     cmp ebx, ecx
168     jz donehash
169     inc eax
170 donehash:
171     pop edx
172     pop ebx
173     pop ecx
174     mov esp, ebp
175     pop ebp
176     ret 12
177 totheend:
178     }
179
180     *pFP_LoadLibraryA = FP_LoadLibraryA;
181     *pFP_GetProcAddressA = FP_GetProcAddressA;
182 }
183
184 #pragma warning(pop)
185
186
187
188 // Write your custom code in this function.
189 // Add extra functions as needed.
190 void ShellCodePayload(SHELL_CODE_CONTEXT *pSCC)
191 {
192     char szHello[] = {'H', 'e', 'l', 'l', 'o', '\0'};
193     pSCC->FP_MessageBoxA(NULL, szHello, pSCC->szEmptyString, 0);
194 }
195
196 void ShellCodeMain(void)
197 {
198     SHELL_CODE_CONTEXT scc;
199
200     ShellCodeInit(&(scc.FP_LoadLibraryA), &(scc.FP_GetProcAddressA));
201
202     scc.szEmptyString[0] = '\0';
203
204     // Add your own API function initialization code here:
205     char szuser32[] = {'u', 's', 'e', 'r', '3', '2', '\0'};
206     char szMessageBoxA[] = {'M', 'e', 's', 's', 'a', 'g', 'e', 'B', 'o', 'x', 'A', '\0'};
207     scc.hmUSER32 = scc.FP_LoadLibraryA(szuser32);
208     scc.FP_MessageBoxA = (TD_MessageBoxA)scc.FP_GetProcAddressA(scc.hmUSER32, szMessageBoxA);
209
210     ShellCodePayload(&scc);
211 }
212
213 int main(int argc, char **argv)
214 {
215     size_t dwSize;
216     char szBinFile[MAX_PATH];
217
218     dwSize = (PBYTE)main - (PBYTE)ShellCodeStart;
219     printf("Shellcode start = %p\n", ShellCodeStart);
220     printf("Shellcode size = %08x\n", dwSize);
221     sprintf_s(szBinFile, MAX_PATH, "%s.bin", argv[0]);
222     printf("Shellcode file = %s\n", szBinFile);
223     if (0 == WriteShellCode(szBinFile, (PBYTE)ShellCodeStart, dwSize))
224         printf("Shellcode file creation successful\n");
225     else
226         printf("Shellcode file creation failed\n");
227
228     // Calling ShellCodeMain to debug shellcode inside Visual Studio
229     // Remove this call if you don't want to execute your shellcode inside Visual Studio
230     ShellCodeMain();
231
232     return 0;
233 }
234
235 // Function to extract and write the shellcode to a file
236 int WriteShellCode(LPCTSTR szFileName, PBYTE pbShellCode, size_t sShellCodeSize)
237 {
238     FILE *pfBin;
239     size_t sWritten;
To instruct the compiler to omit the epilog and the prolog, we decorate ShellCodeStart with the attribute __declspec(naked). The purpose of the ShellCodeStart function is twofold: to make the first byte of the shellcode the entry point, and to provide a start address to extract the shellcode from the PE file. ShellCodeMain (line 196) is the main function of our shellcode. It provides memory to store data, it calls code to

Listing 1c. continued...
240     if (S_OK != fopen_s(&pfBin, szFileName, "wb"))
241         return -1;
242     sWritten = fwrite(pbShellCode, sShellCodeSize, 1, pfBin);
243     fclose(pfBin);
244     if (sWritten != 1)
245         return -2;
246     return 0;
247 }

look up the addresses of the API functions we need, and it executes our core code. The following line (line 198) reserves memory on the stack for our data:

SHELL_CODE_CONTEXT scc;

SHELL_CODE_CONTEXT is a structure that contains all the data we need throughout our shellcode, like the addresses of WIN32 API functions. It is passed (to the functions which need it) via a pointer.
We store the structure on the stack (as an automatic variable) to make our shellcode position-independent. Declaring the variable for the structure as static would instruct the C compiler to store it in the data segment, which is not position-independent and thus unsuitable for our shellcode. For our MessageBox shellcode, the structure contains these members (line 29): see Listing 2. FP_LoadLibraryA and FP_GetProcAddressA are function pointer variables to store the addresses of the kernel32 exports LoadLibraryA and GetProcAddress. Remember that we write dynamic shellcode: we do not use hardcoded API addresses but look them up. szEmptyString is a variable to store the empty string "" (which is different from NULL). We cannot use string literals directly in our shellcode, as the C compiler would store them in the data segment. The workaround I use is to build the strings with code and store them in variables on the stack; this way, the strings are part of our shellcode. As the empty string is needed in several functions, I decided to store it in the shellcode structure.
hmUSER32 is a variable to store the handle of the loaded user32 DLL (the one exporting MessageBoxA). FP_MessageBoxA is a function pointer variable for MessageBoxA. After creating the variables on the stack, we need to initialize them. Function ShellCodeInit (line 57) implements The Shellcoder's Handbook's code to dynamically look up the addresses of the kernel32 exports LoadLibraryA and GetProcAddress. The code does this by searching through the process' data structures that contain the list of loaded modules and exported functions. To avoid the use of strings for the names of the functions, it uses hashes (lines 20 and 21): see Listing 3. These two API functions are all we need to look up other API functions. The code used up till now is a template you will reuse for all other shellcode developed with this method. Initializing the empty string is easy (line 202): scc.szEmptyString[0] = '\0';
Now we need to look up the address of MessageBoxA with the help of LoadLibraryA and GetProcAddressA. MessageBoxA is exported by user32.dll. We need to reference this module, and maybe load it if it is not already loaded inside the process where our shellcode will execute. We do this with LoadLibraryA with argument user32. user32 must be a string, but remember that we cannot write the literal string user32 in our C code, because the C compiler would store the string user32 in a place inaccessible to our shellcode. The trick I use to avoid this is to initialize the string with an array of characters (line 205):
char szuser32[] = {'u', 's', 'e', 'r', '3', '2', '\0'};
This statement forces the compiler to emit code that will store each individual character of the string user32 (together with the terminating 0) in stack variable szuser32, thus dynamically building the string at runtime. An advantage of this approach is that the string becomes part of the position-independent code itself.
Now we can lookup the address of module user32, which we need to lookup the address of API function MessageBoxA (line 207):
scc.hmUSER32 = scc.FP_LoadLibraryA (szuser32);
For each function you need from the WIN32 API (like MessageBoxA), you will need to define a function pointer type:

typedef int (WINAPI *TD_MessageBoxA) (HWND hWnd, LPCTSTR lpText, LPCTSTR lpCaption, UINT uType);

To know exactly what return value type and argument types to declare, look up the API function on MSDN (http://msdn.microsoft.com/en-us/library/ms645505%28VS.85%29.aspx). Pay particular attention to API functions that take strings as arguments. There are two variants of these functions: an ASCII variant and a UNICODE variant. I use the ASCII variant of MessageBox: MessageBoxA. We look up its address with:

scc.FP_GetProcAddressA(scc.hmUSER32, szMessageBoxA);

This is all we need to set up the environment our shellcode needs to execute properly (in our example, call MessageBox). If you wonder why I decided to look up MessageBoxA with LoadLibraryA and GetProcAddressA, and not with The Shellcoder's Handbook's code and the appropriate hash, you raise a valid point. There is no reason why you cannot use The Shellcoder's Handbook's method to look up MessageBoxA. But this implies that you have to calculate the hash of MessageBoxA and write a couple of lines of assembly code to call getfuncaddress. And this is something I wanted to avoid when I worked out my shellcode-generation method. With my method, you do not need to write assembly code, but you can if you want to.

Now that our environment is ready, let us execute our core code. We do this by calling ShellCodePayload and passing it a pointer to the shellcode control structure (line 210):

ShellCodePayload(&scc);

ShellCodePayload (line 190) is easy to understand:
ATTACK
We declare and populate a string with the value Hello, and then call MessageBoxA through the function pointer FP_MessageBoxA, passing it the necessary values, like the string to display.

To generate the shellcode, you compile the C program. To extract the shellcode from the generated PE file, you run the program. The shellcode will be saved to the same directory as the .exe file, with extension .bin. In the example, I also call ShellCodeMain in the main function (line 230). This executes the shellcode when you run the program, and allows you to debug your shellcode inside Visual Studio Express using all of its great debugging features! If you do not want your shellcode to execute when you compile it, remove the call to ShellCodeMain from function main.

You will also need to change a couple of properties of your Visual Studio project to instruct the C compiler to emit appropriate code usable as shellcode. The C compiler must not emit UNICODE binaries, must not optimize your code, and must not add stack protection code. Set the project properties shown in Figures 1 to 4. When you are ready to generate your final version, instruct the compiler not to emit debug code: switch to Release instead of Debug, and remove all breakpoints you have set (see Figure 5).

One more point you need to pay attention to: do not use functions from the standard C library, like strcpy. If you need these functions, either write them yourself or use similar functions found in ntdll.dll.

A typical feature found in shellcode is the inclusion of data (e.g. a file) at the end of the shellcode. It is also possible to write C code to achieve this. What follows is an example with MessageBox that displays a string appended to the end of the shellcode. We need to add a member to the shellcode context structure to store a pointer to the appended data:
void *vpData;

and add after ShellCodeMain: see Listing 4. The last step is to call MessageBox with this string in function ShellCodePayload:

pSCC->FP_MessageBoxA(NULL, (LPCTSTR)pSCC->vpData, pSCC->szEmptyString, 0);

Running this shellcode (with appended string) displays a MessageBox with the string at the end of the shellcode (starting with the first _emit statement). If you need to change this string, you can just open the shellcode file with a hex editor and replace the string with your string. No need to change the source code and recompile the project.

When you disassemble shellcode generated with this method, you will notice a series of 0xCC or INT3 instructions we did not add to our source code. These 0xCC bytes are added by the compiler to align each function on a 16-byte boundary. They make the shellcode larger than necessary. If you want to convert the shellcode from its binary format to assembly code, you will need to use a disassembler. As the nasm assembler is my favorite assembler (it is free), I use its complementary disassembler ndisasm. It requires some manual code cleanup before you can reassemble it with nasm.

In our example, our shellcode exits by returning (ret instruction). But you can code other exits:

call ExitProcess
call ExitThread
set SEH and cause an exception
I used this method to generate shellcode for Joachim Bauch's MemoryLoad program. This is C code that loads a DLL directly from memory (instead of from disk). I adapted his source code with the techniques of my method and was able to generate shellcode that loads a DLL from memory. The DLL has to be appended at the end of the shellcode. You can download the templates and examples from my blog: http://blog.DidierStevens.com/software/shellcode.
Didier Stevens

Figure 5. When we execute the program, the shellcode is extracted and saved, and then executed
34 HAKIN9 4/2010
Didier Stevens is an IT Security professional specializing in application security and malware. Didier works for Contraste Europe NV. All his software tools are open source.
SALVATORE FIORILLO
Difficulty
Moving through academic papers and industrial documents, this article introduces the particular nature of the non-volatile memories present in today's mobile phones: how they really work and which challenges they pose to forensic investigators. Then an advanced test is presented, in which some brand-new flash memories have been used to hide data in man-made bad blocks: the aim is to verify whether forensic software tools are able to acquire data from such blocks, and to evaluate the possibility of hiding data from analysts' eyes.
Keywords

Mobile forensic, OneNAND, NAND, NOR, bad blocks, wear levelling, ECC, FTL

A Mobile Equipment (ME) is here understood as the radio handset portion of a more generic mobile phone (Jansen and Ayers, 2007), made of various components, the most important of which are presented in the representation (see Figure 1). During its evolution the mobile phone passed from the PDA phase up to today's smart phones, which lessen the differences with personal computers (ibid). Storage capability also increased dramatically, ranging from a few kilobits at the very beginning up to several gigabits in current mobile phones, increasing the space where data can be stored or hidden, and adding complexity to the work of law enforcement officers (Al-Zarouni, 2006): this paper aims to contribute to shifting the flash forensic field from the knowable to the known Cynefin domain (Kurtz and Snowden, 2003).

On today's mobile equipment there are generally two memories: one for the operating system (the NOR flash) and the other (the NAND flash) for user data (Chang and Kuo, 2004). The extent of this paper is limited to data stored in NAND flashes: volatile RAM and SIM card analysis will be kept aside.
Flash memory is a non-volatile memory that can be electrically erased and rewritten with a specific process: like a hard disk (even if very different, given the lack of physical mechanisms), flash memory does not need power to maintain the data stored in the chip for future access (OKelly, 2007). Descending from the EPROM, the two main kinds of flash memory are NAND and NOR. NOR flash has long erase and write times, but it is nearly immune to corruption and bad blocks, allows random access to any memory location, and almost all controllers on mobile phones have a NOR interface (Pon et al., 2007). NAND flash offers higher density capabilities, is cheaper than NOR, is less stable, needs a separate supporting RAM to work (ibid) and allows only sequential access (Gal and Toledo, 2005). In mobile equipment the NOR usually stores executable software (i.e. the BIOS) and the NAND stores user data.
Code model

There are two techniques to execute program code on flash devices (Numonyx, 2008a): Store and Download (SnD), requiring external RAM, and eXecute in Place (XiP), faster than SnD.

One-way programming

Flash devices are only able to program a value from 1 to 0, but not from 0 to 1, so when data is updated, it is written to a new location and the old location is marked as invalid (Numonyx, 2008a). The invalid location is then erased, usually during a background process, before being reused.
Unlike hard disks, the erase-write cycle in flash memories is a physically exhausting activity, so the lifetime of a flash memory is inversely proportional to its use. A location can be programmed and erased reliably up to 100,000 and 10,000 times respectively and, as a general rule, the following formula can be used to calculate the expected lifetime of a NAND flash with a FAT filesystem (Numonyx, 2008a). Techniques to circumvent the problem of flash wearing will be discussed in the next pages.
FAT overhead includes all the management activities the filesystem needs to perform for file administration (Hendrikx, 1998)
Figure 1. Old mobile equipment layout with optional NAND module (Kwon, 2009)
Figure 2. Basic design of memory chip (left) and flash memory links (right) (OKelly, 2007)
The Flash Filesystem Architecture is based on logical units (LUN), blocks, pages and sectors (Intel, 2006, Numonyx, 2008a, Samsung, 1999). A LUN is a logical division of the whole memory land. LUNs are then split into blocks. Blocks can vary in size; the most common is 128KB. In the majority of NAND flash devices each block is made of 64 pages of 2KB each. A page is divided into two regions: the data area, and the spare area used for memory management purposes (more later). Pages are divided into sector units (or chunks) of 512 bytes to emulate the popular sector size (ibid). The block is the smallest erasable unit, while the page is the smallest programmable unit.
Figure 3. Store and Download Code Model (left) and XiP Code Model (right) (Numonyx, 2008a)
At first, a page was 528 bytes long: the original intent of NAND flash was to replace magnetic hard disk drives, so a page was required to be big enough to store one sector (512 bytes) of data with an extra 16 bytes for management purposes (Inoue and Wong, 2004). Then, as the storage capacity of flash increased, so did the default page size, to comply with the FAT file system. On a 1Gb flash memory there are 128 MB of addressable space: for hard drives sized up to 128 MB, the default cluster size in the FAT system is 2KB with 4 sectors each, as in the flash memory except for the extra bytes (64B) (Microsoft, 2009)
The spare area, also referred to as out-of-band data, is a region generally made of 16 bytes, and there is one for each sector or chunk (Gal and Toledo, 2005, Raghavan et al., 2005); its size is not included in the device capacity and it cannot be directly addressed (Elnec, 2009). One use of the spare area is to store results of data verification: after a page has been erased, programmed or read, its status is verified with a particular algorithm (aka ECC, more on this next) and the relative output is later used to detect errors whenever the data is read back (BPMicrosystems, 2008). The spare area can also store information on the status of blocks and pages (Tsai et al., 2006), or other information similar to the metadata seen in the NTFS filesystem (Carrier, 2005, Casey, 2004). The following is a representation of the spare area in Samsung OneNAND; for further information see (Samsung, 2005a). There are two storage methods for spare areas: adjacent to the data area, or separate from it (Micron, 2006a). Looking at most of the Samsung datasheets, the second model seems to be the one they mainly use.
The main differences between flash devices and hard disks are (Raghavan et al., 2005):

standard size of sectors (see flash sector block size in Figure 10);
unlike hard disks, the write and the erase operations in a flash device can be independent activities, related to the software using the flash apparatus;
flash chips have a limited lifetime due to erase wearing;
flash devices can be powered off without a proper shutdown and still have consistent data: this is not possible in the case of a hard disk with normal file systems, so embedded systems need a specific, flash-oriented file management.

Figure 7. Assignment of the spare area in the Internal Memory NAND on OneNAND (source: Samsung)
A file system is a data structure that represents a collection of mutable random-access files in a hierarchical name space (Gal and Toledo, 2005). To operate with a (legacy) host filesystem, NAND flash memories require either a specific filesystem or a specific driver. Actually we have both: there are specific flash file systems (like YAFFS, JFFS, UBIFS, and LogFS) as well as a specific driver better known as the Flash Translation Layer. The FTL is a driver that works in conjunction with an existing operating system (or, in some embedded applications, as the operating system) to make linear flash memory appear to the system like a disk drive (Intel, 2006). The main mission an FTL carries out is to support all tasks required for managing data transparently to the host filesystem: i.e. a FAT filesystem will delegate to the FTL all activities required to store and retrieve data properly to/from the NAND flash devices (BPMicrosystems, 2008, Intel, 1998, Morris, 2007).
Figure 8. Spare Area Assignment in the Internal Memory NAND on OneNAND (source: Samsung)
The FTL's main tasks are:

Mapping the storage area in virtual small sectors
Managing data on the flash so they appear to be written in place
Housekeeping: as flash memories are subject to wear, software is required that levels the use of memory areas.

An FTL for NAND can come in several flavours: it can be the one made by the manufacturer and embedded in the device (i.e. Samsung), the one embedded in a flash-oriented operating system filesystem (i.e. YAFFS), or it can be made by a flash manufacturer as a port for a specific operating system, like Unistore II made by Samsung for Symbian OS (Morris, 2007, Samsung, 2006b). For more info on algorithms and data structures see (Gal and Toledo, 2005).

Coming back to UBIFS: it is a new flash file system developed by Nokia engineers with the help of the University of Szeged, and may be considered the next generation of the JFFS2 file-system (MTD_group, 2008).
Figure 10. Standard size of sector block of devices under 256 Mb and over 512 Mb density (source: Samsung)
When data in flash memory is updated, it is not possible to program the same page, due to the one-way programming peculiarity of flash devices, so the page containing the to-be-updated data is entirely rewritten to a new location (which may or may not be in the same block). In the spare area, the page with new data is marked as valid (live), while the old one is marked as invalid (dead). When the number of dead pages in a block exceeds a given clearance, all live pages are rewritten to new locations and the block is erased to allow future programming: this is an underground process called Reclaim or Garbage Collection, and it is activated without user involvement and at no fixed time (Tsai et al., 2006). Note: the example above uses only two blocks, but in the real world reclaiming could involve more blocks.

To avoid excessive wear of some areas over others, a process called Wear Levelling manages blocks so that they are used evenly: there is a static wear levelling and a dynamic wear levelling, both attempting to extend the lifetime of the flash (Numonyx, 2008c, Jones, 2008). The wear levelling procedure can be embedded in the firmware of the flash memory or left to the care of the host file system (Numonyx, 2008b, Numonyx, 2008c, Jones, 2008, JI et al., 2009).

Data in the invalid blocks or dead pages can store information of interest for the forensic analyst and should be acquired before Reclaim takes place: analysts are asked not to alter the state of the evidence, but as Wear Levelling and Reclaim are underground processes, this requirement is difficult to satisfy.
Figure 12. The Reclaim process as part of the Wear Levelling policy
A page can be programmed, erased and read; after each operation it is necessary to verify the status of the page. To perform this verification, flash devices use a verification algorithm that produces a sort of hash/CRC value for each accessed page: the value is then stored in the spare area (Numonyx, 2008d). This algorithm is generally referred to as the Error Correction Code. If a bit error is detected after the read phase, it can be recovered by ECC; if the error is detected after an erase or program operation, the block is marked as bad.
Figure 13. The state of blocks before and after Reclaim (Intel, 2006)
The SBB strategy can cause a shift between the physical and logical arrangement of data in flash devices with more than one LUN. SBB could also lead to a block encroachment, where a block from a partition (B) is retrieved to be at the service of a previous and contiguous partition (A). That is, it will be possible to have two blocks with the same number (BPMicrosystems, 2008, Breeuwsma et al., 2007).
Figure 14. Skip Bad Block (left) vs. Reserve Block Area replacement strategy (right) (BPMicrosystems, 2008)
When utilizing RBA, partitioning of data is not done and the device is simply divided
Figure 15. Block encroachment (left) and Block number duplication (right) (ibid)
into a user block area and a reserve block area (BPMicrosystems, 2008, Samsung, 2006a). A proprietary table is used to map bad blocks to the RBA. In case the table gets lost, it should be possible to reconstruct a new one by reading flags in the spare area of all blocks, even if some authors think this is an extremely difficult task (Inoue and Wong, 2004).

Figure 16. Partitioning for Skip Bad Blocks (left) and Reserve Block Area (right) (White, 2008)

(Usually this chapter is set at the beginning of any flash document: the reason it has been set here is that, in the author's opinion, the point of view of engineers is not the same as that of evidence analysts.) When the FTL logic and related functions are embedded in the NAND, the flash is categorized as managed NAND; when the FTL is under the care of the host filesystem (the logic is external to the NAND), the flash is said to be raw NAND. Raw NAND contains just the flash memory array and a Program/Erase/Read (P/E/R) controller (Pon et al., 2007). For forensic analysis, it is fundamental to consider the difference between raw and managed NAND, with particular regard to the effects of reclaim and bad block management.
In 2003, Samsung developed a new unified flash memory device for code and data storage: the OneNAND. This device has both the high-speed data read function of NOR flash and the high-speed write capability of NAND flash. At the time of writing, the data storage capability of the NAND area is up to 16 Gb. OneNAND has a NOR interface, so the chipset detects the OneNAND as NOR, while the data can be stored directly in the NAND area using multiplexed access lines. OneNAND is classified as a raw NAND with internal ECC capabilities (Samsung, 2005b).
Figure 18. OneNAND layout (left) and Historic vs OneNAND architecture (right) (Samsung)
Salvatore Fiorillo
I am a security consultant and researcher focused on weaknesses of physical and digital systems. Holding a Master of Computer Security completed in Western Australia and the ISO 27001 certification, I have trained hundreds of security officers in both public and private organizations. As a consultant I work only for a few, interesting and selected customers.
TIMOTHY KULP
Difficulty
In this article we will walk through how to build a Threat Model, explaining the process along the way. For the tutorial, our team works for XYZ Motorsports, a high-end car manufacturer. XYZ has asked that we build a new feature for their website, a Custom Car Designer that will allow users to build their car online and send the results to a database where a sales clerk can retrieve them at the dealership. As with any development project, the basics of the SDLC (Systems Development Lifecycle) begin with determining system requirements. Working with XYZ we need to determine what the system should be able to do, how people are going to access it, and a slew of other questions that are too many to mention in this article. The answers to these questions will form the requirements that the system must meet in order to be complete. These system requirements will drive the very first activity of our Threat Modeling process: determining the boundaries of the Threat Model.
To diagram our system we need to illustrate the elements that comprise the system in a Data Flow Diagram (DFD). A DFD illustrates how elements of a system work together. DFDs are not specific; their purpose is to convey an idea of how the system will work without going into the details you would expect in UML.
Out of Bounds:
System can be physically stolen
User count will be small
System will be accessible on mobile devices
Building Boundaries
Threat Modeling can be a daunting, open-ended task unless you define boundaries for the model. These boundaries will limit our Threat Model to a scope that is relevant and meaningful for our project. Later in the Threat Modeling process, we will be brainstorming all the possible threats that could affect our system. The boundaries are important because they keep you focused on actual threats to your system and not just building an enumeration of all the threats in the world. In our example, we know that XYZ Motorsports has asked that we build a new feature for its website. A sample boundary would be: System is susceptible to web-based attacks. Another boundary would be: System is open to anonymous users. These boundaries describe what we need to keep in mind as we are building this system. An example of an out-of-bounds statement would be: System can be physically stolen. Theft is a good
Table 2. DFD elements
Actor: Something or someone who interacts with the system but is outside of the system. Examples: User, Web Service, etc.
Process: A symbol of what is happening on the system, or the name of a process.
Data Flow: Representation of data moving from one element of the DFD to another.
Data Store: Where the data gets stored in the system. Examples: Database, XML file, registry, etc.
Trust Boundary: The divider between two systems stating where the level of trust changes.
DEFENSE
Table 2 shows the various shapes used in a DFD to illustrate what is happening in the system. Threat Model DFDs add a shape to the standard DFD elements: the Trust Boundary. Trust Boundaries segment the areas of a DFD where the amount of trust changes. An example of this would be between a Web Server and a Database Server. You could separate the two with a Trust Boundary to illustrate that users who can access the Web Server cannot necessarily access the Database Server as well.

A system's flow is illustrated by combining the DFD shapes. This allows a broad audience to view and understand the diagram regardless of technical ability. Figure 1 is the Context Diagram (the most general diagram) of our system. The Context Diagram DFD shows a User (Actor) who sends information (Data Flow) to the Custom Car Designer (Process) feature we are building for XYZ Motorsports. There is a trust boundary between the Actor and Process because the User is an anonymous web user who is accessing the Custom Car Designer. We do not trust the user to do anything but use the process we have defined. As the User sends information to the Custom Car Designer, that information is Sent to Storage (Data Flow) and saved in our Database (Data Store). The second trust boundary separates the Web Server from the Database Server. Users who access the Web Server cannot necessarily access the Database Server; a permission change is necessary. At the bottom of the diagram, you can see the data retrieval process. Notice, this diagram really does not tell us much about the system other than the basics. To get more details we need to go into a Level 0 Diagram, which is an expanded view of a Process in the DFD.

Table 3. STRIDE and Counter STRIDE
Spoofing: Authentication
Tampering: Integrity
Repudiation: Non-Repudiation
Information Disclosure: Confidentiality
Denial of Service: Availability
Elevation of Privileges: Authorization
Figure 2 is the Level 0 Diagram of the Custom Car Designer Process. We can now see a lot more detail about what is happening in our DFD. Each Process could be further broken down into a Level 1 Diagram, but for our tutorial, we will stop here. Notice we now have two more trust boundaries, and our one Process turned into four. We also have the addition of an Actor, the Car Manufacturer Web Service. This web service is where our Custom Car Designer finds information about what features are available for which car. Remember, Actors do not always have to be a person; an Actor is someone or something (like a Web Service) outside of the system that affects the system.

The new Processes are our software tiers, which handle how the system saves user information to the database. Our company's security and development policies dictate that each tier of an application must exist on a separate server. This makes mapping the Trust Boundaries simple, because we just need to follow our policies. For our example, the Presentation Layer is an ASP.NET web application that lives on a web server. The Business Logic Layer (BLL) and Data Access Layer (DAL) exist on Application Servers inside of our network. Let us examine the Trust Boundaries:

Trust Boundary #1: User is outside of the system and is not trusted. You do not know who is using the system.
Trust Boundary #2: Car Manufacturer Web Service is outside of our network.
Trust Boundary #3: Ajax Web Interface and Presentation Layer are separate tiers from the BLL and DAL. We do not want an anonymous user who accesses the ASP.NET application to have access to the BLL or DAL DLLs.
Trust Boundary #4: Database exists on the Database Server and is separate from the Application Server that houses the BLL and DAL DLL libraries.
Sometimes defining where trust in the application changes can be subjective. When in doubt, always look to your security policies for guidance.
Now that we have our diagram, we need to Identify Threats. This is the point in the Threat Modeling process where you build your Review Team. I have found that a team consisting of the following works best for my company: developer, tester, network analyst, security professional
Which of these four methodologies you utilize will depend on your company's security policies and quality control practices. Brainstorm with your team on how to handle the threats. Look for solutions using code, default settings, or other technologies to resolve the threat. If you cannot find a suitable solution, then think about whether to cancel the feature or system. Sometimes a threat does not have a reasonable solution in the context of the system. In this
Table 5. Sample Threat List Spreadsheet

ID# 1
Threat: Change the data requested from web service, returning unexpected results
Mitigation: Discreet error messages hiding error details and allowing the system to be usable
Mitigation Complete: No
Comments: This will keep system details from being exposed when an unexpected result is encountered.

ID# 2
Threat: Intercept request and read data the user is requesting from web service
Mitigation: Use SSL/TLS
Mitigation Complete: Yes
Comments: SSL/TLS is already set up for this domain so nothing special has to be done.

ID# 3
Threat: Attack Web Server for DoS
Mitigation: Harden network
Mitigation Complete: Yes
Comments: Use existing web farm and security policies to keep systems secure.
situation, you need to discuss with the Program Manager whether the system's development should continue.

For the three threats we have identified, the team brainstorms how to resolve them. Using the Counter STRIDE (see Table 3 as a reminder) we can see that Threat #1 (Tampering) requires an Integrity control, Threat #2 (Information Disclosure) requires a Confidentiality control, and Threat #3 (Denial of Service) requires an Availability control. The team comes up with the following:

A malicious user can submit whatever they want, but we will build a check to ensure that if the system does not understand the results, it provides a generic error message. (Integrity)
To prevent malicious users from listening in on the data going between the Presentation Layer and the Car Manufacturer web service, we will use SSL/TLS to encrypt the traffic. (Confidentiality)
To ensure that the server is resilient to attack, we will make sure all security patches are applied in a timely manner, all firewalls are up to date and the application exists on our web farm (for server redundancy). (Availability)

With everyone on the team agreeing on the mitigations, we add them to the spreadsheet. As the development team implements the mitigations, they mark Mitigation Complete to YES. As you can see, our existing infrastructure has SSL/TLS and secure servers, so we have two of our three items complete.

Stepping away for a while after building your mitigation strategies gives the Review Team a clear head and fresh eyes to see things they missed before. If you have a team (outside of your standard Review Team) that is familiar with Threat Modeling, this is also a good time to bring in new people to get their analysis of the model. Validation is simply the process of double-checking your Threat Model to ensure that it addresses as many threats as you can find in your DFD. As a security professional, this is where you will be a critical asset to your team.
You will be able to confirm whether the proposed mitigation will be suitable to address the threat. Take your time with each threat. If your team is discussing an XSS threat, ask how the mitigation strategy will work from a variety of angles. A word of caution: be mindful of how aggressively you attack the Threat Model. Your team has probably worked on this for days, or at least a few hours. In my experience, non-security people are very proud of their Threat Models and the last thing they want is someone pointing out all the mistakes. Be supportive of your peers' Threat Models, providing positive, constructive criticism. Remember too that new attacks appear all the time, and you need to make sure that your Threat Model is still sound against those attacks.
Yes, Microsoft has released a Threat Modeling tool as a free download online. Using the Microsoft Threat Modeling tool, you can easily build your DFD, apply trust boundaries and then have the tool analyze your DFD for threats. Microsoft Threat Modeling tool automates the process of threat identification using the STRIDE approach. Documenting your mitigation strategies is also integrated into the tool enabling the team to only need one place to work with all their Threat Modeling data. When complete, the tool even has a Reporting feature that will allow the Review Team to produce a Threat Model report for the Program Manager to review. Microsoft has provided many tutorials about their Threat Modeling tool on the SDL website. Download the tool (which also requires Visio 2007, but the demo version will work as well) from the site then walk through some of their tutorials. They have video tutorials as well as written documentation, so there is something for every type of learner.
Summary
Secure software does not build itself. If it did, our computer systems would be much safer. Threat Modeling is a great way to verify that your system is proactive about security, and to break the "security as an afterthought" mentality prevalent in the development community today. Introduce Threat Modeling to your development team and reap the benefits of proactive software security.
On the Net
http://www.microsoft.com/security/sdl - Microsoft's SDL Website
http://msdn.microsoft.com/en-us/magazine/cc163519.aspx - Uncover Security Design Flaws Using STRIDE
http://blogs.msdn.com/sdl/archive/tags/threat%20modeling/default.aspx - Threat Modeling posts at the SDL Blog
Tim Kulp
Tim Kulp (CISSP, CEH) is an Information Security professional in Baltimore, MD. He specializes in secure software development and penetration testing web applications. In recent years Tim's focus has been working with development teams on updating applications to utilize secure coding practices and studying the security impact of Social Media.
48 HAKIN9 4/2010
KonBoot v1.1
URL: http://www.kryptoslogic.com/
Cost: Personal Licence $15.99 (+VAT in Europe); Commercial Licence $75.99 (+VAT in Europe)
Kon Boot is an application designed to get users back into their Windows based machine if they have forgotten their local password. It couldn't be easier to use either. Once you have downloaded the application from the site, it extracts into 4 folders: Help Files, Kon Boot USB, Kon Boot CD and Kon Boot Floppy. I was surprised to see the last one; I didn't think there would still be a call for it personally, but it is good to see it there for those older machines. Once the CD image was burnt to a CD, it was popped into my laptop and rebooted. As Kon Boot loads you see a nice throwback to the old ASCII days of screen menus, and then your Windows system starts to boot as normal. Now, my local system has a complex seven-digit password which I thought would be safe from this. No such luck; within seconds I was automatically logged into the system. I went to the user settings to see what was there, and XP seemed to think that there wasn't any password set on my administrator account. It offered me the option to create a password instead of what I expected to see (change password). Delving into the event logs, I noticed 4 failure audits in my Security log, right around the time the machine was booting up using Kon Boot. This is the only real evidence that someone has logged into the machine, without even using a password. Kon Boot has to be the easiest system I have used to gain access to a machine where the password has been forgotten.
Unlike a lot of the Linux disks out there, Kon Boot doesn't force you to change the password, so whatever machine you are using it on is totally yours, and there is more or less no indication that you have actually been on it. According to the specification sheet, Kon Boot is effective on Windows XP Home Edition through to Windows 2008 Enterprise Server, but it will not work on machines that have an encrypted file system. If a machine is part of a domain, it may work with locally cached credentials; unfortunately I was unable to test this. If the username you are trying to use is blank, you can log in as Guest; Kon Boot will still work, and you can then escalate up to a SYSTEM account. Now, you can't get much better access than the actual SYSTEM account itself. This tool is now a permanent resident on my USB key, and will stay there for as long as Kon Boot is still effective on Windows systems, as there is always someone out there who turns round and says "I forgot my password, can you try to fix it for me?"
Following on from last month's article, The Evil Twins: Identity Fraud and Phishing, this month's article looks at the Identity Theft Protection Services industry. We also look at the latest identity fraud threats, which highlight why individuals and businesses should consider using these types of protection services.
Some of the most common forms of identity fraud committed in 2009, and continuing into 2010 and beyond, are facility takeover fraud, current address fraud, application fraud and all-in-one frauds, and these are not restricted to the US and UK (Note: although the UK and US are referenced in this article, most of the fraud types we discuss can be found all over the world); it's a global problem. In addition, we have found that identity fraud techniques are evolving (at a similar rate to malware), which is why, as you read on, you'll find out how at risk individuals and businesses are.
Current address fraud requires a greater level of sophistication from the identity fraudster. Rather than use details from a victim's previous address, identity fraudsters impersonate their victims more successfully if they can obtain a full set of their victims' details and attempt to impersonate them at their current address. FACT: In the UK, CIFAS identified a 24% increase in current address fraud reported in 2009 compared with 2008.
Application Fraud
Application fraud occurs when someone steals enough personal information about you, such as your name, address, telephone number and Social Security Number (SSN; the UK equivalent is the National Insurance number, or NI), to enable them to apply for financial credit products in your good name. Figure 2 highlights that UK application fraud statistics declined in 2009.
Facility takeover fraud in the UK has also continued to increase. A 207% increase in 2008 compared with 2007 has been followed by a further increase of over 16% in 2009 (see Figure 1). This type of fraud is actually opportunistic: fraudsters (often family members) compromise the accounts of friends and family knowing that they will not be held liable. Another attack method is phishing, whereby fraudsters attempt to steal Internet bank login details using malicious emails, rogue security software (scareware) and malicious websites, to name a few.
Although the graph above shows Application Fraud fell by a quarter, this figure is considered misleading. The true scale of application fraud may be disguised by the current economic conditions. Not only are many unwilling to apply for credit, but due to tighter lending criteria, many applications are rejected outright before the fraud checking stage.
An all-in-one product is one where a group of financial products is offered together and the products interact (for instance a bank account off-setting a mortgage). The number of all-in-one frauds increased in the UK by almost 15% in 2009 compared with 2008 (CIFAS, February 2010). On closer inspection of the UK figures above, the increase may actually be attributable to a rise in application fraud on financial products: where the product was an all-in-one product, there were more application frauds than identity frauds. One possible reason for this might be that the nature of the product (i.e. the combination of financial services involved) means that the figures display aspects of the frauds that are affected by the various products involved. As for the low level of identity fraud, this could be attributed to the low level of identity fraud typically seen in mortgage fraud.
The identity theft protection service industry is relatively new to the UK, although in the US it has been around for some 5-6 years and is still evolving. In so far as which country leads the way, it's clear the US is some way ahead of the UK.
A new type of identity fraud scam has appeared here in the UK, called phantom flat transfers. Potential tenants are targeted by so-called landlords when they make a request to view a property and are asked to provide proof of funds by transferring money into a friend's account. The landlord then requests to see the money transfer receipt, and with this gains access to the money at the transfer agency simply by quoting the transfer number. In a recent case in the UK, a student was asked to make a transfer of GBP
Who are the companies that provide these services in the US?
There are a number of companies in the US that provide Identity Theft Protection. Most of them offer similar services, with the most common service offered being online credit monitoring. These companies include Debix, LifeLock, ID Amor, IDwatchdog, Identity Guard, IdentityTruth, Intelius, ProtectmyID, truecredit and TrustedID.
*If an application for credit is made in your good name, you also have the option of receiving an email or SMS alert. The three leading credit reference agencies in the US are Experian, Equifax and TransUnion.
The average cost of US identity protection services varies from $9 to $20 per month. It is worth noting that if you decide to purchase a credit monitoring service, you will have to pay extra for your credit score.
Here is what you should look for if you are living in the UK:

- Credit reporting / scores, i.e. providing single report or triple report analysis*
- Computer protection, i.e. anti-malware/firewall/anti-virus/password protection
- 24/7 access to trained ID Theft Resolution Specialists, including identity recovery
- Identity theft insurance (up to GBP 50,000)
- Lost wallet/cards protection; will cancel and replace your cards/passport etc.
- CIFAS Protective Registration; places a warning flag against your credit file(s)**
Figure 5. Incidence Rates and Numbers of Victims for 2003-2009 (Javelin 2010)
*If an application for credit is made in your good name, you also have the option of receiving an email or SMS alert. The three leading credit reference agencies in the UK are Experian, Equifax and CallCredit.
**CIFAS Protective Registration can also be purchased separately for GBP 14.10 for one year (correct as of March 2010).
Figure 6 shows that each quarter of 2009 saw more identity fraud cases
Who are the companies that provide these services in the UK?
Nearly 85% of the identity frauds committed against mail order accounts in 2009 were done this way. Bank accounts and communication accounts (i.e. mobile phone accounts) also witnessed large scale rises. Most telling are those current address frauds committed against plastic card accounts such as credit cards (CIFAS, February 2010). These more sophisticated types of fraud are often perpetrated by organised criminal networks exploiting individuals' use of computers and the internet to obtain, by stealth (for example through phishing attacks or malware propagation), a more complete set of an individual's private details. FACT: CIFAS, the UK's fraud prevention service, has reported a surge of almost a third in identity fraud during 2009, something that it says points to collusion between criminal gangs and staff working inside financial services companies. This is an apparently damning indictment of the morals of call centre staff and management working in the financial
There are a number of companies in the UK that provide Identity Theft Protection. Most of them offer ONLY credit monitoring. These companies include Callcredit, Checkmyfile, CIFAS, CPP, Equifax, Experian, Garlik and PrivacyGuard.
The average cost of UK identity protection services varies from GBP 8-10 per month (this mainly applies to credit monitoring only). In the UK there is only one company that offers an identity protection service similar to what is on offer in the US: Garlik, which charges GBP 45 for a one-year individual subscription to DataPatrol. Note, however, that Garlik DataPatrol DOES NOT offer an online credit monitoring service.
Final thoughts
The identity theft protection service industry is still very new. Unfortunately, most people don't know what identity fraud is or how it can affect them. Apart from the obvious education, awareness and media exposure, identity fraud is little understood in the UK and most developed countries, and only a fraction better understood in the US.
Julian Evans
Julian Evans is an internet security entrepreneur and Managing Director of education and awareness company ID Theft Protect (IDTP). IDTP leads the way in providing identity protection solutions to consumers and also works with large corporate companies on business strategy within the sector on a worldwide basis. Julian is a leading global information security and identity fraud expert who is referenced by many leading industry publications.
INTERVIEW
Interview with Victor Julien
to learn C-programming. Vuurmuur is an iptables management frontend. Its aim is to allow a network admin to create a good firewall ruleset without the need for iptables knowledge. What I think sets it apart is that it has real-time monitoring features. That way you're not only able to create a ruleset, but you can also closely monitor how it's operating in the same frontend.

How and why did you get involved in snort_inline?

I read the Snort 2.0 book and it had a chapter about Snort_inline. Because I was very much interested in firewalls and inline security, I decided to check it out. Then I ran into some issues. I emailed Will Metcalf, the maintainer of the project, and after discussing the issue I tried fixing it. After that I remained involved in the project. Most of the work I did was related to the TCP stream tracking and reassembly code.

What is the future of Vuurmuur?

Vuurmuur is pretty much complete for managing an IPv4 firewall right now. It supports complex firewall rulesets and traffic shaping. I'm not spending much time on it anymore. One obvious
missing feature is IPv6 support. There is a community effort for that, but it's still in its infancy and not moving very fast at the moment.

Can you tell us about what you're doing with Suricata at the OISF?

My current role is that I'm the lead developer of Suricata, OISF's open source IDS/IPS engine. I make sure our team of coders gets tasks to work on, and I review and integrate their work. Next to that I work on the more complex parts of the engine, such as the threading model and the detection engine.

Why did you get involved in writing a new IDS engine? It seems a significant undertaking when what we have does alright.

I started coding Suricata in November 2007. Initially I was just playing with some multi-threaded packet forwarding code but quickly realized the potential it had. Matt Jonkman, Will Metcalf and I had been dreaming about creating a new open source IDS for some years. Matt happened to get in touch with people in a position to fund us, and so it all started. My reason for starting Suricata is that I wanted to create a multi-threaded IDS/
IPS that is designed for inline use from the start. Both things are hard to retrofit properly into an existing program. More information about Suricata and the OISF is available at the OISF website, http://www.openinfosecfoundation.org.

What have been the major technical challenges you've encountered in building Suricata?

Programming in threads is complicated for sure, but I think we managed to get a good framework set up that allows good performance and relatively easy development. In general, performance is very important, and so is the memory footprint. It's easy to track a connection in memory. But what if you want to track a million or more? Some of the challenges involve lack of knowledge. Luckily, others can provide that. For example, for our HTTP module we asked Ivan Ristic, of ModSecurity fame, to write an HTTP parser that is security aware. His work is available to the general public as a separate project called libhtp: http://sourceforge.net/projects/libhtp/.

What kind of features are you planning to introduce in the next release of Suricata?

Currently we're working towards our 1.0 release, the first stable release. We're mostly focusing on performance, stability and a few minor missing features. In what we refer to as phase 2 we're going to be looking at a great number of new features. I'll name a couple: CUDA acceleration, something we're already working on as an experimental subproject, is one of them. An extensive IP-reputation system is another. Passive SSL decryption, at least for HTTPS, is something we're planning to go after as well. None of these goals is set in stone yet. We're planning a public OISF meeting in July where anyone that is interested can join us in talking about new ideas.

What do you see as the most significant security challenge we face now, and will face in the coming years?

I think the biggest challenge isn't technical. The problem is that there
is a lot of collaboration between the people that we need protection against. The real problem is that our defenses are extremely fragmented, both in information and in defensive technology. Hacked organizations try to keep their problems below the radar instead of sharing knowledge. Security vendors compete more than they collaborate. While both are understandable, I think this puts us defenders back a lot. Not even the biggest vendors and governments can handle the problem on their own. So, we need more collaboration. The Emerging Threats project is a great example of such a project. Its community works together to quickly provide attack signatures for bleeding edge issues. A lot of information sharing is going on. The OISF project is another attempt at doing that sort of collaboration. In this project our aim is collaborative development of new and better detection and prevention techniques. At OISF we encourage all security vendors to join us, to help us develop technology to keep up with the fast moving threat landscape, and to benefit from it. Personally, an example for me is the Linux kernel project. Even the biggest competitors work together there. I hope OISF's Suricata engine can one day reach a similar status in the network security world. Something we all build together.

What's next for you?

Who knows! :) I hope I can remain involved in Suricata for the next couple of years as there is a lot of work left to do. Next to this I'm still working as a contractor, so we will see what crosses my path! Follow my blog (www.inliniac.net/blog/) or twitter (twitter.com/inliniac) if you're interested!
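Suricata itself is written in C, but the memory-footprint point Victor raises (tracking one connection is easy; tracking a million is not) can be illustrated with a small, hypothetical Python sketch: per-record overhead that is negligible for a single flow dominates at scale, which is why a flow record's layout is worth engineering carefully.

```python
import sys

class FlowDict:
    """Naive flow record: attributes live in a per-instance dict."""
    def __init__(self, src, dst, sport, dport):
        self.src, self.dst = src, dst
        self.sport, self.dport = sport, dport

class FlowSlots:
    """Same data with __slots__: no per-instance dict, smaller footprint."""
    __slots__ = ("src", "dst", "sport", "dport")
    def __init__(self, src, dst, sport, dport):
        self.src, self.dst = src, dst
        self.sport, self.dport = sport, dport

a = FlowDict("10.0.0.1", "10.0.0.2", 12345, 80)
b = FlowSlots("10.0.0.1", "10.0.0.2", 12345, 80)
# Compare the per-record cost: object plus its attribute dict versus the
# slotted object alone. Multiplied by a million tracked flows, the
# difference becomes significant.
print(sys.getsizeof(a) + sys.getsizeof(a.__dict__), sys.getsizeof(b))
```

The same trade-off drives C-level designs like Suricata's: compact, fixed-layout records keep millions of tracked connections affordable in memory.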
INTERVIEW
Interview with Ferruh Mavituna
every day. The target was developing a web application scanner that doesn't suck. I believe we accomplished that :) Since we believe that Netsparker is superior to its competitors when it comes to finding and confirming vulnerabilities, we decided to show off our really good SQL Injection and XSS engine to the whole world and give something back to the community. That's why we released Netsparker Community Edition. We have already received some stories about how the $0 Netsparker CE identified a SQL Injection that a $25K scanner missed. Embarrassingly, I couldn't start working on the BeEF and XSS Tunnel integration yet, but hopefully when it's done it'll be pretty nice, because many people want to run XSS Tunnel with a LAMP stack and many people are already using BeEF.
We've got several other projects going on. Netbouncer is an alpha-stage, aggressive secure coding library for addressing injection issues in .NET applications. Netbouncer is something that's going to change secure coding; hopefully when it's ready for production you'll hear more about it. We have another project called SVNDigger. DirBuster does a great job at finding hidden resources in websites and has one of the best dictionaries for this task. However, I think there is a fundamental problem with how the lists are compiled. DirBuster's lists are compiled based on public, crawlable, linked URLs. However, when you do an application test you are generally after files and directories that are not linked. For example, in 10,000 websites only 50 of them might have linked to the /admin/ directory, but if you go and
So I use a fuzzer or a tool or a script which can handle it in an automated way. Finally, when I see a feature I always think about how the developer coded it, and what the different and most common ways to develop such a feature are. When you know how it can be done, it's much easier to predict common pitfalls.

Other than Netsparker, what are your top tools that you utilize the most in pentesting?

Other than Netsparker, I'm a big fan of the Web Developer toolbar (Firefox extension) and Fiddler. When I need fuzzing I go with Freakin' Simple Fuzzer or webshag, and almost always DirBuster.

Who are your top pentesting luminaries? Who do you read/respect?

I've got a huge RSS list with many security blogs; I've even got a couple of Chinese and Russian blogs that I have to use Google Translate to follow. I take a look into almost all application-related presentations and papers published at security conferences. Every other day I quickly scan Full-Disclosure, BugTraq and a couple of other mailing lists as well, and read interesting advisories, tools, papers etc. Recently I started to follow many security guys on Twitter as well, which is a nice way to quickly hear about the latest buzz.

by Jason Haddix
Jason Haddix
Jason Haddix is a Penetration Tester at Redspin, Inc. and a Security Blogger at http://www.securityaegis.com. Jason loves everything to do with (ethical) hacking, social engineering, the con community, et cetera. Jason's current projects include numerous reviews of current pentesting and incident handling teaching curricula, as well as being a main contributor to PentesterScripting.com and Ethicalhacker.net.