A few words about EuroSec2020
One of the goals of CONCORDIA is to build a European Secure, Resilient and Trusted Networked Ecosystem, and to create and promote excellent academic research, producing high-quality papers for scientific journals, conferences and workshops. Within this context, CONCORDIA also supports a number of conferences and workshops by assisting in their organization and by promoting their results and findings through the appropriate dissemination channels, so that new researchers can get in touch with a cybersecurity research community that produces fascinating results. This April it was the turn of the EuroSec 2020 workshop.
The 13th European Workshop on Systems Security (EuroSec 2020) was initially scheduled to take place in Heraklion, Crete, on 27 April, co-located with the EuroSys 2020 conference. Due to the COVID-19 outbreak, both the EuroSys 2020 conference and the EuroSec 2020 workshop were held virtually. The workshop received submissions covering a wide variety of research in computer security, seven of which were selected for presentation. These manuscripts describe outstanding work on detecting modern security attacks and also propose sophisticated defense mechanisms. Below we briefly present these submissions, along with a short description of the threat each one analyzes.
Inspecting software as it runs
People download and install software on their computers that is vital for their work or simply for entertainment. Regardless of how much we trust the software vendor, there is always a possibility that this software does something other than what it was intended for. Indeed, trusted and well-known software packages have been “caught” taking a look at files containing passwords, reading unrelated computer settings and trying to access private information. But how do researchers uncover this nose-poking behaviour? Stamatogiannakis et al. presented a tool, called PANDAcap, that monitors a piece of software while it is running and reports its activity in detail. This activity includes attempts to access all types of system resources (e.g. memory, devices, files). By closely inspecting these activities, researchers can uncover this type of suspicious behaviour.
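To give a flavour of what runtime monitoring looks like in practice, here is a minimal sketch. It is not PANDAcap itself, whose analysis goes far beyond this; it simply runs a program under the standard Linux strace utility and lists the files the program touched. The helper name and the crude log parsing are our own illustrative choices.

```python
import subprocess
import sys
import tempfile

def trace_file_accesses(cmd):
    """Run `cmd` under strace and return the set of file paths it touched.

    A minimal illustration of runtime monitoring; tools like PANDAcap
    observe far more (memory, devices, whole-system activity).
    """
    with tempfile.NamedTemporaryFile(mode="r", suffix=".log") as log:
        # -f: follow child processes; -e trace=file: log file-related syscalls
        subprocess.run(
            ["strace", "-f", "-e", "trace=file", "-o", log.name] + cmd,
            check=False,
        )
        paths = set()
        for line in log.read().splitlines():
            # strace lines look like: openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3
            if '"' in line:
                paths.add(line.split('"')[1])
        return paths

if __name__ == "__main__":
    # Trace the command given on the command line, or `ls` by default.
    for path in sorted(trace_file_accesses(sys.argv[1:] or ["ls"])):
        print(path)
```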
Making sure that the security-checking software in your system is fast and… secure
The processor in your computer faces many security challenges when it comes to performing certain types of computation. Instructions coming from untrusted sources have to be verified before being sent for execution. Thankfully, your computer can perform various checks before deeming a piece of code secure enough to enter your processor. Yet there is a trade-off with which engineers have to deal: either the suspicious code runs in a separate space from the checking program, which is secure but slow, or it runs in the same memory area as the checking program, which is fast but potentially dangerous, since malicious code can tamper with the checking methods. Wang et al. presented a clever solution that uses special instructions of modern Intel processors along with a technique called code randomization, in which the checking code continuously and unpredictably changes itself. This method allows the suspicious code and the checking code to share the same memory, keeping execution fast while protecting the checks from tampering.
Stressing operating system security with nonsensical commands
What is the most common method to gain unauthorized access to a computer? Surprisingly, it is the same as for a security vault: break it. Security experts continuously try to “stress” operating systems to see if they can simply “break” them. A popular way to stress an operating system is to run many random instructions against it. The hope is that one of them is so unexpected that the operating system has absolutely no idea what to do and simply breaks, giving full privileges to the user who executed the command. This method of security testing is called “fuzzing”. Liang et al. presented a fuzzing method that takes “hints” from legitimate commands about their structure and then randomizes them, hoping to generate commands with just the right mix of legitimacy and randomness to break the operating system.
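For readers curious about what fuzzing looks like in code, the toy sketch below starts from a legitimate input and randomly corrupts a few bytes before handing it to a target. This is only a generic illustration of the “start from something legitimate, then randomize” idea, not the approach of Liang et al., who fuzz operating-system interfaces; the fragile parser used as a target here is entirely made up.

```python
import random

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Start from a legitimate input and randomly corrupt a few bytes."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, rounds: int = 10_000):
    """Feed mutated variants of `seed` to `target` and collect crashes.

    `target` stands in for the software under test; real kernel fuzzers
    issue mutated system calls instead of calling a Python function.
    """
    crashes = []
    for _ in range(rounds):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:   # a "crash" in the toy target
            crashes.append((candidate, exc))
    return crashes

def fragile_parser(data: bytes):
    """Toy 'parser' that breaks on one unexpected input -- purely illustrative."""
    if data[:4] == b"CMD:" and data[4:5] == b"\xff":
        raise RuntimeError("unhandled command byte")

if __name__ == "__main__":
    found = fuzz(fragile_parser, b"CMD:list /home/user")
    print(f"{len(found)} crashing inputs found")
```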
Blocking malicious web requests to sites hosted in the “cloud”
Today, a large part of the computation required to keep a web page running takes place in the “cloud”. In the cloud, a user simply rents computation hours and is relieved of the whole burden of hardware maintenance, scaling and physical hosting. But what if an attacker sends millions of meaningless requests to your page, thus exhausting the allotted computation resources? Once the rented quota is reached, the cloud service will automatically refuse to serve any new request. This is known as an “application-layer” attack, and Fleck et al. built a defense mechanism that blocks requests whose sole purpose is to exhaust the resources of the hosting service, thus allowing the service to remain alive even while an attack is under way.
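As a very rough illustration of this idea, the sketch below rejects clients that issue an abnormally high number of requests within a short time window. It is not the mechanism of Fleck et al., which is considerably more sophisticated; the class name, threshold and window size are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class RequestFilter:
    """Reject clients that send more than `limit` requests per `window` seconds.

    A deliberately simple stand-in for application-layer filtering;
    production defenses use richer signals than the request rate alone.
    """

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)   # client id -> recent request times

    def allow(self, client_id: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        times = self.history[client_id]
        # Drop timestamps that have fallen out of the sliding window.
        while times and now - times[0] > self.window:
            times.popleft()
        if len(times) >= self.limit:
            return False                    # looks like resource exhaustion
        times.append(now)
        return True

if __name__ == "__main__":
    f = RequestFilter(limit=3, window=1.0)
    print([f.allow("attacker", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
    # -> [True, True, True, False]
```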
Online gaming bots are more human than we thought
Witchel et al. gave a very interesting presentation that, although it did not target a strictly defined security issue, demonstrated an interesting type of unfair behaviour in online gaming. Bots in online games can be used by players to boost their capabilities. To counteract this, gaming software uses advanced algorithms to detect automatic, non-human-like behaviour. Yet this work demonstrated that bots are far more intelligent than that, and are gradually adopting human-like behaviour in order to avoid detection.
Tampering with electoral events through Twitter bots
Speaking of automated human-like behaviour, another very active arena is Twitter. Twitter bot accounts have in the past been found to spread misinformation with a particular ideological slant, aiming to affect the outcome of electoral events. Kellner et al. performed a comprehensive study of the 2017 German federal elections covering 9.5 million tweets. Indeed, they found 79 accounts that had also been involved in the 2016 US presidential election campaign, demonstrating the pervasive nature of this tactic.
Who tests the software that tests software?
Mistakes made during software development, called bugs, happen often and are the main reason that some programs do not behave as they should. But how do developers search for bugs in their programs? There are special tools that analyze the structure of a program and help developers locate these mistakes. This raises an interesting question: how can we test these tools themselves? Liang et al. presented a method that injects random errors into existing software and then checks whether error-detection tools can locate them. Interestingly, they concluded that these tools leave many bugs undetected and have a lot of room for improvement.
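The sketch below illustrates the underlying idea in miniature: plant a small artificial bug in a piece of source code, then (in a step not shown) run a bug-finding tool over the mutated code and record whether it reports the planted defect. The mutation list and helper names are our own simplifications, not the authors' tooling.

```python
import random
import re

# Simple textual "mutations": each swap plants a plausible off-by-one or logic bug.
MUTATIONS = [
    ("<=", "<"),
    (">=", ">"),
    ("==", "!="),
    ("+ 1", "- 1"),
]

def inject_bug(source: str):
    """Plant one random artificial bug into `source`.

    Returns (mutated_source, description), or None if no mutation site exists.
    """
    for old, new in random.sample(MUTATIONS, len(MUTATIONS)):
        sites = [m.start() for m in re.finditer(re.escape(old), source)]
        if sites:
            pos = random.choice(sites)
            mutated = source[:pos] + new + source[pos + len(old):]
            return mutated, f"replaced '{old}' with '{new}' at offset {pos}"
    return None

if __name__ == "__main__":
    code = "for (int i = 0; i <= n; i++) { sum = sum + 1; }"
    result = inject_bug(code)
    if result:
        mutated, change = result
        print(change)
        print(mutated)
        # Next step (not shown): run the bug-finding tool on `mutated`
        # and record whether it flags the planted defect.
```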
(By Despoina Antonakaki, FORTH)