I have not failed. I've just found 10,000 ways that won't work. – Thomas A. Edison

MAINTAIN - Intelligent Maintenance of Software Systems


Matheus Paixao, Paulo Henrique Maia, Ismayle De Sousa Santos, Jerffeson Teixeira De Souza, Allysson Allex de Paula Araújo, Anderson Uchôa, Thiago Ferreira, Chaiyong Ragkhitwetsagul
CNPq/MCTI No 10/2023 - Universal 2023

This research is led by Dr. Matheus Paixao and the team at UECE, Brazil. We plan to adopt state-of-the-art machine learning and software engineering to develop a novel approach for software maintenance.

Code similarity applications for improving software quality


Chaiyong Ragkhitwetsagul, Prof. Peter Haddawy (Mentor)
Research Grant for New Scholar (RGNS 2021) - Ministry of Higher Education, Science, Research and Innovation (MHESI)

This research project aims to propose a novel approach to improve software quality by using code similarity techniques and to create an automated tool that detects similar code and gives relevant recommendations. The project will leverage code similarity to tackle several issues that lead to low-quality software. For example, by measuring code similarity, one can locate duplicated pieces of code, i.e., code clones, within a software project or across software projects, and properly refactor them. This results in less redundant code in the software, which is therefore easier to maintain. Code similarity can also help avoid software licensing violations caused by code reuse and can advise software developers on how to write code that is well accepted by the community. We plan to study and adopt suitable machine learning techniques for code similarity computation and evaluate their effectiveness. Lastly, we plan to deploy the tool to be used by developers and evaluate its usefulness and impact in practice.

Automated Software Engineering for Thailand Software Industry (ASETSI)


Chaiyong Ragkhitwetsagul, Jens Krinke, Morakot Choetkiertikul, Thanwadee Sunetnanta, Federica Sarro, ProGaming, Trinity Roots, Zwiz.AI, CourseSquare

The software industry has become an integral part of Thailand’s competitiveness and the country’s future development plan (Thailand 4.0 and the 20-year roadmap). The current state-of-the-art in software engineering (i.e., the application of engineering approaches to software development and its artefacts) is automated software engineering (ASE), the adoption of automated tools and techniques to help human programmers plan, develop, and maintain software. Nonetheless, ASE is still a relatively new concept in the Thai software industry.

This Thai-UK collaboration aims to improve the quality of software development processes and products in Thailand’s software industry by transferring knowledge of state-of-the-art automated software engineering (ASE) tools and techniques from UK experts.

Achilles: Automated Tool for Detecting and Visualizing npm Dependency Vulnerabilities


V. Jarukitpipat, W. Wanprasert, K. Chhun, C. Ragkhitwetsagul, R. G. Kula, T. Sunetnanta, M. Choetkiertikul

We propose the Achilles tool, which detects and visualizes dependency vulnerabilities in npm projects. Achilles assists programmers in comprehending and analyzing the potential risks of vulnerable npm packages via a dependency graph visualization and an analysis report. In an evaluation based on a user study, we found that Achilles's graph visualization helps support developers' decisions on prioritizing which vulnerabilities to fix by providing more information about complexity and direct/indirect dependencies than the state-of-the-art tools. The tool can also detect vulnerabilities that currently exist in several of the most-starred open source projects on GitHub.

JARVAN: An Automatic Checker for Reused Stack Overflow Code Snippets in Java Software Projects


P. Poolthong, P. Sirilertworakul, K. Wonwien, C. Ragkhitwetsagul

JARVAN is an automated checker for code snippets reused from Stack Overflow in Java software projects. It connects directly to GitHub to analyze code clones from Stack Overflow within GitHub repositories. Additionally, JARVAN can detect software license violations caused by reusing code from Stack Overflow. The tool analyzes a given GitHub repository and generates an analysis report for the user, which includes an analysis of the reused code in the project and warnings about possible software license violations.

Teddy: Automatic Recommendation of Pythonic Idiom Usage For Pull-Based Software Projects



Won the Best Tool Demo Award at ICSME 2020!

Phan-udom, P., Wattanakul, N., Sakulniwat, T., Ragkhitwetsagul, C., Sunetnanta, T., Choetkiertikul, M., & Kula, R. G.

Pythonic code is idiomatic code that follows guiding principles and practices within the Python community. Offering performance and readability benefits, Pythonic code is claimed to be widely adopted by experienced Python developers, but can present a steep learning curve for novice programmers. To aid Pythonic learning, we created an automated tool, called Teddy, that helps check Pythonic idiom usage. The tool offers a prevention mode with Just-In-Time analysis to recommend the use of Pythonic idioms during code review and a detection mode with historical analysis to run a thorough scan of idiomatic and non-idiomatic code.

In this work, we first describe our tool and an evaluation of its performance. Furthermore, we present a case study that demonstrates how to use Teddy in a real-life scenario on an open source project. The evaluation shows that Teddy has high precision for detecting Pythonic idioms and non-Pythonic code. Using interactive visualizations, we demonstrate how novice programmers can navigate and identify Pythonic idioms and non-Pythonic code in their projects.
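As a toy illustration of the kind of check such a tool performs (this is not Teddy's actual rule set; the function name and the rule are hypothetical), the following sketch flags one classic non-Pythonic pattern, `for i in range(len(xs))`, for which `enumerate(xs)` is the idiomatic alternative:

```python
import ast

def find_range_len_loops(source):
    """Return line numbers of `for i in range(len(...))` loops."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Match a for-loop whose iterable is range(len(<anything>)).
        if (isinstance(node, ast.For)
                and isinstance(node.iter, ast.Call)
                and isinstance(node.iter.func, ast.Name)
                and node.iter.func.id == "range"
                and len(node.iter.args) == 1
                and isinstance(node.iter.args[0], ast.Call)
                and isinstance(node.iter.args[0].func, ast.Name)
                and node.iter.args[0].func.id == "len"):
            findings.append(node.lineno)
    return findings

code = "for i in range(len(names)):\n    print(i, names[i])\n"
print(find_range_len_loops(code))  # → [1]
```

A real checker would pair each detection with a recommendation (here, rewriting the loop as `for i, name in enumerate(names):`).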

Our video demo with the full interactive visualizations is available at https://youtu.be/vOCQReSvBxA.

Siamese: Scalable and Incremental Code Clone Search via Multiple Code Representations


Journal-First Presentation at ICSE 2020
C. Ragkhitwetsagul, J. Krinke

This project develops a novel code clone search technique that is accurate, incremental, and scalable to hundreds of millions of lines of code. Our technique incorporates multiple code representations, query reduction, and a customised ranking function to improve clone search performance.

We implemented the technique in a clone search tool, called Siamese, and evaluated its search accuracy and scalability on three established clone data sets. Siamese offers the highest mean average precision of 95% and 99% on two clone benchmarks compared to seven state-of-the-art clone detection tools. Siamese is scalable and can return cloned code snippets within 8 seconds for a code corpus of 365 million lines of code. Using an index of 130,719 GitHub projects, we demonstrate that Siamese’s incremental indexing capability dramatically decreases the index preparation time for large-scale data sets with multiple releases of software projects.
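A minimal sketch of the multiple-representation idea, assuming a simple token-level abstraction scheme (this is not Siamese's actual implementation; the keyword set and placeholder names are illustrative): each fragment is indexed under several representations, so clones that rename identifiers or change literals can still match at the more abstract levels.

```python
import re

# Illustrative only: a token-level stand-in for the multiple code
# representations idea, not Siamese's actual implementation.
KEYWORDS = {"if", "else", "for", "while", "return", "int", "void"}

def tokenize(code):
    """Split code into identifiers, numbers, and punctuation tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

def representations(code):
    toks = tokenize(code)
    r0 = toks  # raw tokens
    # r1 abstracts identifiers; r2 additionally abstracts literals.
    r1 = ["ID" if (t[0].isalpha() or t[0] == "_") and t not in KEYWORDS else t
          for t in toks]
    r2 = ["LIT" if t.isdigit() else t for t in r1]
    return {"r0": r0, "r1": r1, "r2": r2}

a = representations("int total = count + 1 ;")
b = representations("int sum = n + 2 ;")
print(a["r0"] == b["r0"])  # False: raw tokens differ
print(a["r2"] == b["r2"])  # True: identical after abstraction
```

Indexing each fragment under all representations lets a ranking function reward matches at the raw level while still retrieving candidates that only match after abstraction.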

We successfully applied Siamese to facilitate software development and research in two use cases: online code clone detection and clone search with automated license analysis.

Toxic Code Snippets on Stack Overflow


Journal-First Presentation at ESEC/FSE 2019
C. Ragkhitwetsagul, J. Krinke, M. Paixao, G. Bianco, R. Oliveto

Online code clones are code fragments that are copied from software projects or online sources to Stack Overflow as examples. Because there is no checking mechanism after the code has been copied to Stack Overflow, these clones can become toxic code snippets, i.e., they suffer from being outdated or violating the original software license. We study online code clones on Stack Overflow (SO) and their toxicity.

Our surveys of 201 high-reputation Stack Overflow answerers and 87 Stack Overflow visitors showed that 131 answerers have been notified of outdated code at least once, and 26 of them rarely or never fix the code. 69% of them never check for licensing conflicts between their copied code snippets and SO’s CC BY-SA 3.0 license. The visitors experienced several issues with SO answers: mismatched solutions, outdated solutions, incorrect solutions, and buggy code. 85% of them are not aware of SO's CC BY-SA 3.0 license, and 66% never check for license conflicts when reusing code snippets.

Our clone detection found online clone pairs between 72,365 Java code snippets on Stack Overflow and 111 open source projects in the curated Qualitas corpus. We analysed 2,289 non-trivial online clone candidates and revealed strong evidence that 153 clones have been copied from a Qualitas project to Stack Overflow. We found 100 of them (66%) to be outdated and potentially harmful for reuse. Furthermore, we found 214 code snippets that could potentially violate the license of their original software and appear 7,112 times in 2,427 GitHub projects.

Image-based Code Clone Detection



Won the People's Choice Award! at Int. Workshop on Software Clones 2018

C. Ragkhitwetsagul, J. Krinke, B. Marnette

We introduce a new code clone detection technique based on image similarity. The technique captures the visual perception of code as seen by humans in an IDE by applying syntax highlighting and image conversion to raw source code text.

We compared two similarity measures, Jaccard and earth mover’s distance (EMD) for our image-based code clone detection technique. Jaccard similarity offered better detection performance than EMD. The F1 score of our technique on detecting Java clones with pervasive code modifications is comparable to five well-known code clone detectors: CCFinderX, Deckard, iClones, NiCad, and Simian. A Gaussian blur filter is chosen as a normalisation technique for type-2 and type-3 clones.

We found that blurring code images before similarity computation resulted in higher precision and recall. The detection performance after including the blur filter increased by 1 to 6 percent. The manual investigation of clone pairs in three software systems revealed that our technique, while it missed some of the true clones, could also detect additional true clone pairs missed by NiCad.
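The comparison step can be sketched as follows, under heavy simplification (all names and the toy bitmaps are illustrative, not the paper's implementation): the rendered image is reduced to a 0/1 pixel grid, the Gaussian blur is approximated by a box dilation, and similarity is the Jaccard index over on-pixel coordinates.

```python
def dilate(img):
    """Approximate a blur by switching on the neighbours of each on-pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

def jaccard(a, b):
    """Jaccard similarity over the sets of on-pixel coordinates."""
    sa = {(y, x) for y, row in enumerate(a) for x, v in enumerate(row) if v}
    sb = {(y, x) for y, row in enumerate(b) for x, v in enumerate(row) if v}
    return len(sa & sb) / len(sa | sb)

img1 = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 1, 0]]
img2 = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0]]  # same shape, shifted left
print(jaccard(img1, img2))                  # → 0.25
print(jaccard(dilate(img1), dilate(img2)))  # → 0.75: blur raises the overlap
```

The example shows the effect reported above: two images of the same shape that are slightly misaligned score low on raw pixels but much higher after blurring.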

A Comparison of Code Similarity Analysers


Empirical Software Engineering Journal

C. Ragkhitwetsagul, J. Krinke, D. Clark

We evaluate 30 code similarity detection techniques and tools using five experimental scenarios for Java source code. These are (1) pervasively modified code, created with tools for source code and bytecode obfuscation, and boiler-plate code, (2) source code normalisation through compilation and decompilation using different decompilers, (3) reuse of optimal configurations over different data sets, (4) tool evaluation using ranked-based measures, and (5) local + global code modifications. Our experimental results show that in the presence of pervasive modifications, some of the general textual similarity measures can offer similar performance to specialised code similarity tools, while in the presence of boiler-plate code, highly specialised source code similarity detection techniques and tools outperform textual similarity measures. Our study strongly validates the use of compilation/decompilation as a normalisation technique. Its use reduced false classifications to zero for three of the tools. Moreover, we demonstrate that optimal configurations are very sensitive to a specific data set. After directly applying optimal configurations derived from one data set to another, the tools perform poorly on the new data set. The code similarity analysers are thoroughly evaluated not only based on several well-known pair-based and query-based error measures but also on each specific type of pervasive code modification.

Using Compilation-Decompilation to Enhance Clone Detection



Accepted at IWSC 2017 and won the People's Choice Award!

C. Ragkhitwetsagul, J. Krinke

We study the effects of compilation and decompilation on code clone detection in Java. Compilation and decompilation canonicalise syntactic changes made to source code and can be used for source code normalisation. We used NiCad to detect clones before and after decompilation in three open source software systems: JUnit, JFreeChart, and Tomcat. We filtered and compared the clones in the original and decompiled clone sets and found that 1,201 clone pairs (78.7%) are common between the two sets while 326 pairs (21.3%) are only in one of the sets. A manual investigation identified 325 of the 326 pairs as true clones. The 252 original-only clone pairs contain a single false positive, while the 74 decompiled-only clone pairs are all true positives. Many clones in the original source code that are detected only after decompilation are type-3 clones that are difficult to detect due to added or deleted statements, keywords, or package names; flipped if-else statements; or changed loops. We suggest using decompilation as normalisation to complement clone detection. By combining clones found before and after decompilation, one can achieve higher recall without losing precision.

Measuring Code Similarity in Large-scaled Code Corpora


Accepted at ICSME 2016 Doctoral Symposium!

C. Ragkhitwetsagul

Numerous tools for measuring code similarity have become available over the past decades. These tools were created for specific use cases and come with several parameters that are sensitive to the data set and have to be tuned carefully to obtain optimal performance. We evaluated 30 similarity analysers for source code similarity and found that specialised tools such as clone and plagiarism detectors, with proper parameter tuning, outperform general techniques such as string matching. Unfortunately, although these specialised tools can handle code similarity in a local code base, they fail to locate similar code artefacts in large-scale corpora. This challenge is important since the amount of online source code is rising and, at the same time, is frequently reused. Hence, we propose a scalable search system specifically designed for source code. Our proposed code search framework is based on information retrieval, tokenisation, code normalisation, and variable-length n-grams. This framework will be able to locate similar code artefacts based not only on textual similarity, but also on syntactic and structural similarity. It is also resilient to the incomplete code fragments commonly found on the Internet.
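The variable-length-gram part of the proposal can be sketched as follows (hypothetical helper names; a real system would use an inverted index in an information retrieval engine rather than an in-memory dictionary): token sequences are indexed as overlapping n-grams of several sizes, so short grams can still match incomplete fragments while longer grams reward closer matches.

```python
from collections import defaultdict

def ngrams(tokens, sizes=(2, 3, 4)):
    """Overlapping n-grams of several sizes over a token sequence."""
    grams = []
    for n in sizes:
        grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return grams

def build_index(docs):
    """Inverted index: gram -> set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, tokens in docs.items():
        for g in ngrams(tokens):
            index[g].add(doc_id)
    return index

def search(index, query_tokens):
    """Rank documents by the number of query grams they share."""
    hits = defaultdict(int)
    for g in ngrams(query_tokens):
        for doc_id in index[g]:
            hits[doc_id] += 1
    return sorted(hits, key=hits.get, reverse=True)

docs = {
    "a.java": "for ( int i = 0 ; i < n ; i ++ )".split(),
    "b.java": "while ( x > 0 ) { x -- ; }".split(),
}
index = build_index(docs)
print(search(index, "int i = 0 ;".split()))  # → ['a.java']
```

Even though the query is an incomplete fragment, its short grams overlap the for-loop in `a.java` and none in `b.java`, so only the genuine match is returned.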

Similarity of Source Code in the Presence of Pervasive Modifications


Accepted at SCAM 2016!

C. Ragkhitwetsagul, J. Krinke, D. Clark


Source code analysis to detect code cloning, code plagiarism, and code reuse suffers from the problem of pervasive code modifications, i.e. transformations that may have a global effect. We compare 30 similarity detection techniques and tools against pervasive code modifications. We evaluate the tools using two experimental scenarios for Java source code. These are (1) pervasive modifications created with tools for source code and bytecode obfuscation and (2) source code normalisation through compilation and decompilation using different decompilers. Our experimental results show that highly specialised source code similarity detection techniques and tools can perform better than more general, textual similarity measures. Our study strongly validates the use of compilation/decompilation as a normalisation technique. Its use reduced false classifications to zero for six of the tools. This broad, thorough study is the largest in existence and potentially an invaluable guide for future users of similarity detection in source code.





Searching for Configurations in Clone Evaluation: A Replication Study


Accepted at SSBSE 2016!

C. Ragkhitwetsagul, M. Paixao, M. Adham, S. Busari, J. Krinke and J.H. Drake


Clone detection is the process of finding duplicated code within a software code base in an automated manner. It is useful in several areas of software development such as code quality analysis, bug detection, and program understanding. We replicate a study of a genetic-algorithm-based framework that optimises parameters for clone agreement (EvaClone). We apply the framework to 14 releases of Mockito, a Java mocking framework. We observe that the optimised parameters outperform the tools’ default parameters in terms of clone agreement by 19.91% to 66.43%. However, the framework gives undesirable results in terms of clone quality: EvaClone either maximises or minimises the number of clones in order to achieve the highest agreement, consequently introducing more false positives or false negatives.









Honeydew: Predicting meeting dates using machine learning algorithms

Project from master's degree
C. Ragkhitwetsagul

Today, scheduling meetings via email is widespread. However, creating a meeting in a user’s calendar can be tedious. Honeydew is an intelligent agent that uses machine-learning algorithms to extract all required information from a meeting email. It then creates suggestions for the user, who can verify them and proceed to place the meeting in their calendar. The agent can dramatically reduce the user’s workload of extracting information from emails and putting it into a calendar application. The agent also automatically improves its performance by learning from suggestions corrected by human users. This paper discusses an algorithm used for predicting meeting dates from the content of emails. It also includes the evaluation results of the system.




FoxBeacon: Web Bug Detector Implementing P3P Compact Policy for Mozilla Firefox

Project from master's degree
C. Ragkhitwetsagul

This paper describes the design, implementation, and evaluation of a web bug detector called “FoxBeacon.” The detector is designed as an extension for Mozilla Firefox. After being installed, FoxBeacon embeds itself into the Firefox browser and acts as a proxy. It reads every incoming web page and tries to find hidden web bugs, treating an image as a web bug according to pre-defined rules. In addition, FoxBeacon includes the compact policy, one component of the Platform for Privacy Preferences (P3P), to incorporate and provide more information to its users. This paper also covers the evaluation of FoxBeacon’s functionalities and its results. The ultimate goal of this project is to balance minimal disruption to the browser’s users against the most effective way of making them aware of hidden web bugs. In addition, reducing the number of false negatives and false positives is another important goal.
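One classic web bug heuristic in the spirit of such pre-defined rules (this sketch is illustrative, not FoxBeacon's actual rule set, and the class name is hypothetical) flags tiny 1x1 images loaded from a host other than the page's own:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class WebBugFinder(HTMLParser):
    """Collects <img> tags that look like tracking pixels."""

    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        tiny = a.get("width") == "1" and a.get("height") == "1"
        host = urlparse(a.get("src", "")).netloc
        third_party = host != "" and host != self.page_host
        if tiny and third_party:
            self.suspects.append(a["src"])

html = ('<img src="http://tracker.example.net/t.gif" width="1" height="1">'
        '<img src="http://www.example.com/logo.png" width="80" height="40">')
finder = WebBugFinder("www.example.com")
finder.feed(html)
print(finder.suspects)  # → ['http://tracker.example.net/t.gif']
```

A proxy-based detector would apply rules like this to every page it relays, then surface the suspects to the user instead of silently loading them.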

Evaluating Genetic Algorithm for selection of similarity functions for record linkage

Project from master's degree
F. Shaikh and C. Ragkhitwetsagul

Machine learning algorithms have been successfully employed to solve the record linkage problem. Machine learning casts record linkage as a classification problem by training a classifier that classifies two records as duplicates or unique. Irrespective of the machine learning algorithm used, the initial step in training a classifier involves selecting a set of similarity functions to be applied to each attribute to obtain a similarity measure. Usually this is done manually with input from a domain expert. We evaluate an approach in which the optimal combination of similarity functions for a given type of input data records is searched for using genetic algorithms.
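The search can be sketched as a small genetic algorithm (the data, threshold, and similarity functions below are hypothetical toys, not the study's setup): a chromosome assigns one similarity function to each attribute, and fitness is classification accuracy on labeled record pairs.

```python
import random
from difflib import SequenceMatcher

random.seed(7)  # deterministic toy run

def exact(a, b): return 1.0 if a == b else 0.0
def edit_ratio(a, b): return SequenceMatcher(None, a, b).ratio()
def token_jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

FUNCS = [exact, edit_ratio, token_jaccard]

# Labeled pairs of (name, city) records: True means duplicate.
PAIRS = [
    (("John A Smith", "NYC"), ("Jon A Smith", "NYC"), True),
    (("Mary Jones", "Boston"), ("Mary Jones", "Bostn"), True),
    (("Alice Brown", "Austin"), ("Bob Green", "Dallas"), False),
    (("Tom White", "Miami"), ("Tim Black", "Reno"), False),
]

def fitness(chromosome):
    """Accuracy when each attribute uses its chosen similarity function."""
    correct = 0
    for r1, r2, dup in PAIRS:
        score = sum(FUNCS[g](a, b) for g, a, b in zip(chromosome, r1, r2))
        correct += (score / len(chromosome) > 0.6) == dup
    return correct / len(PAIRS)

def evolve(generations=20, pop_size=8):
    pop = [[random.randrange(len(FUNCS)) for _ in range(2)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = random.sample(survivors, 2)
            child = [random.choice(g) for g in zip(p1, p2)]  # crossover
            if random.random() < 0.2:                        # mutation
                child[random.randrange(2)] = random.randrange(len(FUNCS))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([FUNCS[g].__name__ for g in best], fitness(best))
```

The GA replaces the domain expert: instead of hand-picking a similarity function per attribute, the combination is evolved against labeled training pairs.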







Hercules File System:
A Scalable Fault Tolerant Distributed File System

Project from master's degree
F. Shaikh, C. Ragkhitwetsagul, A.R. Pathan


We introduce the design of the Hercules File System (HFS), a distributed file system with a scalable metadata server (MDS) cluster and a scalable, fault-tolerant data server (DS) cluster. HFS allows metadata and data servers to be dynamically added to the cluster even after the initial setup, while the system is up and running, without disrupting the normal operations carried out by the file system. The file system is also fault-tolerant and can serve clients in the event of DS and MDS failures. We also designed a Health Monitor, a GUI tool that monitors the state of the file system's servers and gives a run-time visualization of the operations requested by clients.