

Below are reports my team members and I have produced throughout my academic career.

Master's Thesis: Application of Neural Networks in Stock Market Prediction

4 October 2018

Within modern economies, long-term financial wealth is essential for prosperity, health and wellbeing. As such, the ability to model and advise on investment strategies is important for both individuals and large corporations. These models often target the stock market: with over $77 trillion (USD) traded annually [1], there is significant potential to build long-term wealth. In the past, simplistic rules-based strategies were used; however, given increased computational power and advances in Machine Learning, new methods are being explored.

This research assessed the feasibility and profitability of Deep Reinforcement Learning for optimising stock market portfolios. This was achieved by running a Deep Q model in a simulated trading environment and comparing it against a range of 'Manual' (rule-based) and 'Random' methods. Manual rule-based Technical Indicators proved the best performing, with a mean annual profit of 142%, while the Deep Q method achieved only 25%. The Deep Q method surpassed bank-interest returns in 75% of the simulations and was profitable in 85% of simulations. Furthermore, the model's profit increased with additional training cycles, indicating that with more training time and computational resources it has the potential for improved results.
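To make the Q-learning idea above concrete, here is a minimal sketch of the trading loop. The thesis used a Deep Q network; in this sketch a small lookup table stands in for the network so the core update rule is visible. The discretised states, reward shaping and synthetic price series are all illustrative assumptions, not the thesis's actual environment.

```python
import random

ACTIONS = ["buy", "hold", "sell"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def state(prices, t):
    """Discretise the market into 'up'/'down' based on the last move."""
    return "up" if prices[t] >= prices[t - 1] else "down"

def reward(action, price_change):
    """Reward buying before gains and selling before losses."""
    if action == "buy":
        return price_change
    if action == "sell":
        return -price_change
    return 0.0

def train(prices, episodes=200, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("up", "down") for a in ACTIONS}
    for _ in range(episodes):
        for t in range(1, len(prices) - 1):
            s = state(prices, t)
            if rng.random() < EPSILON:          # explore
                a = rng.choice(ACTIONS)
            else:                               # exploit
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            r = reward(a, prices[t + 1] - prices[t])
            s2 = state(prices, t + 1)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

# Synthetic oscillating-but-rising series; after a 'down' step the next
# move is always up, so the agent should learn to buy in that state.
prices = [100 + 0.5 * t + (t % 3) for t in range(60)]
q = train(prices)
```

A Deep Q network replaces the `q` dictionary with a neural network approximating Q-values from continuous market features, which is what allows the approach to scale beyond toy state spaces.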

Honours Thesis: Application of Neural Networks in Stock Market Prediction

12 October 2016

In the last 50 years life expectancy has increased by 30% due to improvements in technology, medicine and standard of living. As a result, the need for support, and the financial strain of an ageing population, has grown. A profitable, high-performing investment strategy for retirement funds would help support individuals long after retirement. Currently, large superannuation funds rely on risk analysis, spreadsheet modelling and mathematical computation to identify profitable investments. Meanwhile, within the fields of computer science and mechatronic engineering, artificial intelligence has flourished. One such technique is the neural network, a modelling tool able to predict stock market fluctuations fairly accurately.

By utilising this prediction capability, combined with dynamic decision-making methods, a system can be developed to provide long-term profits for the ageing population. The designed system loads historical data for a specified stock, trains a Neural Network on this input, and then runs a trading simulation driven by the predictions and a range of decision methods. Initial testing shows the system has a strong basis; however, risk is challenging to account for and the system is difficult to implement in a real trading environment. If implemented correctly within a commission-free trading environment with daily transactions, it is expected to make up to 36% annual profit. These results not only surpass those available in the literature; the process was also run across more stocks and a larger range of decision methods, and demonstrates the power of pure Neural Network prediction. Furthermore, the practical capability of a Neural Network trading system is outlined; this research is the link between the literature and a real trading market implementation.
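The load-data, train-predictor, simulate-trading pipeline described above can be sketched as follows. A linear model trained by gradient descent stands in for the thesis's Neural Network so the whole pipeline fits in a few lines; the window size, learning rate and synthetic price series are illustrative assumptions.

```python
def fit_linear(prices, window=3, lr=1e-4, epochs=500):
    """Fit next-price prediction from the last `window` prices by SGD."""
    w = [0.0] * window
    b = 0.0
    for _ in range(epochs):
        for t in range(window, len(prices)):
            x = prices[t - window:t]
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - prices[t]
            for i in range(window):          # gradient step on squared error
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def simulate(prices, w, b, window=3, cash=1000.0):
    """Commission-free simulation: buy on a predicted rise, else sell."""
    shares = 0.0
    for t in range(window, len(prices)):
        x = prices[t - window:t]
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        if pred > prices[t - 1] and cash > 0:       # predicted rise: go long
            shares, cash = cash / prices[t - 1], 0.0
        elif pred <= prices[t - 1] and shares > 0:  # predicted fall: exit
            cash, shares = shares * prices[t - 1], 0.0
    return cash + shares * prices[-1]               # final portfolio value

prices = [10 + 0.1 * t for t in range(50)]  # synthetic rising market
w, b = fit_linear(prices)
final = simulate(prices, w, b)
```

Swapping `fit_linear` for a trained Neural Network, and the single if/else for a richer set of decision methods, yields the structure of the system evaluated in the thesis.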

Big Data Classification

7 May 2017

The primary educational aim of this study was to provide exposure to a range of topics: classification, sparse data and high dimensionality. Classification problems are common in machine learning, as many problems reduce to measuring similarity and assigning labels. The data provided for this study was high dimensional with sparse non-zero values, a situation typical of text classification problems. Finally, the high dimensionality of the feature space gave insight into brute-force feature selection and the cost of naïve search processes (K Nearest Neighbours), encouraging researchers to focus on efficiency rather than accuracy alone.
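A minimal sketch of the naïve K Nearest Neighbours search mentioned above, using dict-based sparse vectors (feature index to value) as one typically would for sparse text features. The toy data and function names are illustrative assumptions; the point is that the brute-force scan touches every training point per query, which is what makes naïve search expensive at high dimensionality.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(train, labels, query, k=3):
    """Brute-force scan of every training point: O(n * d) per query."""
    order = sorted(range(len(train)),
                   key=lambda i: cosine(train[i], query), reverse=True)
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Two sparse classes: class 0 lives on features {0, 1}, class 1 on {90, 91}.
train = [{0: 1.0, 1: 2.0}, {0: 2.0, 1: 1.0}, {1: 3.0},
         {90: 1.0, 91: 2.0}, {90: 2.0}, {91: 1.0}]
labels = [0, 0, 0, 1, 1, 1]
pred = knn_predict(train, labels, {0: 1.5, 1: 1.5})
```

The sparse-dict representation keeps the similarity computation proportional to the number of non-zero features rather than the full dimensionality, which is the standard mitigation for data of this shape.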

The team included myself, Arjun Sathasivam & Kristopher Lopez.

Sea Lion Detection

29 May 2017

Background

The National Oceanic and Atmospheric Administration (NOAA), a United States Government agency, is tasked with monitoring and preserving the condition of the oceans and the flora and fauna that inhabit them. NOAA has noticed a decline in the Sea Lion population on the western North American coast. It has been monitoring the populations by taking aerial images of the colonies with drones and manually tracking the numbers and types (adult males, sub-adult males, adult females, juveniles, and pups). This process is essential to understanding how pups (infants) mature into adults, where the problem lies, and why the population is declining. NOAA currently has access to supporting data but is unable to analyse and process what is available, and has therefore turned to the data science community for help. By automating the Sea Lion identification, tracking and tallying process, resources can be freed from these manual tasks to focus on higher-order problem solving: what is causing the decline, and how to improve the situation.

Research Question

Given an aerial image of a Sea Lion colony, determine how many individuals of each of the five types (adult males, sub-adult males, adult females, juveniles, and pups) are present.

CIFAR10 Classification

4 June 2017

CIFAR-10 is a well-known collection of low-resolution, labelled images of ten common object classes. This body of research attempts to maximise the accuracy of assigning each image its correct class. The problem is therefore one of computer vision: how well can software identify an object? The task involves the complexities of large data sets, complex matrix structures and analytic models that can handle a high number of parameters.

The team included myself, Arjun Sathasivam & Kristopher Lopez.

Protein Substrate Relationship Prediction

1 November 2017

In this project we studied and applied different techniques learned throughout the course to predict novel kinase-substrate relationships, more specifically the substrates of Akt and mTOR, using correlated patterns observed in the features of the phosphoproteomics data provided.

Predictions were produced using an ensemble model based on motifs in the amino acid sequence, as well as the magnitudes and timings of responses to treatments. The predicted probabilities were broadly similar to those produced in the 2016 study: correlations of our latest Akt and mTOR predictions with the prior year's were 80% and 77% respectively.
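The two steps above can be sketched in a few lines: average the predicted probabilities from several base models into an ensemble score, then compare two years' predictions with a Pearson correlation. The base-model names and numbers are illustrative assumptions, not the study's actual Akt/mTOR outputs.

```python
from math import sqrt

def ensemble(prob_lists):
    """Average the per-substrate probabilities from each base model."""
    return [sum(ps) / len(ps) for ps in zip(*prob_lists)]

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

motif_model = [0.9, 0.2, 0.7, 0.1]     # e.g. sequence-motif scores
response_model = [0.8, 0.4, 0.6, 0.2]  # e.g. treatment-response scores
this_year = ensemble([motif_model, response_model])
prior_year = [0.85, 0.25, 0.70, 0.10]  # hypothetical prior-year predictions
r = pearson(this_year, prior_year)
```

Averaging probabilities is the simplest ensembling rule; weighted or stacked combinations are common refinements when the base models differ in reliability.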

The team included myself, Andy Huang, Dan Elias, Kristopher Lopez & Samuel Bolivar.

VAST Challenge 2017 - Visual Analytics Platform

2 November 2017

In March 2017, the Institute of Electrical and Electronics Engineers ('IEEE') Conference on Visual Analytics Science and Technology ('VAST') announced its annual competition for the Visual Analytics community to design interactive systems that help solve conceptual environmental problems.

A fictional environmental problem, called 'Mini Challenge 2' ('MC2'), was the focus of this project, code-named 'Gaia'.

The project required the design and development of a Visual Analytics (VA) system that enables a new user to visually manipulate the data and gain the insights needed to answer the challenge questions.

