I have a new journal article accepted to the IEEE Transactions on Neural Networks and Learning Systems (official link; preprint). The work focuses on detecting the optimal number of features for a generic subset selection algorithm via a post-hoc Neyman-Pearson hypothesis test. You can find a MATLAB implementation on GitHub.
My PhD research proposal was approved by my committee on April 11, 2014. The general topic is to develop a computationally efficient sequential learning framework, suitable for large-scale or streaming data sets, that can determine the most relevant features for a user-defined objective function given no prior information. Some wrapper and embedded methods can select the most important features with little to no prior information; however, such methods must also learn the classifier parameters, which can be computationally burdensome or intractable for incremental learning or massive data sets. One goal of the proposed research is to develop a generalizable sequential learning subset selection (SLSS) framework that selects the features most relevant to any objective function and can be paired with incremental learning algorithms. Such an approach has been largely under-explored, and hence conspicuously missing from the literature, despite an ever-increasing number of applications that need fast computation and flexibility in optimization.
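For contrast, a classic greedy forward-selection loop against a user-defined objective looks like the sketch below (the function names and toy objective are hypothetical; SLSS itself is more general and need not be greedy):

```python
def forward_select(features, objective, k):
    """Greedy sequential forward selection: at each step, add the
    feature whose inclusion most improves a user-defined objective.
    Illustrative only -- not the proposed SLSS framework."""
    selected = []
    remaining = list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: objective(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy objective that rewards a known "relevant" subset and
# lightly penalizes subset size.
relevant = {"f1", "f3"}
score = lambda subset: len(relevant & set(subset)) - 0.01 * len(subset)
chosen = forward_select(["f0", "f1", "f2", "f3"], score, 2)
```

Note that `objective` here is a black box, which is the appeal of wrapper-style selection: any user-defined criterion can be plugged in.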
A manuscript I wrote with my advisors, Gail Rosen and Robi Polikar, has been accepted to appear in the proceedings of the International Joint Conference on Neural Networks (IJCNN). This manuscript continues the work we presented at CIDUE in 2013, which examined the loss of a multiple expert system (MES) making predictions under the assumption of concept drift.
In this latest manuscript, we dive further into the analysis from the previous work and derive a tighter upper bound on the loss of the MES. We also provide experiments on real-world data streams that support the analysis.
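To give a flavor of the setting, here is a minimal weighted-majority style MES evaluated on a binary stream, returning its cumulative 0-1 loss. This is an illustrative sketch of the kind of ensemble whose loss such bounds describe, not the exact model analyzed in the paper:

```python
import numpy as np

def weighted_majority_loss(expert_preds, labels, beta=0.5):
    """Run a simple weighted-majority multiple expert system over a
    stream of binary labels and return its cumulative 0-1 loss.
    Illustrative sketch only; the paper's MES and bound differ."""
    n_experts = expert_preds.shape[0]
    w = np.ones(n_experts)
    loss = 0
    for t, y in enumerate(labels):
        preds = expert_preds[:, t]
        # Weighted vote over {0, 1}; ties go to class 1.
        yhat = int(w[preds == 1].sum() >= w[preds == 0].sum())
        loss += int(yhat != y)
        w[preds != y] *= beta  # down-weight mistaken experts
    return loss

# Two experts on a three-step stream: expert 0 is always right.
preds = np.array([[1, 1, 1],
                  [0, 0, 0]])
stream_loss = weighted_majority_loss(preds, [1, 1, 1])
```

The multiplicative down-weighting is what drives classic weighted-majority loss bounds; under concept drift, the analysis must additionally account for experts whose accuracy changes over time.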
I have released the code for this manuscript on my GitHub page.
On a slightly odder note, I was making a presentation on PCoA and PCA in Python, and I wanted to put a funny picture at the end of the talk (right before we open up the shell and start writing code). I couldn’t help but think of the XKCD post on Python. It turns out that you can import antigravity into Python! Give this a try: