High-Dimensional Bayesian Regularised Regression with the BayesReg Package

In conjunction with Enes Makalic, I have recently finished writing MATLAB and R code implementing efficient, high-dimensional Bayesian regression with continuous shrinkage priors. The package is very flexible, fast and highly numerically stable, particularly in the case of the horseshoe and horseshoe+ priors, whose heavy tails cause problems for most other implementations. It supports the following data models:

  1. Gaussian (“L2 errors”)
  2. Laplace (“L1 errors”)
  3. Student-t (very heavy tails)
  4. Logistic regression (binary data)

It also supports a range of state-of-the-art continuous shrinkage priors to handle different underlying regression model structures:

  1. Ridge regression (“L2” shrinkage/regularisation)
  2. LASSO regression (“L1” shrinkage/regularisation)
  3. Horseshoe regression (global-local shrinkage for sparse models)
  4. Horseshoe+ regression (global-local shrinkage for ultra-sparse models)
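
To make the interface concrete, here is a minimal sketch in R of fitting a sparse Gaussian regression with a horseshoe prior. The simulated data and the specific argument values (model = "gaussian", prior = "horseshoe") reflect my reading of the CRAN documentation rather than a definitive specification, so check ?bayesreg for the exact options.

```r
# Minimal sketch: sparse linear regression with a horseshoe prior.
# The 'model' and 'prior' argument values below are my reading of the
# CRAN documentation; see ?bayesreg for the definitive interface.
library(bayesreg)

# Simulated sparse problem: 20 predictors, only 3 truly non-zero
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(2, -2, 1.5, rep(0, p - 3))
y <- as.vector(X %*% beta + rnorm(n))
df <- data.frame(X, y = y)

# The data model and shrinkage prior are selected by 'model' and 'prior'
fit <- bayesreg(y ~ ., data = df, model = "gaussian", prior = "horseshoe")

# Posterior summaries of the regression coefficients
summary(fit)
```

Swapping the model or prior argument (for example, a Laplace data model or the horseshoe+ prior) selects the other combinations listed above without changing the rest of the call.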

The MATLAB code for Version 1.2 of the package can be downloaded here, and the R code can be obtained from CRAN under the package name “bayesreg”. The R package can also be installed from within R with the command install.packages(“bayesreg”). If you use the package and wish to cite it in your work, please use the reference below.
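
Once the package is installed, R's generic citation() utility (a base-R facility, not something specific to bayesreg) will also report whatever citation metadata the installed version provides:

```r
# Install from CRAN, load the package, and retrieve its citation information.
# citation() is a standard base-R utility that works for any installed package.
install.packages("bayesreg")
library(bayesreg)
citation("bayesreg")
```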


References

  1. “High-Dimensional Bayesian Regularised Regression with the BayesReg Package”, E. Makalic and D. F. Schmidt, arXiv:1611.06649 [stat.CO], 2016

Australian AI 2016

I have just returned from the 29th Australasian Joint Conference on Artificial Intelligence, held in Hobart, Tasmania, Australia from the 5th to the 9th of December. This conference is usually an interesting, open and friendly environment in which to discuss topics in applied machine learning, and this year was no different. There was quite a focus on “deep learning”, as would be expected given the current hype surrounding this neural network revival, but a number of other interesting topics were also covered in the technical sessions.

I presented, or was involved with the presentation of, three papers: “Approximating Message Lengths of Hierarchical Bayesian Models Using Posterior Sampling”, “Bayesian Robust Regression with the Horseshoe+ Estimator” and “Bayesian Grouped Horseshoe Regression with Application to Additive Models”, all quite “horseshoe”-centric, given my current interest in global-local shrinkage models.

If you are an Australian (or international) researcher with interests in applied machine learning and artificial intelligence, I recommend giving this conference a visit at some point. Next year's edition is particularly attractive, as it coincides with IJCAI and is being held in Melbourne.