Class Linear

java.lang.Object
de.bwaldvogel.liblinear.Linear

public class Linear extends Object

Java port of liblinear

The usage should be pretty similar to the C version of liblinear.

Please consider reading the README file of liblinear.

The port was done by Benedikt Waldvogel (mail at bwaldvogel.de)

Version:
2.44
  • Field Details

    • VERSION

      static final int VERSION
    • FILE_CHARSET

      static final Charset FILE_CHARSET
    • DEFAULT_LOCALE

      private static final Locale DEFAULT_LOCALE
    • OUTPUT_MUTEX

      private static final Object OUTPUT_MUTEX
    • DEBUG_OUTPUT

      private static PrintStream DEBUG_OUTPUT
  • Constructor Details

    • Linear

      public Linear()
  • Method Details

    • crossValidation

      public static void crossValidation(Problem prob, Parameter param, int nr_fold, double[] target)
      Parameters:
      target - output array of length prob.l that receives the predicted labels
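      A minimal usage sketch (the problem and param objects are assumed to be prepared elsewhere; the fold count of 5 is arbitrary):

        // Sketch, not part of the API: assumes `problem` and `param` are already set up.
        double[] target = new double[problem.l];            // one predicted label per instance
        Linear.crossValidation(problem, param, 5, target);  // 5-fold cross-validation

        int correct = 0;
        for (int i = 0; i < problem.l; i++) {
            if (target[i] == problem.y[i]) {
                correct++;
            }
        }
        System.out.printf("cross-validation accuracy: %.2f%%%n", 100.0 * correct / problem.l);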
    • findParameters

      public static ParameterSearchResult findParameters(Problem prob, Parameter param, int nr_fold, double start_C, double start_p)
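      A hedged usage sketch; passing negative start values so that the search chooses its own starting points mirrors the C version's convention and is an assumption here:

        // Sketch: assumes `problem` and `param` are already set up; 5-fold cross-validation.
        ParameterSearchResult result = Linear.findParameters(problem, param, 5, -1.0, -1.0);
        // `result` carries the best C (and, for regression solvers, the best p) found by the search.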
    • groupClasses

      private static Linear.GroupClassesReturn groupClasses(Problem prob, int[] perm)
    • info

      static void info(String message)
    • info

      static void info(String format, Object... args)
    • atof

      static double atof(String s)
      Parameters:
      s - the string to parse for the double value
      Throws:
      IllegalArgumentException - if s is empty or represents NaN or Infinity
      NumberFormatException - see Double.parseDouble(String)
    • atoi

      static int atoi(String s) throws NumberFormatException
      Parameters:
      s - the string to parse for the integer value
      Throws:
      IllegalArgumentException - if s is empty
      NumberFormatException - see Integer.parseInt(String)
    • loadModel

      public static Model loadModel(Reader inputReader) throws IOException
      Loads the model from inputReader. It uses Locale.ENGLISH for number formatting.

      Note: The inputReader is NOT closed after reading or in case of an exception.

      Throws:
      IOException
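      Because the reader is left open, the caller is responsible for closing it, for example with try-with-resources (the file name below is illustrative):

        // Sketch: uses java.nio.file.Files/Paths and java.nio.charset.StandardCharsets.
        try (Reader reader = Files.newBufferedReader(Paths.get("model.txt"), StandardCharsets.ISO_8859_1)) {
            Model model = Linear.loadModel(reader);
            // ... use the model ...
        }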
    • loadModel

      public static Model loadModel(File modelFile) throws IOException
      Deprecated.
      Use loadModel(Path) instead.
      Loads the model from the file using the ISO-8859-1 charset. It uses Locale.ENGLISH for number formatting.
      Throws:
      IOException
    • loadModel

      public static Model loadModel(Path modelPath) throws IOException
      Loads the model from the file using the ISO-8859-1 charset. It uses Locale.ENGLISH for number formatting.
      Throws:
      IOException
    • predict

      public static double predict(Model model, Feature[] x)
    • predictProbability

      public static double predictProbability(Model model, Feature[] x, double[] prob_estimates) throws IllegalArgumentException
      Throws:
      IllegalArgumentException - if model is not probabilistic (see Model.isProbabilityModel())
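      A hedged sketch of the typical call pattern; the feature values are illustrative, and prob_estimates must have one entry per class:

        // Sketch: `model` is assumed to come from Linear.train or Linear.loadModel.
        Feature[] instance = { new FeatureNode(1, 0.5), new FeatureNode(3, 1.0) }; // indices ascending
        if (model.isProbabilityModel()) {
            double[] probEstimates = new double[model.getNrClass()];
            double label = Linear.predictProbability(model, instance, probEstimates);
            // probEstimates[i] is the estimated probability of the class model.getLabels()[i]
        } else {
            double label = Linear.predict(model, instance);
        }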
    • predictValues

      public static double predictValues(Model model, Feature[] x, double[] dec_values)
    • printf

      static void printf(Formatter formatter, String format, Object... args) throws IOException
      Throws:
      IOException
    • saveModel

      public static void saveModel(Writer modelOutput, Model model) throws IOException
      Writes the model to the modelOutput. It uses Locale.ENGLISH for number formatting.

      Note: The modelOutput is closed after writing or in case of an exception.

      Throws:
      IOException
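      Since the writer is closed by this method, it can be handed over directly (the path and charset below are illustrative; the File/Path overloads use ISO-8859-1):

        // Sketch: the writer is closed by saveModel, even if an exception occurs.
        Writer writer = Files.newBufferedWriter(Paths.get("model.txt"), StandardCharsets.ISO_8859_1);
        Linear.saveModel(writer, model);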
    • saveModel

      public static void saveModel(File modelFile, Model model) throws IOException
      Deprecated.
      Use saveModel(Path, Model) instead.
      Writes the model to the file using the ISO-8859-1 charset. It uses Locale.ENGLISH for number formatting.
      Throws:
      IOException
    • saveModel

      public static void saveModel(Path modelPath, Model model) throws IOException
      Writes the model to the file using the ISO-8859-1 charset. It uses Locale.ENGLISH for number formatting.
      Throws:
      IOException
    • GETI

      private static int GETI(byte[] y, int i)
    • solve_l2r_l1l2_svc

      private static int solve_l2r_l1l2_svc(Problem prob, Parameter param, double[] w, double Cp, double Cn, int max_iter)
      A coordinate descent algorithm for L1-loss and L2-loss SVM dual problems
        min_\alpha  0.5(\alpha^T (Q + D)\alpha) - e^T \alpha,
          s.t.      0 <= \alpha_i <= upper_bound_i,
      
        where Qij = yi yj xi^T xj and
        D is a diagonal matrix
      
       In L1-SVM case:
                    upper_bound_i = Cp if y_i = 1
                    upper_bound_i = Cn if y_i = -1
                    D_ii = 0
       In L2-SVM case:
                    upper_bound_i = INF
                    D_ii = 1/(2*Cp) if y_i = 1
                    D_ii = 1/(2*Cn) if y_i = -1
      
       Given:
       x, y, Cp, Cn
       eps is the stopping tolerance
      
       solution will be put in w
      
       this function returns the number of iterations
      
       See Algorithm 3 of Hsieh et al., ICML 2008
      
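      The same dual problem, restated in LaTeX purely for readability (a direct transcription of the formulation above, with U_i denoting upper_bound_i):

        \min_{\alpha} \; \tfrac{1}{2}\,\alpha^{T}(Q + D)\,\alpha - e^{T}\alpha
        \quad \text{s.t.} \quad 0 \le \alpha_i \le U_i,
        \qquad Q_{ij} = y_i y_j\, x_i^{T} x_j
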
    • GETI_SVR

      private static int GETI_SVR(int i)
    • solve_l2r_l1l2_svr

      private static int solve_l2r_l1l2_svr(Problem prob, Parameter param, double[] w, int max_iter)
      A coordinate descent algorithm for L1-loss and L2-loss epsilon-SVR dual problem
        min_\beta  0.5\beta^T (Q + diag(lambda)) \beta - p \sum_{i=1}^l |\beta_i| + \sum_{i=1}^l yi\beta_i,
          s.t.      -upper_bound_i <= \beta_i <= upper_bound_i,
      
        where Qij = xi^T xj and
        D is a diagonal matrix
      
       In L1-SVM case:
                    upper_bound_i = C
                    lambda_i = 0
       In L2-SVM case:
                    upper_bound_i = INF
                    lambda_i = 1/(2*C)
      
       Given:
       x, y, p, C
       eps is the stopping tolerance
      
       solution will be put in w
      
       this function returns the number of iterations
      
       See Algorithm 4 of Ho and Lin, 2012
      
    • solve_l2r_lr_dual

      private static int solve_l2r_lr_dual(Problem prob, Parameter param, double[] w, double Cp, double Cn, int max_iter)
      A coordinate descent algorithm for the dual of L2-regularized logistic regression problems
        min_\alpha  0.5(\alpha^T Q \alpha) + \sum \alpha_i log (\alpha_i) + (upper_bound_i - \alpha_i) log (upper_bound_i - \alpha_i) ,
           s.t.      0 <= \alpha_i <= upper_bound_i,
      
        where Qij = yi yj xi^T xj and
        upper_bound_i = Cp if y_i = 1
        upper_bound_i = Cn if y_i = -1
      
       Given:
       x, y, Cp, Cn
       eps is the stopping tolerance
      
       solution will be put in w
      
       this function returns the number of iterations
      
       See Algorithm 5 of Yu et al., MLJ 2010
      
      Since:
      1.7
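      Restated in LaTeX purely for readability (with U_i denoting upper_bound_i):

        \min_{\alpha} \; \tfrac{1}{2}\,\alpha^{T} Q\, \alpha
        + \sum_i \big[ \alpha_i \log \alpha_i + (U_i - \alpha_i) \log (U_i - \alpha_i) \big]
        \quad \text{s.t.} \quad 0 \le \alpha_i \le U_i
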
    • solve_l1r_l2_svc

      private static int solve_l1r_l2_svc(Problem prob_col, Parameter param, double[] w, double Cp, double Cn, double eps, int max_iter)
      A coordinate descent algorithm for L1-regularized L2-loss support vector classification
        min_w \sum |wj| + C \sum max(0, 1-yi w^T xi)^2,
      
       Given:
       x, y, Cp, Cn
       eps is the stopping tolerance
      
       solution will be put in w
      
       this function returns the number of iterations
      
       See Yuan et al. (2010) and appendix of LIBLINEAR paper, Fan et al. (2008)
      
       To not regularize the bias (i.e., regularize_bias = 0), a constant feature = 1
       must have been added to the original data. (see -B and -R option)
      
      Since:
      1.5
    • solve_l1r_lr

      private static int solve_l1r_lr(Problem prob_col, Parameter param, double[] w, double Cp, double Cn, double eps, int max_iter)
      A coordinate descent algorithm for L1-regularized logistic regression problems
        min_w \sum |wj| + C \sum log(1+exp(-yi w^T xi)),
      
       Given:
       x, y, Cp, Cn
       eps is the stopping tolerance
      
       solution will be put in w
      
       this function returns the number of iterations
      
       See Yuan et al. (2011) and appendix of LIBLINEAR paper, Fan et al. (2008)
      
       To not regularize the bias (i.e., regularize_bias = 0), a constant feature = 1
       must have been added to the original data. (see -B and -R option)
      
      Since:
      1.5
    • solve_oneclass_svm

      static int solve_oneclass_svm(Problem prob, Parameter param, double[] w, MutableDouble rho, int max_iter)
    • transpose

      static Problem transpose(Problem prob)
    • swap

      static void swap(double[] array, int idxA, int idxB)
    • swap

      static void swap(int[] array, int idxA, int idxB)
    • swap

      static void swap(IntArrayPointer array, int idxA, int idxB)
    • swap

      static void swap(Feature[] array, int idxA, int idxB)
    • train

      public static Model train(Problem prob, Parameter param)
      Throws:
      IllegalArgumentException - if the feature nodes of prob are not sorted in ascending order
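      A minimal, self-contained training sketch (the data, solver choice, and parameter values are illustrative only; note that the feature indices of every instance must be in ascending order):

        import de.bwaldvogel.liblinear.*;

        public class TrainExample {
            public static void main(String[] args) {
                Problem problem = new Problem();
                problem.l = 4;                      // number of training instances
                problem.n = 2;                      // number of features
                problem.x = new Feature[][] {
                    { new FeatureNode(1, 0.0), new FeatureNode(2, 1.0) },
                    { new FeatureNode(1, 0.1), new FeatureNode(2, 0.9) },
                    { new FeatureNode(1, 1.0), new FeatureNode(2, 0.0) },
                    { new FeatureNode(1, 0.9), new FeatureNode(2, 0.1) },
                };
                problem.y = new double[] { 1, 1, -1, -1 };  // class labels
                problem.bias = -1;                          // < 0 disables the bias term

                Parameter param = new Parameter(SolverType.L2R_L2LOSS_SVC_DUAL, 1.0, 0.01);
                Model model = Linear.train(problem, param);

                Feature[] instance = { new FeatureNode(1, 0.2), new FeatureNode(2, 0.8) };
                System.out.println("predicted label: " + Linear.predict(model, instance));
            }
        }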
    • checkProblemSize

      private static void checkProblemSize(int n, int nr_class)
      Verifies the size and throws an exception early if the problem is too large.
    • train_one

      private static void train_one(Problem prob, Parameter param, double[] w, double Cp, double Cn)
    • calc_start_C

      private static double calc_start_C(Problem prob, Parameter param)
    • calc_max_p

      private static double calc_max_p(Problem prob)
    • find_parameter_C

      public static ParameterCSearchResult find_parameter_C(Problem prob, Parameter param_tmp, double start_C, double max_C, int[] fold_start, int[] perm, Problem[] subprob, int nr_fold)
    • disableDebugOutput

      public static void disableDebugOutput()
    • enableDebugOutput

      public static void enableDebugOutput()
    • setDebugOutput

      public static void setDebugOutput(PrintStream debugOutput)
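      Debug output is controlled globally; a hedged sketch of redirecting liblinear's progress messages (the log file name is illustrative):

        // Sketch: may throw FileNotFoundException for the log file.
        Linear.setDebugOutput(new PrintStream("liblinear-debug.log"));
        // ... training writes its progress output to the log file ...
        Linear.disableDebugOutput();  // silence all debug output
        Linear.enableDebugOutput();   // restore the default debug output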
    • getVersion

      public static int getVersion()
    • resetRandom

      public static void resetRandom()
      Deprecated.
      Resets the PRNG.