Warm-Starting with ReHLine
==========================

This tutorial explains how to use warm-starting with ReHLine, a Python library for regularized empirical risk minimization with piecewise linear-quadratic losses (such as the SVM hinge loss used below), to solve sequences of similar optimization problems more efficiently.

Introduction
------------

Warm-starting is a technique that accelerates the convergence of an iterative optimization algorithm by initializing it with the solution from a previous run. It is particularly beneficial when you need to solve a sequence of related problems, for example when refitting a model over a range of regularization parameters.

Setup
-----

Before you begin, ensure the necessary packages are installed. You need the ``rehline`` library itself and ``numpy`` for numerical operations. Install them with pip if you have not already:

.. code-block:: bash

    pip install rehline numpy

Simulating the Dataset
----------------------

We first create a random dataset for binary classification:

.. code-block:: python

    import numpy as np

    n, d, C = 1000, 3, 0.5
    X = np.random.randn(n, d)
    beta0 = np.random.randn(d)
    y = np.sign(X.dot(beta0) + np.random.randn(n))

- **n** is the number of samples.
- **d** is the number of features.
- **C** is the regularization parameter that scales the loss term.
- **X** is the feature matrix.
- **y** is the vector of labels, generated as the sign of a linear combination of the features plus Gaussian noise.

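
Before fitting anything, it can help to confirm that the simulated data looks as expected. The following quick check uses plain ``numpy`` only, nothing ReHLine-specific:

.. code-block:: python

    # Quick sanity check of the simulated data: shapes and label balance.
    print(X.shape)                            # (1000, 3)
    print(np.unique(y, return_counts=True))   # roughly balanced -1/+1 labels
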

Using ReHLine Solver
--------------------

The low-level ``ReHLine_solver`` is called first with a cold start and then with a warm start:

.. code-block:: python

    from rehline._base import ReHLine_solver

    U = -(C*y).reshape(1, -1)
    V = (C*np.ones(n)).reshape(1, -1)
    res = ReHLine_solver(X, U, V)                        # Cold start
    res_ws = ReHLine_solver(X, U, V, Lambda=res.Lambda)  # Warm start

- **Cold start**: the solver starts from scratch, without any prior information.
- **Warm start**: the solver uses the solution from the cold-start run (``res.Lambda``) as its initial point.

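
To see the benefit, you can time the two calls and check that the warm-started run reaches the same solution. This is only an illustrative sketch using the standard-library ``time`` module; exact timings will vary with your machine and the solver's tolerance settings:

.. code-block:: python

    import time

    start = time.perf_counter()
    res_cold = ReHLine_solver(X, U, V)                          # solve from scratch
    cold_time = time.perf_counter() - start

    start = time.perf_counter()
    res_warm = ReHLine_solver(X, U, V, Lambda=res_cold.Lambda)  # reuse the previous Lambda
    warm_time = time.perf_counter() - start

    print(f"cold start: {cold_time:.4f}s, warm start: {warm_time:.4f}s")
    print(np.allclose(res_cold.Lambda, res_warm.Lambda))        # the two runs should agree up to tolerance

On a problem this small the absolute difference is tiny, but for larger problems or long parameter sweeps the warm-started run typically needs far fewer iterations.
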

Using ReHLine Class
-------------------

The ``ReHLine`` class is used to fit a model:

.. code-block:: python

    from rehline import ReHLine

    clf = ReHLine(verbose=1)
    clf.C = C
    clf.U = -y.reshape(1, -1)
    clf.V = np.ones(n).reshape(1, -1)
    clf.fit(X)        # Cold start

    clf.C = 2*C
    clf.warm_start = 1
    clf.fit(X)        # Warm start

- **Cold start**: the estimator is fitted from scratch on the data.
- **Warm start**: the estimator is fitted again with a different regularization parameter (``2*C``), reusing the previous solution as the starting point.

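
A common use of this pattern is computing solutions along a whole grid of regularization values, warm-starting each fit from the previous one. The sketch below assumes that, like other scikit-learn style estimators, the fitted ``ReHLine`` object exposes its solution through a ``coef_`` attribute; only the ``C``, ``warm_start``, and ``fit`` usage shown above is taken from the example itself:

.. code-block:: python

    # Sweep a grid of C values, warm-starting each fit from the previous solution.
    path_clf = ReHLine(verbose=0)
    path_clf.U = -y.reshape(1, -1)
    path_clf.V = np.ones(n).reshape(1, -1)

    coefs = []
    for k, C_k in enumerate([0.1, 0.5, 1.0, 2.0, 5.0]):
        path_clf.C = C_k
        path_clf.warm_start = 1 if k > 0 else 0   # only the first fit is a cold start
        path_clf.fit(X)
        coefs.append(path_clf.coef_.copy())       # coef_ assumed available (scikit-learn convention)

Because consecutive values of ``C`` give similar optima, each warm-started fit typically converges in far fewer iterations than fitting from scratch.
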

Using plqERM_Ridge
------------------

Finally, the higher-level ``plqERM_Ridge`` class is used in the same way:

.. code-block:: python

    from rehline import plqERM_Ridge

    clf = plqERM_Ridge(loss={'name': 'svm'}, C=C, verbose=1)
    clf.fit(X=X, y=y)     # Cold start

    clf.C = 2*C
    clf.warm_start = 1
    clf.fit(X=X, y=y)     # Warm start

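
As a final illustration, the refit at the larger ``C`` can be timed against a cold fit of a fresh estimator at the same value. This sketch is built only from the calls shown above; absolute timings depend on your machine, and on a problem this small the difference may be modest:

.. code-block:: python

    import time

    # Cold fit of a fresh estimator directly at the larger C value.
    clf_cold = plqERM_Ridge(loss={'name': 'svm'}, C=2*C)
    start = time.perf_counter()
    clf_cold.fit(X=X, y=y)
    cold_time = time.perf_counter() - start

    # Fit at C first, then warm-start the refit at 2*C from that solution.
    clf_warm = plqERM_Ridge(loss={'name': 'svm'}, C=C)
    clf_warm.fit(X=X, y=y)
    clf_warm.C = 2*C
    clf_warm.warm_start = 1
    start = time.perf_counter()
    clf_warm.fit(X=X, y=y)
    warm_time = time.perf_counter() - start

    print(f"cold fit at 2*C: {cold_time:.4f}s, warm-started refit: {warm_time:.4f}s")
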