|
8 | 8 | }, |
9 | 9 | "outputs": [], |
10 | 10 | "source": [ |
| 11 | + "%env KERAS_BACKEND=torch\n", |
11 | 12 | "!pip install autokeras" |
12 | 13 | ] |
13 | 14 | }, |
|
19 | 20 | }, |
20 | 21 | "outputs": [], |
21 | 22 | "source": [ |
22 | | - "import numpy as np\n", |
23 | 23 | "from keras.datasets import mnist\n", |
24 | 24 | "\n", |
25 | 25 | "import autokeras as ak" |
|
225 | 225 | "clf.fit(x_train, y_train, epochs=1)" |
226 | 226 | ] |
227 | 227 | }, |
228 | | - { |
229 | | - "cell_type": "markdown", |
230 | | - "metadata": { |
231 | | - "colab_type": "text" |
232 | | - }, |
233 | | - "source": [ |
234 | | - "## Data Format\n", |
235 | | - "The AutoKeras ImageClassifier is quite flexible for the data format.\n", |
236 | | - "\n", |
237 | | - "For the image, it accepts data formats both with and without the channel\n", |
238 | | - "dimension. The images in the MNIST dataset do not have the channel dimension.\n", |
239 | | - "Each image is a matrix with shape (28, 28). AutoKeras also accepts images of\n", |
240 | | - "three dimensions with the channel dimension last, e.g., (32, 32, 3), (28,\n", |
241 | | - "28, 1).\n", |
242 | | - "\n", |
243 | | - "For the classification labels, AutoKeras accepts both plain labels, i.e.\n", |
244 | | - "strings or integers, and one-hot encoded labels, i.e. vectors of 0s and\n", |
245 | | - "1s.\n", |
246 | | - "\n", |
247 | | - "So if you prepare your data in the following way, the ImageClassifier should\n", |
248 | | - "still work.\n" |
249 | | - ] |
250 | | - }, |
251 | | - { |
252 | | - "cell_type": "code", |
253 | | - "execution_count": 0, |
254 | | - "metadata": { |
255 | | - "colab_type": "code" |
256 | | - }, |
257 | | - "outputs": [], |
258 | | - "source": [ |
259 | | - "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n", |
260 | | - "\n", |
261 | | - "# Reshape the images to have the channel dimension.\n", |
262 | | - "x_train = x_train.reshape(x_train.shape + (1,))\n", |
263 | | - "x_test = x_test.reshape(x_test.shape + (1,))\n", |
264 | | - "\n", |
265 | | - "# One-hot encode the labels.\n", |
266 | | - "eye = np.eye(10)\n", |
267 | | - "y_train = eye[y_train]\n", |
268 | | - "y_test = eye[y_test]\n", |
269 | | - "\n", |
270 | | - "print(x_train.shape) # (60000, 28, 28, 1)\n", |
271 | | - "print(y_train.shape) # (60000, 10)\n", |
272 | | - "print(y_train[:3])\n", |
273 | | - "# array([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],\n", |
274 | | - "# [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", |
275 | | - "# [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])\n", |
276 | | - "\n", |
277 | | - "clf = ak.ImageClassifier(overwrite=True, max_trials=1)\n", |
278 | | - "# Feed the Dataset to the classifier.\n", |
279 | | - "clf.fit(x=x_train, y=y_train, epochs=1)\n", |
280 | | - "# Predict with the best model.\n", |
281 | | - "predicted_y = clf.predict(x=x_test)\n", |
282 | | - "# Evaluate the best model with testing data.\n", |
283 | | - "print(clf.evaluate(x=x_test, y=y_test))" |
284 | | - ] |
285 | | - }, |
286 | 228 | { |
287 | 229 | "cell_type": "markdown", |
288 | 230 | "metadata": { |
|
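The cell removed in this diff reshaped MNIST images to add a channel dimension and one-hot encoded the labels. A minimal standalone sketch of those two preprocessing steps, with `mnist.load_data()` replaced by random arrays of the same shape so it runs without a download (an assumption for illustration, not part of the commit):

```python
import numpy as np

# Stand-in for mnist.load_data(); shapes mirror the MNIST tutorial.
x_train = np.random.randint(0, 256, size=(100, 28, 28), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(100,))

# Add the trailing channel dimension: (N, 28, 28) -> (N, 28, 28, 1).
x_train = x_train.reshape(x_train.shape + (1,))

# One-hot encode integer labels by indexing into an identity matrix.
eye = np.eye(10)
y_train = eye[y_train]

print(x_train.shape)  # (100, 28, 28, 1)
print(y_train.shape)  # (100, 10)
```

Both forms — with or without the channel dimension, plain or one-hot labels — are accepted by `ak.ImageClassifier`, which is why the commit could drop this cell from the tutorial.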