Installing TensorFlow
!conda install -c apple tensorflow-deps -y
!pip install tensorflow-macos
!pip install tensorflow-metal
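For reference, the full environment setup can be sketched end to end as below. This is a minimal sketch, not a tested recipe: it assumes Miniforge (arm64 conda) is already installed, and the environment name `tf-metal` and the Python version pin are purely illustrative.

```shell
# Hypothetical end-to-end setup for TensorFlow on Apple Silicon.
# Assumes Miniforge (arm64 conda) is installed; the env name "tf-metal"
# and the python version pin are illustrative choices, not requirements.
conda create -n tf-metal python=3.9 -y
conda activate tf-metal
conda install -c apple tensorflow-deps -y
pip install tensorflow-macos tensorflow-metal
```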
Test TensorFlow
!pip install tensorflow_datasets
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Num GPUs Available: 1
%%time
import tensorflow as tf
import tensorflow_datasets as tfds
print("TensorFlow version:", tf.__version__)
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
tf.config.list_physical_devices('GPU')
(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, label):
    """Normalizes images: `uint8` -> `float32`."""
    return tf.cast(image, tf.float32) / 255., label
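The scaling in `normalize_img` is just a remap of `uint8` pixel values into [0, 1]; the same arithmetic can be illustrated in plain Python, with no TensorFlow needed:

```python
# Plain-Python illustration of the uint8 -> float32 rescaling done by
# normalize_img: divide each pixel by 255 so values land in [0.0, 1.0].
def normalize_pixels(pixels):
    return [p / 255.0 for p in pixels]

row = [0, 51, 128, 255]       # example uint8 pixel values
print(normalize_pixels(row))  # 0 maps to 0.0, 255 maps to 1.0
```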
batch_size = 128

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)

ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.batch(batch_size)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)
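With `batch_size = 128` and MNIST's fixed split sizes (60,000 train / 10,000 test images), the number of batches per epoch is a simple ceiling division; the `469/469` step counter in the training log is exactly this number:

```python
import math

# MNIST ships 60,000 training and 10,000 test images; batching into
# groups of 128 yields ceil(n / batch_size) steps per epoch.
batch_size = 128
train_steps = math.ceil(60_000 / batch_size)
test_steps = math.ceil(10_000 / batch_size)
print(train_steps, test_steps)  # 469 79
```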
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                           activation='relu'),
    tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
                           activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    # tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    # tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])
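Since no `input_shape` is given, Keras infers it from the first batch (28×28×1 MNIST images). The feature-map sizes can be checked by hand: each 3×3 "valid" convolution trims 2 pixels from each spatial dimension, and the 2×2 pooling halves the result. A small sketch of that arithmetic:

```python
# Trace the feature-map size through the model by hand:
# a 'valid' 3x3 conv maps size -> size - 2; 2x2 max-pool maps size -> size // 2.
size = 28                # MNIST images are 28x28 (1 channel)
size -= 2                # Conv2D(32, 3x3) -> 26x26
size -= 2                # Conv2D(64, 3x3) -> 24x24
size //= 2               # MaxPooling2D(2x2) -> 12x12
flat = size * size * 64  # Flatten over 64 channels -> 9216 features
print(size, flat)        # 12 9216
```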
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(0.001),
    metrics=['accuracy'],
)
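`sparse_categorical_crossentropy` takes integer class labels directly (no one-hot encoding) and scores a prediction as the negative log of the probability the softmax assigned to the true class. A minimal pure-Python sketch of the per-example loss:

```python
import math

# Sparse categorical cross-entropy for a single example: -log(p[label]),
# where p is the softmax output and label is an integer class id.
def sparse_cce(probs, label):
    return -math.log(probs[label])

probs = [0.05, 0.90, 0.05]   # toy softmax output over 3 classes
print(sparse_cce(probs, 1))  # confident and correct -> small loss
print(sparse_cce(probs, 0))  # wrong class -> large loss
```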
model.fit(
    ds_train,
    epochs=12,
    validation_data=ds_test,
)
TensorFlow version: 2.9.2
Num GPUs Available: 1
2022-08-07 14:48:03.944151: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "NOT_FOUND: Could not locate the credentials file.". Retrieving token from GCE failed with "FAILED_PRECONDITION: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Could not resolve host: metadata".
Downloading and preparing dataset 11.06 MiB (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB) to ~/tensorflow_datasets/mnist/3.0.1...
Dataset mnist downloaded and prepared to ~/tensorflow_datasets/mnist/3.0.1. Subsequent calls will reuse this data.
Metal device set to: Apple M1
WARNING:tensorflow:AutoGraph could not transform <function normalize_img at 0x16e01add0> and will run it as-is.
Cause: Unable to locate the source code of <function normalize_img at 0x16e01add0>. Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
2022-08-07 14:48:11.226461: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-08-07 14:48:11.226637: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Epoch 1/12
2022-08-07 14:48:11.622382: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz 2022-08-07 14:48:11.622818: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2022-08-07 14:48:23.217336: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
469/469 [==============================] - 13s 24ms/step - loss: 0.1616 - accuracy: 0.9507 - val_loss: 0.0519 - val_accuracy: 0.9836
Epoch 2/12
469/469 [==============================] - 11s 22ms/step - loss: 0.0434 - accuracy: 0.9870 - val_loss: 0.0414 - val_accuracy: 0.9871
Epoch 3/12
469/469 [==============================] - 11s 23ms/step - loss: 0.0282 - accuracy: 0.9911 - val_loss: 0.0336 - val_accuracy: 0.9893
Epoch 4/12
469/469 [==============================] - 11s 23ms/step - loss: 0.0188 - accuracy: 0.9940 - val_loss: 0.0333 - val_accuracy: 0.9900
Epoch 5/12
469/469 [==============================] - 11s 23ms/step - loss: 0.0118 - accuracy: 0.9963 - val_loss: 0.0356 - val_accuracy: 0.9888
Epoch 6/12
469/469 [==============================] - 13s 29ms/step - loss: 0.0110 - accuracy: 0.9963 - val_loss: 0.0367 - val_accuracy: 0.9900
Epoch 7/12
469/469 [==============================] - 12s 25ms/step - loss: 0.0079 - accuracy: 0.9975 - val_loss: 0.0387 - val_accuracy: 0.9888
Epoch 8/12
469/469 [==============================] - 10s 22ms/step - loss: 0.0065 - accuracy: 0.9980 - val_loss: 0.0358 - val_accuracy: 0.9906
Epoch 9/12
469/469 [==============================] - 11s 24ms/step - loss: 0.0045 - accuracy: 0.9983 - val_loss: 0.0476 - val_accuracy: 0.9894
Epoch 10/12
469/469 [==============================] - 11s 23ms/step - loss: 0.0047 - accuracy: 0.9984 - val_loss: 0.0519 - val_accuracy: 0.9894
Epoch 11/12
469/469 [==============================] - 11s 23ms/step - loss: 0.0042 - accuracy: 0.9988 - val_loss: 0.0449 - val_accuracy: 0.9907
Epoch 12/12
469/469 [==============================] - 11s 24ms/step - loss: 0.0051 - accuracy: 0.9983 - val_loss: 0.0390 - val_accuracy: 0.9908
CPU times: user 51.8 s, sys: 36.5 s, total: 1min 28s
Wall time: 2min 23s
<keras.callbacks.History at 0x16dd4ca60>