What are XLA_GPU and XLA_CPU in TensorFlow?

I can list the GPU devices with the following TensorFlow code:

    import tensorflow as tf
    from tensorflow.python.client import device_lib
    print(device_lib.list_local_devices())

The result is:

    [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 17897160860519880862,
     name: "/device:XLA_GPU:0" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 9751861134541508701 physical_device_desc: "device: XLA_GPU device",
     name: "/device:XLA_CPU:0" device_type: "XLA_CPU" memory_limit: 17179869184 locality { } incarnation: 5368380567397471193 physical_device_desc: "device: XLA_CPU device",
     name: "/device:GPU:0" device_type: "GPU" memory_limit: 21366299034 locality { bus_id: 1 links { link { device_id: 1 type: "StreamExecutor" strength: 1 } } } incarnation: 7110958745101815531 physical_device_desc: "device: 0, name: Tesla P40, pci bus id: 0000:02:00.0, compute capability: 6.1",
     name: "/device:GPU:1" device_type: "GPU" memory_limit: 17336821351 locality { bus_id: 1 links { link { type: "StreamExecutor" strength: 1 } } } incarnation: 3366465227705362600 physical_device_desc: "device: 1, name: Tesla P40, pci bus id: 0000:03:00.0, compute capability: 6.1",
     name: "/device:GPU:2" device_type: "GPU" memory_limit: 22590563943 locality { bus_id: 2 numa_node: 1 links { link { device_id: 3 type: "StreamExecutor" strength: 1 } } } incarnation: 8774017944003495680 physical_device_desc: "device: 2, name: Tesla P40, pci bus id: 0000:83:00.0, compute capability: 6.1",
     name: "/device:GPU:3" device_type: "GPU" memory_limit: 22590563943 locality { bus_id: 2 numa_node: 1 links { link { device_id: 2 type: "StreamExecutor" strength: 1 } } } incarnation: 2007348906807258050 physical_device_desc: "device: 3, name: Tesla P40, pci bus id: 0000:84:00.0, compute capability: 6.1"]

I want to know: what are XLA_GPU and XLA_CPU?

XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that optimizes TensorFlow computations. The results are improvements in speed, memory usage, and portability across server and mobile platforms.

The GPU backend currently supports NVIDIA GPUs through the LLVM NVPTX backend; the CPU backend supports multiple CPU ISAs.
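As a minimal sketch of how XLA is actually invoked in recent TensorFlow 2.x releases (the function name and values here are made up for illustration; the relevant API is the `jit_compile=True` argument to `tf.function`):

```python
import tensorflow as tf

# jit_compile=True asks TensorFlow to compile this function as a single
# fused XLA computation instead of dispatching each op individually.
@tf.function(jit_compile=True)
def scaled_sum(x, y):
    return tf.reduce_sum(x * y + y)

x = tf.ones((4, 4))           # 16 elements of 1.0
y = tf.ones((4, 4)) * 2.0     # 16 elements of 2.0
print(float(scaled_sum(x, y)))  # 16 * (1*2 + 2) = 64.0
```

Whether the compiled computation runs on the XLA CPU or GPU backend depends on which devices are visible, which is exactly what the XLA_CPU/XLA_GPU entries in the listing above reflect.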

Also, see this