SKU/Artículo: AMZ-B0DFMC1GQF

Seeed Studio

Seeed Studio Coral M.2 Accelerator A+E Key

Size:

A+E key

Product details
Availability:
In stock
Packaged weight:
0.15 kg
Returns:
Condition:
New
Product from:
Amazon
Ships from:
USA

About this product
  • The Coral M.2 Accelerator is an M.2 module that brings the Edge TPU coprocessor to existing systems and products. The Edge TPU is a small ASIC designed by Google that provides high performance ML inferencing with low power requirements: it's capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS, in a power efficient manner. This on-device processing reduces latency, increases data privacy, and removes the need for constant high-bandwidth connectivity. The M.2 Accelerator is a dual-key M.2 card (either A+E or B+M keys), designed to fit any compatible M.2 slot. This form-factor enables easy integration into ARM and x86 platforms so you can add local ML acceleration to products such as embedded platforms, mini-PCs, and industrial gateways.
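As a quick sanity check on the quoted figures, the 2 TOPS-per-watt rating follows directly from the 0.5 W-per-TOPS draw:

```python
# Cross-check of the Edge TPU power figures quoted above.
tops = 4.0            # peak throughput: tera-operations per second
watts_per_tops = 0.5  # power drawn per TOPS

total_watts = tops * watts_per_tops  # total draw at peak: 2.0 W
tops_per_watt = tops / total_watts   # efficiency: 2.0 TOPS per watt

print(total_watts, tops_per_watt)  # → 2.0 2.0
```

So at full throughput the module draws about 2 W in total, which is where the "2 TOPS per watt" efficiency figure comes from.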
AR$293.779
49% OFF
AR$150.653

IMPORT EASILY

By purchasing this product you can deduct VAT with your RUT number


Pay quickly and easily with Mercado Pago or MODO

Arrives in 8 to 12 business days with shipping
Delivery is guaranteed
This product travels from the USA to your hands

More details

  • Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner.
  • Works with Debian Linux: Integrates with any Debian-based Linux system with a compatible card module slot.
  • Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
  • Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.